Our confidence in measurement often fails, and we reject
it. "Last night they got the elephant." Our favorite explanation
for such an event is to ascribe it to luck, good or bad as the case
may be.
If everything is a matter of luck, risk management is a meaningless exercise. Invoking luck obscures truth, because it separates an event from its cause.
When we say that someone has fallen on bad luck, we relieve that person of any responsibility for what has happened. When we say that someone has had good luck, we deny that person credit for the effort that might have led to the happy outcome. But how sure can we be? Was it fate or choice that decided the outcome?
Until we can distinguish between an event that is truly random and an event that is the result of cause and effect, we will never know whether what we see is what we'll get, nor how we got what we got. When we take a risk, we are betting on an outcome that will result from a decision we have made, though we do not know for certain what the outcome will be. The essence of risk management lies in maximizing the areas where we have some control over the outcome while minimizing the areas where we have absolutely no control over the outcome and the linkage between effect and cause is hidden from us.

Just what do we mean by luck? Laplace was convinced that there is no such thing as luck, or hazard as he called it. In his Essai philosophique sur les probabilités, he declared:
Present events are connected with preceding ones by a tie based upon the evident principle that a thing cannot occur without a cause that produces it.... All events, even those which on account of their insignificance do not seem to follow the great laws of nature, are a result of it just as necessarily as the revolutions of the sun.1
This statement echoes an observation by Jacob Bernoulli that if all events throughout eternity could be repeated, we would find that every one of them occurred in response to "definite causes" and that even the events that seemed most fortuitous were the result of "a certain necessity, or, so to say, FATE." We can also hear de Moivre, submitting to the power of ORIGINAL DESIGN. Laplace, surmising that there was a "vast intelligence" capable of understanding all causes and effects, obliterated the very idea of uncertainty. In the spirit of his time, he predicted that human beings would achieve that same level of intelligence, citing the advances already made in astronomy, mechanics, geometry, and gravity. He ascribed those advances to "the tendency, peculiar to the human race [that] renders it superior to animals; and their progress in this respect distinguishes nations and ages and constitutes their true glory."2
Laplace admitted that it is sometimes hard to find a cause where there seems to be none, but he also warned against the tendency to assign a particular cause to an outcome when in fact only the laws of probability are at work. He offered this example: "On a table, we see the letters arranged in this order, CONSTANTINOPLE, and we judge that this arrangement is not the result of chance. [Yet] if this word were not employed in any language we should not suspect it came from any particular cause."3 If the letters happened to be BZUXRQVICPRGAB, we would not give the sequence of letters a second thought, even though the odds on BZUXRQVICPRGAB's showing up in a random drawing are precisely the same as the odds on CONSTANTINOPLE's showing up. We would be surprised if we drew the number 1,000 out of a bottle containing 1,000 numbers; yet the probability of drawing 457 is also only one in a thousand. "The more extraordinary the event," Laplace concludes, "the greater the need of it being supported by strong proofs."4
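Laplace's point, that a meaningful-looking arrangement is no less probable than gibberish of the same length, is easy to verify with a little arithmetic. Here is a minimal sketch in Python (the function name is mine, not Laplace's); exact fractions are used so no rounding obscures the comparison:

```python
from fractions import Fraction

def sequence_probability(word: str, alphabet_size: int = 26) -> Fraction:
    """Probability of drawing one specific sequence of letters,
    letter by letter, from a 26-letter alphabet with replacement."""
    return Fraction(1, alphabet_size) ** len(word)

# The two 14-letter arrangements are exactly equally improbable;
# only our pattern-seeking minds treat them differently.
p_meaningful = sequence_probability("CONSTANTINOPLE")
p_gibberish = sequence_probability("BZUXRQVICPRGAB")
assert p_meaningful == p_gibberish

# Likewise, every ticket in a bottle of 1,000 numbers has probability 1/1,000,
# whether it reads 1,000 or 457.
assert Fraction(1, 1000) == Fraction(1, 1000)

print(p_meaningful)  # 1 over 26 to the 14th power
```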
In the month of October 1987, the stock market fell by more than 20%. That was only the fourth time since 1926 that the market had dropped by more than 20% in a single month. But the 1987 crash came out of nowhere. There is no agreement on what caused it, though theories abound. It could not have occurred without a cause, and yet that cause is obscure. Despite its extraordinary character, no one could come up with "strong proofs" of its origins.

Another French mathematician, born about a century after Laplace, gave further emphasis to the concept of cause and effect and to the importance of information in decision-making. Jules-Henri Poincare (1854-1912) was, according to James Newman,
... a French savant who looked alarmingly like a French savant. He was short and plump, carried an enormous head set off by a thick spade beard and splendid mustache, was myopic, stooped, distraught in speech, absent-minded and wore pince-nez glasses attached to a black silk ribbon.5
Poincare was another mathematician in the long line of child prodigies that we have met along the way. He grew up to be the leading French mathematician of his time.
Nevertheless, Poincare made the great mistake of underestimating the accomplishments of a student named Louis Bachelier, who earned a degree in 1900 at the Sorbonne with a dissertation titled "The Theory of Speculation."6 Poincare, in his review of the thesis, observed that "M. Bachelier has evidenced an original and precise mind [but] the subject is somewhat remote from those our other candidates are in the habit of treating." The thesis was awarded "mention honorable," rather than the highest award of "mention tres honorable," which was essential for anyone hoping to find a decent job in the academic community. Bachelier never found such a job.
Bachelier's thesis came to light only by accident more than fifty years after he wrote it. Young as he was at the time, the mathematics he developed to explain the pricing of options on French government bonds anticipated by five years Einstein's discovery of the motion of electrons, which, in turn, provided the basis for the theory of the random walk in finance. Moreover, his description of the process of speculation anticipated many of the theories observed in financial markets today. "Mention honorable"!
The central idea of Bachelier's thesis was this: "The mathematical expectation of the speculator is zero." The ideas that flowed from that startling statement are now evident in everything from trading strategies and the use of derivative instruments to the most sophisticated techniques of portfolio management. Bachelier knew that he was onto something big, despite the indifference he was accorded. "It is evident," he wrote, "that the present theory solves the majority of problems in the study of speculation by the calculus of probability."
But we must return to Poincare, Bachelier's nemesis. Like Laplace, Poincare believed that everything has a cause, though mere mortals are incapable of divining all the causes of all the events that occur. "A mind infinitely powerful, infinitely well-informed about the laws of nature, could have foreseen [all events] from the beginning of the centuries. If such a mind existed, we could not play with it at any game of chance, for we would lose."7
To dramatize the power of cause-and-effect, Poincare suggests what the world would be like without it. He cites a fantasy imagined by Camille Flammarion, a contemporary French astronomer, in which an observer travels into space at a velocity greater than the speed of light:
[F]or him time would have changed sign [from positive to negative]. History would be turned about, and Waterloo would precede Austerlitz.... [A]ll would seem to him to come out of a sort of chaos in unstable equilibrium. All nature would appear to him delivered over to chance.8
But in a cause-and-effect world, if we know the causes we can predict the effects. So "what is chance for the ignorant is not chance for the scientist. Chance is only the measure of our ignorance."9
But then Poincare asks whether that definition of chance is totally satisfactory. After all, we can invoke the laws of probability to make predictions. We never know which team is going to win the World Series, but Pascal's Triangle demonstrates that a team that loses the first game has a probability of 22/64 of winning four games before their opponents have won three more. There is one chance in six that the roll of a single die will come up 3. The weatherman predicts today that the probability of rain tomorrow is 30%. Bachelier demonstrates that the odds that the price of a stock will move up on the next trade are precisely 50%. Poincare points out that the director of a life insurance company is ignorant of the time when each of his policyholders will die, but "he relies upon the calculus of probabilities and on the law of great numbers, and he is not deceived, since he distributes dividends to his stockholders."10
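The World Series figure can be recovered from Pascal's Triangle, or equivalently from binomial coefficients. A quick sketch, assuming evenly matched teams as the text implies (the function name is mine): after losing the first game, a team needs four wins before its opponent collects three more, so at most six further games decide the matter, and we count the equally likely six-game sequences in which the trailing team wins at least four.

```python
from fractions import Fraction
from math import comb

def win_probability(wins_needed: int, losses_allowed: int) -> Fraction:
    """Chance of collecting wins_needed wins before the opponent
    collects losses_allowed, with fair 50-50 games."""
    remaining = wins_needed + losses_allowed - 1  # 6 games in this case
    favourable = sum(comb(remaining, k) for k in range(wins_needed, remaining + 1))
    return Fraction(favourable, 2 ** remaining)

print(win_probability(4, 3))  # 11/32, which is 22/64
```

The same routine solves the general "problem of points" that occupied Pascal and Fermat.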
Poincare also points out that some events that appear to be fortuitous are not; instead, their causes stem from minute disturbances. A cone perfectly balanced on its apex will topple over if there is the least defect in symmetry; and even if there is no defect, the cone will topple in response to "a very slight tremor, a breath of air." That is why, Poincare explained, meteorologists have such limited success in predicting the weather:
Many persons find it quite natural to pray for rain or shine when they would think it ridiculous to pray for an eclipse.... [O]ne-tenth of a degree at any point, and the cyclone bursts here and not there, and spreads its ravages over countries it would have spared. This we could have foreseen if we had known that tenth of a degree, but ... all seems due to the agency of chance.11
Even spins of a roulette wheel and throws of dice will vary in response to slight differences in the energy that puts them in motion. Unable to observe such tiny differences, we assume that the outcomes they produce are random, unpredictable. As Poincare observes about roulette, "This is why my heart throbs and I hope everything from luck."12
Chaos theory, a more recent development, is based on a similar premise. According to this theory, much of what looks like chaos is in truth the product of an underlying order, in which insignificant perturbations are often the cause of predestined crashes and long-lived bull markets. The New York Times of July 10, 1994, reported a fanciful application of chaos theory by a Berkeley computer scientist named James Crutchfield, who "estimated that the gravitational pull of an electron, randomly shifting position at the edge of the Milky Way, can change the outcome of a billiard game on Earth."

Laplace and Poincare recognized that we sometimes have too little information to apply the laws of probability. Once, at a professional investment conference, a friend passed me a note that read as follows:
We can assemble big pieces of information and little pieces, but we can never get all the pieces together. We never know for sure how good our sample is. That uncertainty is what makes arriving at judgments so difficult and acting on them so risky. We cannot even be 100% certain that the sun will rise tomorrow morning: the ancients who predicted that event were themselves working with a limited sample of the history of the universe.
When information is lacking, we have to fall back on inductive reasoning and try to guess the odds. John Maynard Keynes, in a treatise on probability, concluded that in the end statistical concepts are often useless: "There is a relation between the evidence and the event considered, but it is not necessarily measurable."13
Inductive reasoning leads us to some curious conclusions as we try to cope with the uncertainties we face and the risks we take. Some of the most impressive research on this phenomenon has been done by Nobel Laureate Kenneth Arrow. Arrow was born at the end of the First World War and grew up in New York City at a time when the city was the scene of spirited intellectual activity and controversy. He attended public school and City College and went on to teach at Harvard and Stanford. He now occupies two emeritus professorships at Stanford, one in operations research and one in economics.
Early on, Arrow became convinced that most people overestimate the amount of information that is available to them. The failure of economists to comprehend the causes of the Great Depression at the time demonstrated to him that their knowledge of the economy was "very limited." His experience as an Air Force weather forecaster during the Second World War "added the news that the natural world was also unpredictable." 14 Here is a more extended version of the passage from which I quoted in the Introduction:
To me our knowledge of the way things work, in society or in nature, comes trailing clouds of vagueness. Vast ills have followed a belief in certainty, whether historical inevitability, grand diplomatic designs, or extreme views on economic policy. When developing policy with wide effects for an individual or society, caution is needed because we cannot predict the consequences.15
One incident that occurred while Arrow was forecasting the weather illustrates both uncertainty and the human unwillingness to accept it. Some officers had been assigned the task of forecasting the weather a month ahead, but Arrow and his statisticians found that their long-range forecasts were no better than numbers pulled out of a hat. The forecasters agreed and asked their superiors to be relieved of this duty. The reply was: "The Commanding General is well aware that the forecasts are no good. However, he needs them for planning purposes."16
In an essay on risk, Arrow asks why most of us gamble now and then and why we regularly pay premiums to an insurance company. The mathematical probabilities indicate that we will lose money in both instances. In the case of gambling, it is statistically impossible to expect, though possible to achieve, more than a break-even, because the house edge tilts the odds against us. In the case of insurance, the premiums we pay exceed the statistical odds that our house will burn down or that our jewelry will be stolen.
Why do we enter into these losing propositions? We gamble because we are willing to accept the large probability of a small loss in the hope that the small probability of scoring a large gain will work in our favor; for most people, in any case, gambling is more entertainment than risk. We buy insurance because we cannot afford to take the risk of losing our home to fire, or our life before our time. That is, we prefer a gamble that has 100% odds on a small loss (the premium we must pay) but a small chance of a large gain (if catastrophe strikes) to a gamble with a certain small gain (saving the cost of the insurance premium) but with uncertain but potentially ruinous consequences for us or our family.
Arrow won his Nobel Prize in part as a result of his speculations about an imaginary insurance company or other risk-sharing institution that would insure against any loss of any kind and of any magnitude, in what he describes as a "complete market." The world, he concluded, would be a better place if we could insure against every future possibility. Then people would be more willing to engage in risk-taking, without which economic progress is impossible.
Often we are unable to conduct enough trials or take enough samples to employ the laws of probability in making decisions. We decide on the basis of ten tosses of the coin instead of a hundred. Consequently, in the absence of insurance, just about any outcome seems to be a matter of luck. Insurance, by combining the risks of many people, enables each individual to enjoy the advantages provided by the Law of Large Numbers.
In practice, insurance is available only when the Law of Large Numbers is observed. The law requires that the risks insured must be both large in number and independent of one another, like successive deals in a game of poker.
"Independent" means several things: it means that the cause of a fire, for example, must be independent of the actions of the policyholder. It also means that the risks insured must not be interrelated, like the probable movement of any one stock at a time when the whole stock market is taking a nose dive, or the destruction caused by a war. Finally, it means that insurance will be available only when there is a rational way to calculate the odds of loss, a restriction that rules out insurance that a new dress style will be a smashing success or that the nation will be at war at some point in the next ten years.

Consequently, the number of risks that can be insured against is far smaller than the number of risks we take in the course of a lifetime. We often face the possibility that we will make the wrong choice and end up regretting it. The premium we pay the insurance company is only one of many certain costs we incur in order to avoid the possibility of a larger, uncertain loss, and we go to great lengths to protect ourselves from the consequences of being wrong. Keynes once asked, "[Why] should anyone outside a lunatic asylum wish to hold money as a store of wealth?" His answer: "The possession of actual money lulls our disquietude; and the premium we require to make us part with money is the measure of our disquietude."17
In business, we seal a deal by signing a contract or by shaking hands. These formalities prescribe our future behavior even if conditions change in such a way that we wish we had made different arrangements. At the same time, they protect us from being harmed by the people on the other side of the deal. Firms that produce goods with volatile prices, such as wheat or gold, protect themselves from loss by entering into commodity futures contracts, which enable them to sell their output even before they have produced it. They pass up the possibility of selling later at a higher price in order to avoid uncertainty about the price they will receive.
In 1971, Kenneth Arrow, in association with fellow economist Frank Hahn, pointed up the relationships between money, contracts, and uncertainty. Contracts would not be written in money terms "if we consider an economy without a past or a future."18 But the past and the future are to the economy what woof and warp are to a fabric. We make no decision without reference to a past that we understand with some degree of certainty and to a future about which we have no certain knowledge. Contracts and liquidity protect us from unwelcome consequences even when we are coping with Arrow's clouds of vagueness.
Some people guard against uncertain outcomes in other ways. They call a limousine service to avoid the uncertainty of riding in a taxi or taking public transportation. They have burglar alarm systems installed in their homes. Reducing uncertainty is a costly business.

Arrow's idea of a "complete market" was based on his sense of the value of human life. "The basic element in my view of the good society," he wrote, "is the centrality of others.... These principles imply a general commitment to freedom.... Improving economic status and opportunity ... is a basic component of increasing freedom."19 But the fear of loss sometimes constrains our choices. That is why Arrow applauds insurance and risk-sharing devices like commodity futures contracts and public markets for stocks and bonds. Such facilities encourage investors to hold diversified portfolios instead of putting all their eggs in one basket.
There is a huge gap between Laplace and Poincare on the one hand and Arrow and his contemporaries on the other. After the catastrophe of the First World War, the dream vanished that some day human beings would know everything they needed to know and that certainty would replace uncertainty. Instead, the explosion of knowledge over the years has served only to make life more uncertain and the world more difficult to understand.
Seen in this light, Arrow is the most modern of the characters in our story so far. Arrow's focus is not on how probability works or how observations regress to the mean. Rather, he focuses on how we make decisions under conditions of uncertainty and how we live with the decisions we have made. He has brought us to the point where we can take a more systematic look at how people tread the path between risks to be faced and risks to be taken. The authors of the Port-Royal Logic and Daniel Bernoulli both sensed what lines of analysis in the field of risk might lie ahead, but Arrow is the father of the concept of risk management as an explicit form of practical art.
The recognition of risk management as a practical art rests on a simple cliche with the most profound consequences: when our world was created, nobody remembered to include certainty. We are never certain; we are always ignorant to some degree. Much of the information we have is either incorrect or incomplete.
Suppose a stranger invites you to bet on coin-tossing. She assures you that the coin she hands you can be trusted. How do you know whether she is telling the truth? You decide to test the coin by tossing it ten times before you agree to play.
When it comes up eight heads and two tails, you say it must be loaded. The stranger hands you a statistics book, which says that this lop-sided result may occur about one out of every nine times in tests of ten tosses each.
Though chastened, you invoke the teachings of Jacob Bernoulli and request sufficient time to give the coin a hundred tosses. It comes up heads eighty times! The statistics book tells you that the probability of getting eighty heads in a hundred tosses is so slight that you will have to count the number of zeroes following the decimal point. The probability is about one in a billion.
Yet you are still not 100% certain that the coin is loaded. Nor will you ever be 100% certain, even if you were to go on tossing it for a hundred years. One chance in a billion ought to be enough to convince you that this is a dangerous partner to play games with, but the possibility remains that you are doing the woman an injustice. Socrates said that likeness to truth is not truth, and Jacob Bernoulli insisted that moral certainty is less than certainty.
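Both of the book's figures, the one-in-nine and the one-in-a-billion, follow from the binomial distribution. A brief sketch (the function name is mine) that counts how often a fair coin would give a result as lopsided as the one observed, on either side:

```python
from math import comb

def tail_probability(n: int, k: int) -> float:
    """Chance that n tosses of a fair coin give k or more heads
    OR k or more tails (two-sided, for k > n/2 the tails don't overlap)."""
    one_side = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return 2 * one_side

print(tail_probability(10, 8))    # 0.109375, about one time in nine
print(tail_probability(100, 80))  # on the order of one in a billion
```

So eight heads in ten tosses is merely suspicious, while eighty in a hundred is damning; yet, as the text insists, neither result delivers certainty.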
Under conditions of uncertainty, the choice is not between rejecting a hypothesis and accepting it, but between reject and not-reject. You can decide that the probability that you are wrong is so small that you should not reject the hypothesis. You can decide that the probability that you are wrong is so large that you should reject the hypothesis. But with any probability short of zero that you are wrong-certainty rather than uncertainty-you cannot accept a hypothesis.
This powerful notion separates most valid scientific research from hokum. To be valid, hypotheses must be subject to falsification; that is, they must be testable in such fashion that the alternative between reject and not-reject is clear and specific and that the probability is measurable. The statement "He is a nice man" is too vague to be testable. The statement "That man does not eat chocolate after every meal" is falsifiable in the sense that we can gather evidence to show whether the man has or has not eaten chocolate after every meal in the past. If the evidence covers only a week, the probability that we could reject the hypothesis (we doubt that he does not eat chocolate after every meal) will be higher than if the evidence covers a year. The result of the test will be not-reject if no evidence of regular consumption of chocolate is available. But even if the lack of evidence extends over a long period of time, we cannot say with certainty that the man will never start eating chocolate after every meal in the future. Unless we have spent every single minute of his life with him, we could never be certain that he has not eaten chocolate regularly in the past.
Criminal trials provide a useful example of this principle. Under our system of law, criminal defendants do not have to prove their innocence; there is no such thing as a verdict of innocence. Instead, the hypothesis to be established is that the defendant is guilty, and the prosecution's job is to persuade the members of the jury that they should not reject the hypothesis of guilt. The goal of the defense is simply to persuade the jury that sufficient doubt surrounds the prosecution's case to justify rejecting that hypothesis. That is why the verdict delivered by juries is either "guilty" or "not guilty."

The jury room is not the only place where the testing of a hypothesis leads to intense debate over the degree of uncertainty that would justify rejecting it. That degree of uncertainty is not prescribed. In the end, we must arrive at a subjective decision on how much uncertainty is acceptable before we make up our minds.
For example, managers of mutual funds face two kinds of risk. The first is the obvious risk of poor performance. The second is the risk of failing to measure up to some benchmark that is known to potential investors.
The accompanying chart20 shows the total annual pretax rate of return (dividends paid plus price change) from 1983 through 1995 to a stockholder in the American Mutual Fund, one of the oldest and largest equity mutual funds in the business. The American Mutual performance is plotted as a line with dots, and the performance of the Standard & Poor's Composite Index of 500 Stocks is represented by the bars.

Although American Mutual tracks the S&P 500 closely, it had higher returns in only three out of the thirteen years: in 1983 and 1993, when American Mutual rose by more, and in 1990, when it fell by less. In ten years, American Mutual did about the same as or earned less than the S&P.
Was this just a string of bad luck, or do the managers of American Mutual lack the skill to outperform an unmanaged conglomeration of 500 stocks? Note that, since American Mutual is less volatile than the S&P, its performance was likely to lag in the twelve out of thirteen years in which the market was rising. The Fund's performance might look a lot better in years when the market was declining or not moving up or down.
Nevertheless, when we put these data through a mathematical stress test to determine the significance of these results, we find that American Mutual's managers probably did lack skill.21 There is only a 20% probability that the results were due to chance. To put it differently, if we ran this test over five other thirteen-year periods, we would expect American Mutual to underperform the S&P 500 in four of the periods.
Many observers would disagree, insisting that thirteen years is too small a sample to support so broad a generalization. Moreover, a 20% probability is not small, though less than 50%. The current convention in the world of finance is that we should be 95% certain that something is "statistically significant" (the modern equivalent of moral certainty) before we accept what the numbers indicate. Jacob Bernoulli said that 1,000 chances out of 1,001 were required for one to be morally certain; we require only one chance in twenty that what we observe is a matter of chance.
But if we cannot be 95% certain of anything like this on the basis of only thirteen observations, how many observations would we need? Another stress test reveals that we would need to track American Mutual against the S&P 500 for about thirty years before we could be 95% certain that underperformance of this magnitude was not just a matter of luck. As that test is a practical impossibility, the best judgment is that the American Mutual managers deserve the benefit of the doubt; their performance was acceptable under the circumstances.
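The logic behind such a "stress test" can be sketched in a few lines. This is only an illustration with hypothetical numbers, not Bernstein's actual calculation: if a fund lags its benchmark by an average of m percentage points a year, with a tracking error (standard deviation of the annual gap) of s points, a one-sided t-type test needs roughly n ≥ (z·s/m)² annual observations before the lag is significant at the 95% level (z ≈ 1.645).

```python
from math import ceil

def years_needed(mean_lag: float, tracking_error: float, z: float = 1.645) -> int:
    """Smallest number of annual observations for which the average lag
    would be statistically significant at the level implied by z."""
    return ceil((z * tracking_error / mean_lag) ** 2)

# Hypothetical inputs: a 1.7-point average lag with a 5.7-point tracking error.
print(years_needed(1.7, 5.7))  # 31 years, roughly the thirty in the text
```

The formula also shows why the more volatile AIM fund discussed below would need an even longer record: the required n grows with the square of the tracking error.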
The next chart shows a different picture. Here we see the relative performance of a small, aggressive fund called AIM Constellation. This fund was a lot more volatile during these years than either the S&P Index or American Mutual. Note that the vertical scale in this chart is twice the height of the vertical scale in the preceding chart. AIM had a disastrous year in 1984, but in five other years it outperformed the S&P 500 by a wide margin. The average annual return for AIM over the thirteen years was 19.8% as compared with 16.7% for the S&P 500 and 15.0% for American Mutual.

Is this record the result of luck or skill? Despite the wide spread in returns between AIM and the S&P 500, the greater volatility of AIM makes this a tough question to answer. In addition, AIM did not track the S&P 500 as faithfully as American Mutual did: AIM went down one year when the S&P 500 was rising, and it earned as much in 1986 as in 1985 even though the S&P was earning less. The pattern is so irregular that we would have a hard time predicting this fund's performance even if we were smart enough to predict the returns on the S&P 500.
Because of the high volatility and low correlation, our mathematical stress test reveals that luck played a significant role in the AIM case just as in the American Mutual case. Indeed, we would need a track record exceeding a century before we could be 95% certain that these AIM results were not the product of luck! In risk-management terms, there is a suggestion here that the AIM managers may have taken excessive risk in their efforts to beat the market.

Many anti-smokers worry about second-hand smoke and support efforts to make smoking in public places illegal. How great is the risk that you will develop lung cancer when someone lights up a cigarette at the next table in a restaurant or in the next seat on an airplane? Should you accept the risk, or should you insist that the cigarette be extinguished immediately?
In January 1993, the Environmental Protection Agency issued a 510-page report carrying the ominous title Respiratory Health Effects of Passive Smoking: Lung Cancer and Other Disorders.22 A year later, Carol Browner, the EPA Administrator, appeared before a congressional committee and urged it to approve the Smoke-Free Environment Act, which would establish a complex set of regulations designed to prohibit smoking in public buildings. Browner stated that she based her recommendation on the report's conclusion that environmental tobacco smoke, or ETS, is "a known human lung carcinogen."23
How much is "known" about ETS? What is the risk of developing lung cancer when someone else is doing the smoking?
There is only one way even to approach certainty in answering these questions: Check every single person who was ever exposed to ETS at any moment since people started smoking tobacco hundreds of years ago. Even then, a demonstrated association between ETS and lung cancer would not be proof that ETS was the cause of the cancer.
The practical impossibility of conducting tests on everybody or everything over the entire span of history in every location leaves all scientific research results uncertain. What looks like a strong association may be nothing more than the luck of the draw, in which case a different set of samples from a different time period or from a different locale, or even a different set of subjects from the same period and the same locale, might have produced contrary findings.
There is only one thing we know for certain: an association (not a cause-and-effect) between ETS and lung cancer has a probability that is some percentage short of 100%. The difference between 100% and the indicated probability reflects the likelihood that the ETS has nothing whatsoever to do with causing lung cancer and that similar evidence would not necessarily show up in another sample. The risk of coming down with lung cancer from ETS boils down to a set of odds, just as in a game of chance.
Most studies like the EPA analysis compare the result when one group of people is exposed to something, good or bad, with the result from a "control" group that is not exposed to the same influences. Most new drugs are tested by giving one group the drug in question and comparing their response with the response of a group that has been given a placebo.
In the passive smoking case, the analysis focused on the incidence of lung cancer among non-smoking women living with men who smoked. The data were then compared with the incidence of disease among the control group of non-smoking women living with nonsmoking companions. The ratio of the responses of the exposed group to the responses of the control group is called the test statistic. The absolute size of the test statistic and the degree of uncertainty surrounding it form the basis for deciding whether to take action of some kind. In other words, the test statistic helps the observer to distinguish a meaningless BZUXRQVICPRGAB from a meaningful CONSTANTINOPLE. Because of all the uncertainties involved, the ultimate decision is often more a matter of gut than of measurement, just as it is in deciding whether a coin is fair or loaded.
Epidemiologists, the statisticians of health, observe the same convention as that used to measure the performance of investment managers. They usually define a result as statistically significant if there is no more than a 5% probability that an outcome was the result of chance.
The results of the EPA study of passive smoking were not nearly as strong as the results of the much larger number of earlier studies of active smoking. Even though the risk of contracting lung cancer seemed to correlate well with the amount of exposure, that is, how heavily the male companion smoked, the disease rates among women exposed to ETS averaged only 1.19 times the rates among women who lived with non-smokers. Furthermore, this modest test statistic was based on just thirty studies, of which six showed no effect from ETS. Since many of those studies covered small samples, only nine of them were statistically significant.24 None of the eleven studies conducted in the United States met that criterion, but seven of those studies covered fewer than forty-five cases.25
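The 1.19 test statistic is a relative risk: the disease rate in the exposed group divided by the rate in the control group. A minimal sketch with illustrative counts chosen only to reproduce that ratio (these are not the EPA's actual case counts):

```python
def relative_risk(cases_exposed: int, n_exposed: int,
                  cases_control: int, n_control: int) -> float:
    """Ratio of the disease rate in the exposed group to the rate
    in the unexposed control group."""
    return (cases_exposed / n_exposed) / (cases_control / n_control)

# Hypothetical counts: 119 cases per 100,000 exposed women versus
# 100 cases per 100,000 women living with non-smokers.
rr = relative_risk(119, 100_000, 100, 100_000)
print(round(rr, 2))  # 1.19
```

A ratio of 1.00 would mean no detectable association at all; whether 1.19 clears the bar depends on the uncertainty surrounding it, which is exactly where the small samples in the text become decisive.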
In the end, admitting that "EPA has never claimed that minimal exposure to secondhand smoke poses a huge individual cancer risk,"26 the agency estimated that "approximately 3,000 American nonsmokers die each year from lung cancer caused by secondhand smoke."27 That conclusion gave momentum to the Smoke-Free Environment Act, with its numerous regulations on public facilities.

We have reached the point in the story where uncertainty, and its handmaiden luck, have moved to center stage. The setting has changed, in large part because in the 75 years or so since the end of the First World War the world has faced nearly all the risks of the old days and many new risks as well.
The demand for risk management has risen along with the growing number of risks. No one was more sensitive to this trend than Frank Knight and John Maynard Keynes, whose pioneering work we review in the next chapter. Although both are now dead-their most important writings predate Arrow's-almost all the figures we shall meet from now on are, like Arrow, still alive. They are testimony to how young the ideas of risk management are.
The concepts we shall encounter in the chapter ahead never occurred to the mathematicians and philosophers of the past, who were too busy establishing the laws of probability to tackle the mysteries of uncertainty.
