The Escape from Hunger and Premature Death, 1700–2100


  • 19 b/w illus. 15 tables
  • Page extent: 216 pages
  • Size: 228 x 152 mm
  • Weight: 0.49 kg

Library of Congress

  • Dewey number: 304.6/4
  • Dewey version: 22
  • LC Classification: HD9000.5 .F544 2004
  • LC Subject headings:
    • Food supply--History
    • Malnutrition--History
    • Medical care--History
    • Mortality--History



 (ISBN-13: 9780521808781 | ISBN-10: 0521808782)


The Persistence of Misery in Europe and America before 1900

The twentieth century saw major improvements in the human condition, not only in the rich countries of the world but also in developing nations. Nothing has been more remarkable, however, than the extension of life expectancy, which has increased by about 30 years since 1900 in England, France, and the United States and in equal or larger amounts in such countries as India, China, and Japan. Among the nations of the Third World, the rate of improvement has been nearly twice as fast as among the nations in the Organization for Economic Cooperation and Development (OECD) (see Table 1.1).

   What is responsible for this unanticipated extension of human life? That question has occupied some of the best minds of the past century in both the social sciences and the biomedical sciences, and it is also the central question of these chapters. The drive to explain the secular decline in mortality did not begin until about World War I because it was uncertain before that time whether such a decline was in progress. There was little evidence in the first four official English life tables covering the years 1831–80 of a downward trend in mortality. Although the signs of improvement in life expectancy became more marked when the fifth and sixth tables were constructed, covering the 1880s and 1890s, few epidemiologists or demographers recognized that England was in the midst of a secular decline in mortality that had begun about the second quarter of the eighteenth century and that would more than double life expectancy at birth before the end of the twentieth century. During the last decade of the nineteenth century and the early years of the twentieth century, attention was focused not on the small decline in aggregate mortality, but on the continuing large differentials between urban and rural areas, between low- and high-income districts, and among different nations.1

Table 1.1 Life Expectancy at Birth in Seven Nations, 1725–2100 (both sexes combined)

                       1725    1750    1800    1850    1900    1950    1990   2050?   2100?

   England or UK         32      37      36      40      48      69      76
   France                        26      33      42      46      67      77
   U.S.                  50      51      56      43      48      68      76    (87)    (98)
   Egypt                                                         42      60
   India                                                 27      39      59
   China                                                         41      70
   Japan                                                         61      79

Sources: For England 1725–1850: Wrigley and Schofield 1981; 1900: average of figures for 1896 and 1905 in Case et al. 1962. For France 1750: computed from Tables 13 and 14 for 1740–49 in Blayo 1975a, p. 140; for 1800, 1850, and 1900: Bourgeois-Pichat 1965, pp. 504–5 (figures for 1805–7, 1850–52, and 1900–2). For the United States 1725–1850: Fogel 1986, p. 511 (males only; shifted to e̊0 using Coale and Demeny 1966, West life tables); for 1900: Bell, Wade, and Goss 1992. For India 1900: Carr-Saunders 1964 (figure is for 1931). For all countries 1950: Keyfitz and Flieger 1990; for 1990: World Bank 1990, 1992. Figures in parentheses for 2050 and 2100 are projections for these years based on the analysis of Oeppen and Vaupel (2002).

   The improvements in life expectancy between 1900 and 1920 were so large, however, that it became obvious that the changes were not just a random perturbation or cyclical phenomenon. Similar declines recorded in the Scandinavian countries, France, and other European nations made it clear that the West, including Canada and the United States, had attained levels of survival far beyond previous experience and far beyond those that prevailed elsewhere in the world.2

   The drive to explain the secular decline in mortality pushed research in three directions. Initially, much of this effort revolved around the construction of time series of birth and death rates that extended as far back in time as possible in order to determine just when the decline in mortality began. Then, as data on mortality rates became increasingly available, they were analyzed in order to determine factors that might explain the decline as well as to establish patterns or laws that would make it possible to predict the future course of mortality.

   Somewhat later, efforts were undertaken to determine the relationship between the food supply and mortality rates. Between the two world wars, the emerging science of nutrition focused on a series of diseases related to specific nutritional deficiencies. In 1922 shortages in vitamin D were shown to cause rickets. In 1933 thiamine deficiency was identified as the cause of beriberi, and in 1937 inadequate niacin was shown to cause pellagra.3 Although the energy required for basal metabolism (the energy needed to maintain vital functions when the body is completely at rest) had been estimated at the turn of the century, it was not until after World War II that estimates of caloric requirements for specific activities were worked out. During the three decades following World War II, research in nutritional sciences conjoined with new findings in physiology to demonstrate a previously unknown synergy between nutrition and infection and to stimulate a series of studies, still ongoing, of numerous and complex routes through which nutrition affects virtually every vital organ system.4

   The effort to develop time series of mortality rates also took an enormous leap forward after World War II. Spurred by the development of high-speed computers, historical demographers in France and England developed new time series on mortality from baptismal and burial records that made it possible to trace changing mortality from 1541 in the case of England and from 1740 in the case of France.5

   Two other critical sources of data became available during the 1970s and 1980s. One was food-supply estimates that were developed in France as a by-product of the effort to reconstruct the pattern of French economic growth from the beginning of the Industrial Revolution. Once constructed, the agricultural accounts could be converted into estimates of the output of calories and other nutrients available for human consumption through a technique called “National Food Balance Sheets.” Such estimates are currently available for France more or less by decade from 1785 down to the present. In Great Britain the task of reconstructing the growth of the food supply was more arduous, but estimates of the supply of food are now available by half century from 1700 to 1850 and by decade for much of the twentieth century.6

   The other recent set of time series pertains to physique or body builds – height, weight, and other anthropometric (bodily) measures. The systematic recording of information on height was initially an aspect of the development of modern armies, which began to measure the height of recruits as early as the beginning of the eighteenth century in Sweden and Norway and the middle of the eighteenth century in Great Britain and its colonies such as those in North America. The measurement of weight did not become widespread in armies until the late 1860s, after the development of platform scales. However, there are scattered samples of weights that go back to the beginning of the nineteenth century. During the 1960s and 1970s, recognition that data on body builds were important indicators of health and mortality led to the systematic recovery of this information by economic and social historians seeking to explain the secular decline in mortality.7

   These rich new data sources supplemented older economic time series, especially those on real wages (which began to be constructed late in the nineteenth century) and real national income (which were constructed for OECD nations mainly between 1930 and 1960). These new sources of information about human welfare, together with advances in nutritional science, physiology, demography, and economics, form the background for these chapters. Before plunging into my own analysis and interpretation of this evidence, however, I want to summarize the evolution of thought about the causes of the secular decline in mortality.

   Between the late 1930s and the end of the 1960s a consensus emerged on the explanation for the secular trend. A United Nations study published in 1953 attributed the trend in mortality to four categories of advances: (1) public health reforms, (2) advances in medical knowledge and practices, (3) improved personal hygiene, and (4) rising income and standards of living. A United Nations study published in 1973 added “natural factors,” such as the decline in the virulence of pathogens, as an additional explanatory category.8

   A new phase in the effort to explain the secular decline in mortality was ushered in by Thomas McKeown, who, in a series of papers and books published between 1955 and the mid-1980s, challenged the importance of most of the factors that previously had been advanced for the first wave of the mortality decline. He was particularly skeptical of those aspects of the consensus explanation that focused primarily on changes in medical technology and public health reforms. In their place he substituted improved nutrition, but he neglected the synergism between infection and nutrition and so failed to distinguish between diet and nutrients available for cellular growth. McKeown did not make his case for nutrition directly but largely through a residual argument after having rejected other principal explanations. The debate over the McKeown thesis continued through the beginning of the 1980s.9 However, during the 1970s and 1980s, it was overtaken by the growing debate over whether the elimination of mortality crises was the principal reason for the first wave of the mortality decline, which extended from roughly 1725 to 1825.

   The systematic study of mortality crises and their possible link to famines was initiated by Jean Meuvret in 1946. Such work was carried forward in France and numerous other countries on the basis of local studies that made extensive use of parish records. By the early 1970s, scores of such studies had been published covering the period from the seventeenth through the early nineteenth centuries in England, France, Germany, Switzerland, Spain, Italy, and the Scandinavian countries. The accumulation of local studies provided the foundation for the view that mortality crises accounted for a large part of total mortality during the early modern era, and that the decline in mortality rates between the mid-eighteenth and mid-nineteenth centuries was explained largely by the elimination of these crises, a view that won widespread if not universal support.10

   Only after the publication of death rates based on large representative samples of parishes for England and France did it become possible to assess the national impact of crisis mortality on total national mortality. Figure 1.1 displays the time series that emerged from these studies. Analyses of these series confirmed one of the important conclusions derived from the local studies: Mortality was far more variable before 1750 than afterward. They also revealed that the elimination of crisis mortality, whether related to famines or not, accounted for only a small fraction of the secular decline in mortality rates. About 90 percent of the drop was due to the reduction of “normal” mortality.11

   In discussing the factors that had kept past mortality rates high, the authors of the 1973 United Nations study of population noted that “although chronic food shortage has probably been more deadly to man, the effects of famines, being more spectacular, have received greater attention in the literature.”12 Similar points were made by several other scholars, but it was not until the publication of the Institut national d’études démographiques data for France and the E. A. Wrigley and R. S. Schofield data for England that the limited influence of famines on mortality became apparent. In chapter 9 of the Wrigley and Schofield volume, Ronald Lee demonstrated that although there was a statistically significant lagged relationship between large proportionate deviations in grain prices and similar deviations in mortality, the net effect on mortality after five years was negligible.13 Similar results were reported in studies of France and the Scandinavian countries.14

   The current concern with the role of chronic malnutrition in the secular decline of mortality does not represent a return to the belief that the entire secular trend in mortality can be attributed to a single overwhelming factor. Specialists currently working on the problem agree that a range of factors is involved, although they have different views on the relative importance of each factor. The unresolved issue, therefore, is how much each of the various factors contributed to the decline. Resolution of the issue is essentially an accounting exercise of a particularly complicated nature that involves measuring not only the direct effect of particular factors but also their indirect effects and their interactions with other factors. I now consider some of the new data sources and new analytical techniques that have recently been developed to help resolve this accounting problem.15

Figure 1.1 Secular Trends in Mortality Rates in England and France.
(a) England 1541–1975. (b) France 1740–1974.
Note: CDR = crude death rate, which is computed as the total deaths in a given year divided by the midyear population and multiplied by 1,000. Each diagram shows the scatter of annual death rates around a 25-year moving average. On sources and procedures, see Fogel 1992, notes to Table 9.1.
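
As a purely illustrative check on the definition in the note (the death count and population below are hypothetical, not values from the series plotted in the figure), the arithmetic runs:

$$\mathrm{CDR} = \frac{\text{deaths during the year}}{\text{midyear population}} \times 1{,}000 = \frac{150{,}000}{5{,}000{,}000} \times 1{,}000 = 30 \text{ per thousand}.$$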

The Dimensions of Misery during the Eighteenth and Nineteenth Centuries

It is now clear that although the period from the middle of the eighteenth century to the end of the nineteenth has been hailed justly as an industrial revolution, as a great transformation in social organization, and as a revolution in science, these great advances brought only modest and uneven improvements in the health, nutritional status, and longevity of the lower classes before 1890. Whatever contribution the technological and scientific advances of the eighteenth and nineteenth centuries may have made ultimately to this breakthrough, escape from hunger and high mortality did not become a reality for most ordinary people until the twentieth century.

   This point can be demonstrated by looking first at the amount of food available to the typical worker in England and France during the eighteenth and early nineteenth centuries. Because at that time food constituted between 50 and 75 percent of the expenditures of laboring families, improvement in the conditions of their lives should have been evident in their diets. However, Table 1.2 shows that the energy value of the typical diet in France at the start of the eighteenth century was as low as that of Rwanda in 1965, the most malnourished nation for that year in the tables of the World Bank. England’s supply of food per capita exceeded that of France by several hundred calories but was still exceedingly low by current standards. Indeed, as late as 1850, the English availability of calories hardly matched the current Indian level.

Table 1.2 Secular Trends in the Daily Caloric Supply in France and Great Britain, 1700–1989 (calories per capita)

   Year        France   Great Britain

   1700                     2,095
   1705         1,657
   1750                     2,168
   1785         1,848
   1800                     2,237
   1803–12      1,846
   1845–54      2,480
   1850                     2,362
   1909–13                  2,857
   1935–39      2,975
   1954–55      2,783       3,231
   1961                     3,170
   1965         3,355       3,304
   1989         3,465       3,149

Source: Fogel, Floud, and Harris, n.d.

   The supply of food available to ordinary French and English families between 1700 and 1850 was not only meager in amount but also relatively poor in quality. In France between 1700 and 1850, for example, the share of calories from animal foods was less than half of the modern share, which is about one-third in rich nations. In 1700 about 20 percent of English caloric consumption was from animals. That figure rose to between 25 and 30 percent in 1750 and 1800, suggesting that the quality of the English diet increased more rapidly than that of the French during the eighteenth century. However, although the English were able to increase their diet in bulk, its quality subsequently diminished, with the share of calories from animals falling back to 20 percent in 1850.16

   One implication of these low-level diets needs to be stressed: Even prime-age males had only a meager amount of energy available for work. By work I mean not only the work that gets counted in national income and product accounts (which I will call “NIPA work”), but also all activity that requires energy over and above baseline maintenance. Baseline maintenance has two components. The larger component is the basal metabolic rate (or BMR), which accounts for about four-fifths of baseline maintenance. It is the amount of energy needed to keep the heart and other vital organs functioning when the body is completely at rest. It is measured when an individual is at complete rest, about 12 to 14 hours after the last meal.17 The other 20 percent of baseline maintenance is the energy needed to eat and digest food and for vital hygiene. It does not include the energy needed to prepare a meal or to clean the kitchen afterward.
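
   Expressed as a worked equation (the 1,500 kcal figure for BMR is assumed purely for illustration), the decomposition above implies that baseline maintenance is roughly BMR divided by 0.8:

$$\text{baseline maintenance} \approx \frac{\mathrm{BMR}}{0.8} = 1.25 \times \mathrm{BMR} \approx 1.25 \times 1{,}500 \text{ kcal} \approx 1{,}875 \text{ kcal per day}.$$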

   It is important to keep in mind that not all goods and services produced in a society are included in the NIPA. When the NIPA were first designed in the early 1930s, they were intended to measure mainly goods and services traded in the market. It was, for example, recognized that many important contributions to the economy, such as the unpaid labor of housewives, would not be measured by the NIPA. However, the neglect of nonmarket activities was to a large extent made necessary by the difficulty in measuring them given the quantitative techniques of the time. Moreover, with a quarter of the labor force unemployed in 1932, Congress was most concerned about what was happening to market employment. It was also assumed that the secular trend in the ratio of market to nonmarket work was more or less stable. This last assumption turned out to be incorrect. Over time, NIPA work has become a smaller and smaller share of total activities. Furthermore, we now have the necessary techniques to provide fairly good estimates of nonmarket activities. Hence in these chapters I will attempt to estimate the energy requirements of both market and nonmarket work.

Table 1.3 A Comparison of Energy Available for Work Daily per Consuming Unit in France, England and Wales, and the United States, 1700–1994 (in kcal)

                (1)        (2)              (3)
   Year      France   England and Wales   United States

   1700                    720           2,313a
   1705         439
   1750                    812
   1785         600
   1800                    858
   1840                                   1,810
   1850                  1,014
   1870       1,671
   1880                                   2,709
   1944                                   2,282
   1975       2,136
   1980                  1,793
   1994                                   2,620

a Prerevolutionary Virginia.
Source: Fogel, Floud, and Harris, n.d.

   Dietary energy available for work is a residual. It is the amount of energy metabolized (chemically transformed for use by the body) during a day, less baseline maintenance. Table 1.3 shows that in rich countries today, around 1,800 to 2,600 calories of energy are available for work to an adult male aged 20–39. Note that calories for females, children, and the aged are converted into equivalent males aged 20–39, called “consuming units,” to standardize the age and sex distributions of each population. This means that if females aged 15–19 consume on average 0.78 of the calories consumed on average by males aged 20–39, they are considered 0.78 of a male aged 20–39, insofar as caloric consumption is concerned, or 78 percent of a consuming unit.
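
   The accounting described in this paragraph can be summarized in a short computational sketch. The figures below are hypothetical (only the 0.78 factor for females aged 15–19 comes from the text), and the group names and numbers are chosen solely for illustration:

```python
# Hypothetical illustration of the consuming-unit accounting described above.
# Only the 0.78 factor for females aged 15-19 is taken from the text; the
# aggregate calories, population counts, other factors, and BMR are assumed.

# Aggregate daily calories available for human consumption
# (in practice derived from national food balance sheets).
total_calories = 9_000_000_000            # kcal per day (assumed)

# Population counts and consuming-unit conversion factors
# (each group's average intake as a fraction of that of males aged 20-39).
groups = {
    #                 (population, factor)
    "males 20-39":    (1_000_000, 1.00),
    "females 15-19":  (  900_000, 0.78),  # factor quoted in the text
    "children 5-9":   (1_100_000, 0.55),  # hypothetical
}

# Standardize the population: each person counts as a fraction of a
# male aged 20-39, i.e., as a "consuming unit".
consuming_units = sum(n * factor for n, factor in groups.values())

calories_per_cu = total_calories / consuming_units

# Energy available for work is the residual after baseline maintenance.
# BMR is roughly four-fifths of baseline maintenance, so maintenance is
# approximated here as BMR / 0.8; the BMR value itself is assumed.
bmr = 1_500                               # kcal per consuming unit per day
baseline_maintenance = bmr / 0.8

energy_for_work = calories_per_cu - baseline_maintenance

print(f"Consuming units:           {consuming_units:,.0f}")
print(f"kcal per consuming unit:   {calories_per_cu:,.0f}")
print(f"kcal available for work:   {energy_for_work:,.0f}")
```

With these assumed figures the sketch yields roughly 3,900 kcal per consuming unit and about 2,000 kcal available for work, in the same range as the modern values shown in Table 1.3.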
