

  2.

  If causes of variation cannot be easily eliminated but can be identified and quantified, learn to make corrections. Since it would have been tedious to wait for the atmosphere to reach exactly the standard pressure, it was more convenient to create a formula for making corrections for different pressure values (a rough modern illustration of such a correction is given at the end of this list). Similarly, the Royal Society committee adopted an empirical formula for correcting the boiling point depending on the depth of water under which the thermometer bulb was plunged. The correction formulas allowed the variations to be handled in a controlled way, though they were not eliminated.

  3.

  Ignore small, inexplicable variations, and hope that they will go away. Perhaps the most significant case in this vein was the variation of temperature depending on the "degree of boiling," which was widely

  55. Fixed points can be artificially created in the same way seedless watermelons can be created; these things cannot be made if nature will not allow them, but nonetheless they are our creations.

  reported by reputable observers such as Newton, Adams, and De Luc. But somehow this effect was no longer observed from the nineteenth century onward. Henry Carrington Bolton (1900, 60) noticed this curious fact and tried to explain away the earlier observations as follows: "We now know that such fluctuations depend upon the position of the thermometer (which must not be immersed in the liquid), on the pressure of the atmosphere, on the chemical purity of the water, and on the shape of the vessel holding it, so it is not surprising that doubts existed as to the constancy of the phenomenon [boiling temperature]." That explanation is not convincing in my view, as it amounts to sweeping the dirt under the rug of other causes of variation.56 Other mysterious variations also disappeared in time, including the inexplicable day-to-day variations and the temperature differences depending on the depth of water below the thermometer bulb, both reported by the Royal Society committee. In all those cases the reported variations fell away, for no obvious reason. If the variations do disappear, there is little motivation to worry about them.
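
  To give a concrete sense of the kind of correction formula described under strategy 2, here is a rough modern illustration rather than the Royal Society committee's own formula, using an approximate present-day coefficient: near standard pressure, the boiling point of water shifts by roughly 0.037°C for every millimetre of mercury by which the barometer departs from 760 mmHg, so that

\[
T_b \;\approx\; 100\ ^{\circ}\mathrm{C} \;+\; 0.037\ ^{\circ}\mathrm{C}/\mathrm{mmHg} \times (p - 760\ \mathrm{mmHg}),
\]

  where p is the barometric pressure in mmHg. At a reading of 750 mmHg, for instance, water boils at about 99.6°C rather than 100°C, and a calibrator can allow for the difference instead of waiting for the barometer to stand at exactly 760 mmHg.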

  In his insightful article on the principles of thermometry, Carl B. Boyer remarked (1942, 180): "Nature has most generously put at the scientist's disposal a great many natural points which serve to mark temperatures with great accuracy." (That is reminiscent of the cryptic maxim from Wittgenstein [1969, 66, §505]: "It is always by favour of Nature that one knows something.") We have now seen that nature has perhaps not been as generous as Boyer imagined, since the fixed points were quite hard won. However, nature had to be generous in some ways if scientists were going to have any success in their attempts to tame the hopeless variations. In "Escape from Superheating" and "A Dusty Epilogue," I have already indicated that it was good fortune, or rather serendipity, that allowed the higher-than-expected degrees of stability that the boiling point and the steam point enjoyed. Serendipity is different from random instances of plain dumb luck because it is defined as "the faculty of making fortunate discoveries by accident." The term was coined by Horace Walpole (British writer and son of the statesman Robert Walpole) from the Persian tale The Three Princes of Serendip, in which the heroes possessed this gift.57 In the following discussion I will use it to indicate a lucky coincidence that creates a tendency to produce desirable results.

  The most important serendipitous factor for the fixity of the boiling point is the fact that water on earth normally contains a good deal of dissolved air. For the steam point, it is the fact that air on earth normally contains a good deal of suspended dust. It is interesting to consider how thermometry and the thermal sciences might have developed in an airless and dustless place. To get a sense of the contingency, recall Aitken's speculations about the meteorology of a dustless world,

  56. Still, Bolton deserves credit for trying—I know of no other attempts to explain the disappearance of this variation.

  57. Collins English Dictionary (London: HarperCollins, 1998).

  quoted at more length in "A Dusty Epilogue": "[I]f there was no dust in the air there would be no fogs, no clouds, no mists, and probably no rain. …" Possible worlds aside, I will concentrate here on the more sober task of elucidating the implications of the serendipity that we earthbound humans have actually been blessed with.

  One crucial point about these serendipitous factors is that their workings are independent of the fortunes of high-level theory. There were a variety of theoretical reasons people gave for using the steam point. Cavendish thought that bubbles of steam rising through water would be cooled down exactly to the "boiling point" in the course of their ascent, due to the loss of heat to the surrounding water. He also postulated that the steam could not cool down below the boiling point once it was released. De Luc did not agree with any of that, but accepted the use of the steam point for phenomenological reasons. Biot (1816, 1:44-45) agreed with the use of the steam point, but for a different reason. He thought that the correct boiling point should be taken from the very top layer of boiling water, which would have required the feat of holding a thermometer horizontally right at that top layer; thankfully steam could be used instead, because the steam temperature should be equal to the water temperature at the top. Marcet (1842, 392) lamented the wide acceptance of the latter assumption, which was in fact incorrect according to the results of his own experiments. Never mind why—the steam point was almost universally adopted as the better fixed point and aided the establishment of reliable thermometry.

  There were also different views about the real mechanism by which the air dissolved in water tended to promote boiling. There was even Tomlinson's view that air was only effective in liberating vapor because of the suspended dust particles in it (in which case dust would be doing a double duty of serendipity in this business). Whatever the theory, it remained a crucial and unchallenged fact that dissolved air did have the desirable effect. Likewise, there were difficult theoretical investigations going on around the turn of the twentieth century about the mechanism of vapor formation; regardless of the outcome of such investigations, for thermometry it was sufficient that dust did stabilize the steam point by preventing supersaturation.

  Thanks to their independence from high-level theory, the serendipitously robust fixed points survived major changes of theory. The steam point began its established phase with the Royal Society committee's recommendation in 1777, when the caloric theory of heat was just being crafted. The acceptance of the steam point continued through the rise, elaboration, and fall of the caloric theory. It lasted through the phenomenological and then molecular-kinetic phases of thermodynamics. In the late nineteenth century Aitken's work would have created a new awareness that it was important not to eliminate all the dust from the air in which the steam was kept, but the exact theoretical basis for that advice did not matter for thermometry. The steam point remained fixed while its theoretical interpretations and justifications changed around it. The robustness of the fixed points provided stability to quantitative observations, even as the theoretical changes effected fundamental changes in the very meaning of those observations. The same numbers could remain, whatever they "really meant."

  If this kind of robustness is shared by the bases of other basic measurements in the exact sciences, as I suspect it is, there would be significant implications for the

  persistence and accumulation of knowledge. Herbert Feigl emphasized that the stability of empirical science lies in the remarkable degree of robustness possessed by certain middle-level regularities, a robustness that neither sense-data nor high-level theories can claim: "I think that a relatively stable and approximately accurate basis—in the sense of testing ground—for the theories of the factual sciences is to be located not in individual observations, impressions, sense-data or the like, but rather in the empirical, experimental laws" (Feigl 1974, 8). For example, weight measurements using balances rely on Archimedes's law of the lever, and observations made with basic optical instruments rely on Snell's law of refraction. These laws, at least in the contexts
of the measurements they enable, have not failed and have not been questioned for hundreds of years. The established fixed points of thermometry also embody just the sort of robustness that Feigl valued so much, and the insights we have gained in the study of the history of fixed points shed some light on how it is that middle-level regularities can be so robust.

  Although it is now widely agreed that observations are indeed affected by the theories we hold, thanks to the well-known persuasive arguments to that effect by Thomas Kuhn, Paul Feyerabend, Mary Hesse, Norwood Russell Hanson, and even Karl Popper, as well as various empirical psychologists, we must take seriously Feigl's point that not all observations are affected in the same way by paradigm shifts or other kinds of major theoretical changes. No matter how drastically high-level theories change, some middle-level regularities may remain relatively unaffected, even when their deep theoretical meanings and interpretations change significantly. Observations underwritten by these robust regularities will also have a fair chance of remaining unchanged across revolutionary divides, and that is what we have seen in the case of the boiling/steam point of water.

  The looseness of the link between high-level theories and middle-level regularities receives strong support in the more recent works by Nancy Cartwright (on fundamental vs. phenomenological laws), Peter Galison (on the "intercalation" of theory, experiment, and instrumentation), and Ian Hacking (on experimental realities based on low-level causal regularities).58 Regarding the link between the middle-level regularities and individual sense-observations, James Woodward and James Bogen's work on the distinction between data and phenomena reinforces Feigl's argument; stability is found in phenomena, not in the individual data points out of which we construct the phenomena. In the strategies for the plausible denial of variations in the boiling-point case, we have seen very concrete illustrations of how a middle-level regularity can be shielded from all the fickle variations found in individual observations. This discussion dovetails very nicely with one of Bogen and Woodward's illustrative examples, the melting point of lead, which is a stable phenomenon despite variations in the thermometer readings in individual trials of the experiments for its determination.59

  58. See Cartwright 1983, Galison 1997, Hacking 1983. I have shown similar looseness in energy measurements in quantum physics; see Chang 1995a.

  59. See Bogen and Woodward 1988; the melting point example is discussed on 308-310. See also Woodward 1989.

  The Case of the Freezing Point

  Before closing the discussion of fixed points, it will be useful to examine briefly the establishment of the other common fixed point, namely the freezing point of water. (This point was often conceived as the melting point of ice, but measuring or regulating the temperature of the interior of a solid block of ice was not an easy task in practice, so generally the thermometer was inserted into the liquid portion of an ice-water mixture in the process of freezing or melting.) Not only is the freezing-point story interesting in its own right but it also provides a useful comparison and contrast to the boiling-point case and contributes to the testing of the more general epistemological insights discussed in earlier sections. As I will discuss more carefully in "The Abstract and the Concrete" in chapter 5, the general insights were occasioned by the consideration of the boiling-point episode, but they of course do not gain much evidential support from that one case. The general ideas have yet to demonstrate their validity, both by further general considerations and by showing their ability to aid the understanding of other concrete cases. For the latter type of test, it makes sense to start with the freezing point, since there would be little hope for the generalities inspired by the boiling point if they did not even apply fruitfully to the other side of the centigrade scale.

  There are some overt parallels between the histories of the boiling point and the freezing point. In both cases, the initial appearance of fixity was controverted by more careful observations, upon which various strategies were applied to defend the desired fixity. In both cases, understanding the effect of dissolved impurities contributed effectively to dispelling the doubts about fixity (this was perhaps an even more important factor for the freezing point than for the boiling point). And the fixity of the freezing point was threatened by the phenomenon of supercooling, just as the fixity of the boiling point was threatened by superheating.60 This phenomenon, in which a liquid at a temperature below its "normal" freezing temperature maintains its liquid form, was discovered in water by the early eighteenth century. Supercooling threatened to make a mockery of the freezing of water as a fixed point, since it seemed that one could only say, "pure water always freezes at 0°C, except when it doesn't."

  It is not clear who first noticed the phenomenon of supercooling, but it was most famously reported by Fahrenheit (1724, 23) in one of the articles that he submitted to support his election as a Fellow of the Royal Society of London. De Luc used his airless water (described in the "Superheating and the Mirage of True Ebullition" section) and cooled it down to 14°F (−10°C) without freezing. Supercooling was suspected to happen in mercury in the 1780s, and that gave occasion for important further investigations by Charles Blagden (1748-1820), Cavendish's

  60. "Supercooling" is a modern term, the first instance of its use being dated at 1898 in the Oxford English Dictionary, 2d ed. In the late nineteenth century it was often referred to as "surfusion" (cf. the French term surchauffer for superheating), and in earlier times it was usually described as the "cooling of a liquid below its normal freezing point," without a convenient term to use.

  longtime collaborator and secretary of the Royal Society.61 Research into supercooling continued throughout the nineteenth century. For instance, Dufour (1863) brought small drops of water down to −20°C without freezing, using a very similar technique to the one that had allowed him to superheat water to 178°C as discussed in "Superheating and the Mirage of True Ebullition."

  The theoretical understanding of supercooling, up to the end of the nineteenth century, was even less firm than that of superheating, perhaps because even ordinary freezing was so poorly understood. The most basic clue was provided by Black's concept of latent heat. After water reaches its freezing temperature, a great deal more heat has to be taken away from it in order to turn it into ice; likewise, a lot of heat input is required to melt ice that is already at the melting point. According to Black's data, ice at the freezing point contained only as much heat as would liquid water at 140° below freezing on Fahrenheit's scale (if it could be kept liquid while being cooled down to that temperature).62 In other words, a body of water at 0°C contains a lot more heat than the same amount of ice at 0°C. All of that excess heat has to be taken away if all of the water is to freeze; if just a part of the excess heat is taken away, normally just one part of the water freezes, leaving the rest as liquid at 0°C. But if there is no particular reason for one part of the water to freeze and the rest of it not to freeze, then the water can get stuck in a collective state of indecision (or symmetry, to use a more modern notion). An unstable equilibrium results, in which all of the water remains liquid but at a temperature lower than 0°C, with the heat deficit spread out evenly throughout the liquid. The concept of latent heat thus explained how the state of supercooling could be maintained. However, it did not provide an explanation or prediction as to when supercooling would or would not take place.
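
  Black's figure can be restated in modern terms as a rough check (using present-day values, not Black's own data or calculation): the latent heat of fusion of ice is about 334 J/g and the specific heat of liquid water about 4.19 J/(g·°C), so melting a gram of ice absorbs as much heat as would warm a gram of liquid water through

\[
\Delta T \;=\; \frac{L_f}{c} \;\approx\; \frac{334\ \mathrm{J/g}}{4.19\ \mathrm{J/(g\,^{\circ}C)}} \;\approx\; 80\ ^{\circ}\mathrm{C} \;\approx\; 144\ \text{Fahrenheit degrees},
\]

  in reasonable agreement with Black's reported 140 Fahrenheit degrees.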

  How was the fixity of the freezing point defended, despite the acknowledged (and poorly understood) existence of supercooling? Recall, from "Escape from Superheating," one of the factors that prevented superheating from being a drastic threat to the fixity of the boiling point: although water could be heated to quite extreme temperatures without boiling, it came down to much more reasonable temperatures once it started boiling. A similar phenomenon saved the freezing point. From Fahrenheit onward, researchers on supercooling noted a phenomenon called "shooting." On some stimulus, such as shaking, the supercooled water would suddenly freeze, with ice
crystals shooting out from a catalytic point. Wonderfully for thermometry, the result of shooting was the production of just the right amount of ice (and released latent heat) to bring up the temperature of the whole to the normal freezing point.63 Gernez (1876) proposed that shooting from supercooling

  61. The supercooling of mercury will be discussed again in "Consolidating the Freezing Point of Mercury" in chapter 3. See Blagden 1788, which also reports on De Luc's work on p. 144.

  62. Measurements by others indicated similar latent heat values, Wilcke giving 130°F and Cavendish 150°F. See Cavendish 1783, 313.

  63. Blagden (1788, 134) noted, however: "If from any circumstances … the shooting of the ice proceeds more slowly, the thermometer will often remain below the freezing point even after there is much ice in the liquor; and does not rise rapidly, or to its due height, till some of the ice is formed close to the bulb."

  could actually be used as a more reliable fixed point of temperature than normal freezing.64
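
  The energy bookkeeping behind shooting can be sketched with modern values (again a rough illustration, not a calculation from the period): just enough ice forms for the latent heat it releases to cancel the heat deficit of the supercooled liquid, so the fraction of the water that freezes is approximately

\[
\frac{m_{\mathrm{ice}}}{m_{\mathrm{total}}} \;\approx\; \frac{c\,\Delta T}{L_f} \;\approx\; \frac{4.19\ \mathrm{J/(g\,^{\circ}C)} \times 10\ ^{\circ}\mathrm{C}}{334\ \mathrm{J/g}} \;\approx\; 0.13
\]

  for water supercooled by 10°C; the resulting mixture of ice and water then sits at the normal freezing point, which is why shooting restores the thermometer reading so dependably.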

  On the causes and preventatives of shooting there were various opinions, considerable disagreement, and a good deal of theoretical uncertainty. Blagden (1788, 145-146) concluded with the recognition that "the subject still remains involved in great obscurity." However, this was of no consequence for thermometry, since in practice shooting could be induced reliably whenever desired (just as superheated "bumping" could be prevented at will). Although the effectiveness of mechanical agitation was seriously debated, from early on all were agreed that dropping a small piece of ice into the supercooled water always worked. This last circumstance fits nicely into a more general theoretical view developed much later, particularly by Aitken, which I discussed in "A Dusty Epilogue." In Aitken's view, fixity of temperature was the characteristic of an equilibrium between two different states of water. Therefore the fixed temperature could only be produced reliably when both liquid and solid water were present together in contact with each other. In addition to supercooling, Aitken reported that ice without a "free surface" could be heated up to 180°C without melting.65 At a pragmatic level, the importance of equilibrium had been recognized much earlier. De Luc in 1772 argued that the temperature at which ice melted was not the same as the temperature at which water froze and proposed the temperature of "ice that melts, or water in ice" as the correct conception of the freezing/melting point.66 And Fahrenheit had already defined his second fixed point as the temperature of a water-ice mixture (see Bolton 1900, 70). These formulations specified that there should be both ice and water present in thermal equilibrium, though not on the basis of any general theoretical framework such as Aitken's.