Inventing Temperature
  The reason we accept sensation as a prior standard is precisely because it is prior to other standards, not because it has stronger justification than other standards. There is room for the later standard to depart from the prior standard, since the authority of the prior standard is not absolute. But then why should the prior standard be respected at all? In many cases it would be because the prior standard had some recognizable merits shown in its uses, though not foolproof justification. But ultimately it is because we do not have any plausible alternative. As Wittgenstein says, no cognitive activity, not even the act of doubting, can start without first believing something. "Belief" may be an inappropriate term to use here, but it would be safe enough to say that we have to start by accepting and using the most familiar accepted truths. Do I know that the earth existed long before my birth? Do I know that all human beings have parents? Do I know that I really have two hands? Although those basic propositions lack absolute proof, it is futile to ask for a justification for them. Wittgenstein (1969, 33, §250) notes in his reflections on G. E. Moore's anti-skeptical arguments: "My having two hands is, in normal circumstances, as certain as anything that I could produce in evidence for it." Trusting sensation is the same kind of acceptance.

  50. See, for instance, Mach [1900] 1986, §2, for a discussion of this widely cited case.


  Exactly what kind of relationship is forged between a prior standard and a later standard, if we follow the principle of respect? It is not a simple logical relation of any kind. The later standard is neither deduced from the prior one, nor derived from it by simple induction or generalization. It is not even a relation of strict consistency, since the later standard can contradict the earlier one. The constraint on the later standard is that it should show sufficient agreement with the prior standard. But what does "sufficient" mean? It would be wrong to try to specify a precise and preset degree of agreement that would count as sufficient. Instead, "sufficient" should be understood as an indication of intent, to respect the prior standard as far as it is plausible to do so. All of this is terribly vague. What the vagueness indicates is that the notion of justification is not rich enough to capture in a satisfactory way what is going on in the process of improving standards. In the following section I will attempt to show that the whole matter can be viewed in a more instructive light if we stop looking for a static logical relation of justification, and instead try to identify a dynamic process of knowledge-building.

  The Iterative Improvement of Standards: Constructive Ascent

  In the last section I was engaged in a quest for justification, starting with the accepted fixed points and digging down through the layers of grounds on which we accept their fixity. Now I want to explore the relations between the successive temperature standards from the opposite direction, as it were: starting with the primitive world of sensation and tracing the gradual building-up of successive standards. This study will provide a preliminary glimpse of the process of multi-stage iteration, through which scientific knowledge can continue to build on itself. If a thermoscope can correct our sensation of hot and cold, then we have a paradoxical situation in which the derivative standard corrects the prior standard in which it is grounded. At first glance this process seems like self-contradiction, but on more careful reflection it will emerge as self-correction or, more broadly, self-improvement.

  I have argued that the key to the relation between prior and later standards was the principle of respect. But respect only illuminates one aspect of the relation. If we are seeking to create a new standard, not content to rest with the old one, that is because we want to do something that cannot be achieved by means of the old standard. Respect is only the primary constraint, not the driving force. The positive motivation for change is an imperative of progress. Progress can mean any number of things (as I will discuss in more general terms in chapter 5), but when it comes to the improvement of standards there are a few obvious aspects we desire: the consistency of judgments reached by means of the standard under consideration, the precision and confidence with which the judgments can be made, and the scope of the phenomena to which the standard can be applied.

  Progress comes to mean a spiral of self-improvement if it is achieved while observing the principle of respect. Investigations based on the prior standard can result in the creation of a new standard that improves upon the prior standard. Self-improvement is possible only because the principle of respect does not demand that the old standard should determine everything. Liberality in respect creates the breathing space for progress. The process of self-improvement arising from the


  dialectic between respect and progress might be called bootstrapping, but I will not use that term for fear of confusion with other well-known uses of it.51 Instead I will speak of "iteration," especially with the possibility in mind that the process can continue through many stages.

  Iteration is a notion that originates from mathematics, where it is defined as "a problem-solving or computational method in which a succession of approximations, each building on the one preceding, is used to achieve a desired degree of accuracy."52 Iteration, now a staple technique for computational methods of problem-solving, has long been an inspiration for philosophers.53 For instance, Charles Sanders Peirce pulled out an iterative algorithm for calculating the cube root of 2 (see fig. 1.6) when he wanted to illustrate his thesis that good reasoning corrects itself. About such processes he observed: Certain methods of mathematical computation correct themselves, so that if an error be committed, it is only necessary to keep right on, and it will be corrected in the end. … This calls to mind one of the most wonderful features of reasoning and one of the most important philosophemes [sic] in the doctrine of science, of which, however, you will search in vain for any mention in any book I can think of; namely, that reasoning tends to correct itself, and the more so, the more wisely its plan is laid. Nay, it not only corrects its conclusions, it even corrects its premisses. … [W]ere every probable inference less certain than its premisses, science, which piles inference upon inference, often quite deeply, would soon be in a bad way. Every astronomer, however, is familiar with the fact that the catalogue place of a fundamental star, which is the result of elaborate reasoning, is far more accurate than any of the observations from which it was deduced. (Peirce [1898] 1934, 399-400; emphasis added)
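  Peirce's self-corrective feature can be reproduced with a short computational sketch. His own arithmetical procedure appears in his figure and is not reproduced here; the code below substitutes Newton's method for solving x³ = 2 (my choice of algorithm, not Peirce's formula), which suffices to show the point he emphasizes: an error committed partway through is repaired simply by continuing the iteration.

```python
def newton_cbrt2(x, steps):
    """Newton's method for x**3 = 2: each step maps x to (2*x + 2/x**2)/3.

    This stands in for Peirce's arithmetical procedure; it is not his formula.
    """
    for _ in range(steps):
        x = (2 * x + 2 / x**2) / 3
    return x

# An ordinary run: start from a rough guess and keep iterating.
approx = newton_cbrt2(1.0, 10)

# Peirce's point: commit an "error" partway through and simply keep going.
# Run three steps, corrupt the intermediate value, then resume iterating.
corrupted = newton_cbrt2(1.0, 3) + 0.5
recovered = newton_cbrt2(corrupted, 10)

# Both runs converge on the cube root of 2; the corrupted run self-corrects.
print(abs(approx - 2 ** (1 / 3)) < 1e-12)
print(abs(recovered - 2 ** (1 / 3)) < 1e-12)
```

  The corrupted intermediate value simply functions as a fresh starting guess, so "keeping right on" erases the error; this is the sense in which such a computation, as Peirce puts it, corrects itself.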

  Following the spirit of Peirce's metaphorical leap from mathematical algorithm to reasoning in general, I propose a broadened notion of iteration, which I will call "epistemic iteration" as opposed to mathematical iteration. Epistemic iteration is a process in which successive stages of knowledge, each building on the preceding one, are created in order to enhance the achievement of certain epistemic goals. It differs crucially from mathematical iteration in that the latter is used to approach the correct answer that is known, or at least in principle knowable, by other means. In epistemic iteration that is not so clearly the case.

  51. The most prominent example is Glymour 1980, in which bootstrapping indicates a particular mode of theory testing, rather than a more substantial process of knowledge creation.

  52. Random House Webster's College Dictionary (New York: Random House, 2000). Similarly, the 2d edition of the Oxford English Dictionary gives the following definition: "Math. The repetition of an operation upon its product … esp. the repeated application of a formula devised to provide a closer approximation to the solution of a given equation when an approximate solution is substituted in the formula, so that a series of successively closer approximations may be obtained."

  53. For an introduction to modern computational methods of iteration, see Press et al. 1988, 49-51, 256-258, etc. See Laudan 1973 for further discussion of the history of ideas about self-correction and the place of iteration in that history.


  Figure 1.6. The iterative algorithm for computing the cube root of 2, illustrated by Charles Sanders Peirce ([1898] 1934, 399). Reprinted by permission of The Belknap Press of Harvard University Press, Copyright © 1934, 1962 by the President and Fellows of Harvard College.

  Another difference to note is that a given process of mathematical iteration relies on a single algorithm to produce all successive approximations from a given initial conjecture, while such a set mechanism is not always available in a process of epistemic iteration. Rather, epistemic iteration is most likely a process of creative evolution; in each step, the later stage is based on the earlier stage, but cannot be deduced from it in any straightforward sense. Each link is based on the principle of respect and the imperative of progress, and the whole chain exhibits innovative progress within a continuous tradition.

  Certain realists would probably insist on having truth as the designated goal of a process like epistemic iteration, but I would prefer to allow a multiplicity of epistemic goals, at least to begin with. There are very few actual cases in which we could be confident that we are approaching "the truth" by epistemic iteration. Other objectives are easier to achieve, and the degree of their achievement is easier to assess. I will discuss that matter in more detail in chapter 5. Meanwhile, for my present purposes it is sufficient to grant that certain values aside from truth can provide the guiding objectives and criteria for iterative progress, whether or not those values contribute ultimately to the achievement of truth.


  Table 1.3. Stages in the iterative development of thermometric standards

  Period and relevant scientists | Standard
  From earliest times | Stage 1: Bodily sensation
  Early seventeenth century: Galileo, etc. | Stage 2: Thermoscopes using the expansion of fluids
  Late seventeenth to mid-eighteenth century: Eschinardi, Renaldini, Celsius, De Luc, etc. | Stage 3a: Numerical thermometers based on freezing and boiling of water as fixed points
  Late eighteenth century: Cavendish, The Royal Society committee, etc. | Stage 3b: Numerical thermometers as above, with the boiling point replaced by the steam point

  Epistemic iteration will be a recurring theme in later chapters and will be discussed in its full generality in chapter 5. For now, I will only discuss the iteration of standards in early thermometry. Table 1.3 summarizes the stages in the iterative development of temperature standards that we have examined in this chapter.

  Stage 1. The first stage in our iterative chain of temperature standards was the bodily sensation of hot and cold. The basic validity of sensation has to be assumed at the outset because we have no other plausible starting place for gaining empirical knowledge. This does not mean that "uneducated" perception is free of theories or assumptions; as numerous developmental and cognitive psychologists since Jean Piaget have stressed, everyday perception is a complex affair that is only learned gradually. Still, it is our starting point in the building of scientific knowledge. After Edmund Husserl (1970, 110-111, §28), we might say that we take the "life world" for granted in the process of constructing the "scientific world."

  Stage 2. Building on the commonly observed correlation between sensations of hot and cold and changes in the volume of fluids, the next standard was created: thermoscopes. Thus, thermoscopes were initially grounded in sensations, but they improved the quality of observations by allowing a more assured and more consistent ordering of a larger range of phenomena by temperature. The coherence and usefulness of thermoscope readings constituted an independent source of validation for thermoscopes, in addition to their initial grounding in sensations.

  Stage 3a. Once thermoscopes were established, they allowed sensible judgments about which phenomena were sufficiently constant in temperature to serve as fixed points. With fixed points and the division of the interval between them, it became possible to construct a numerical scale of temperature, which then constituted the next standard in the iterative chain. Numerical thermometers, when successfully constructed (see "The Achievement of Observability, by Stages" in chapter 2 for a further discussion of the meaning of "success" here), constituted an improvement


  upon thermoscopes because they allowed a true quantification of temperature. By means of numerical thermometers, meaningful calculations involving temperature and heat could be made and thermometric observations became possible subjects for mathematical theorizing. Where such theorizing was successful, that constituted another source of validation for the new numerical thermometric standard.54

  Stage 3b. The boiling point was not as fixed as it had initially appeared (and nor was the freezing point, as I will discuss in "The Case of the Freezing Point"). There were reasonably successful strategies for stabilizing the boiling point, as I will discuss further in "The Defense of Fixity," but one could also try to come up with better fixed points. In fact scientists did find one to replace the boiling point: the steam point, namely the temperature of boiled-off steam, or more precisely, the temperature at which the pressure of saturated steam is equal to the standard atmospheric pressure. The new numerical thermometer using the "steam point" as the upper fixed point constituted an improved temperature standard. Aside from being more fixed than the boiling point, the steam point had the advantage of further theoretical support, both in the temperature-pressure law of saturated steam and in the pressure-balance theory of boiling (see "The Understanding of Boiling" in the narrative). The relation between this new thermometer and the older one employing the boiling point is interesting: although they appeared in succession, they were not successive iterative stages. Rather, they were competing iterative improvements on the thermoscopic standard (stage 2). This can be seen more clearly if we recognize that the steam-point scale (stage 3b) could have been obtained without there having been the boiling-point scale (stage 3a).

  The Defense of Fixity: Plausible Denial and Serendipitous Robustness

  In the last two sections I have discussed how it was possible at all to assess and establish fixed points. Now I want to address the question of how fixed points are actually created, going back to the case of the boiling point in particular. The popular early judgment reached by means of thermoscopes was that the boiling temperature of water was fixed. That turned out to be quite erroneous, as I have

  54. A few qualifications should be made here about the actual historical development of stage 3a, since it was not as neat and tidy as just summarized. My discussion has focused only on numerical thermometers using two fixed points, but there were a number of thermometers using only one fixed point (as explained in note 3) and occasional ones using more than two. The basic philosophical points about fixed points, however, do not depend on the number of fixed points used. Between stage 2 and stage 3a there was a rather long period in which various proposed fixed points were in contention, as already noted. There also seem to have been some attempts to establish fixed points without careful thermoscopic studies; these amounted to trying to jump directly from stage 1 to stage 3 and were generally not successful.


  shown in "The Vexatious Variations of the Boiling Point" and "Superheating and the Mirage of True Ebullition." The interesting irony is that the fixity of the boiling point was most clearly denied by the precise numerical thermometers that were constructed on the very assumption of the fixity of the boiling point. Quite inconveniently, the world turned out to be much messier than scientists would have liked, and they had no choice but to settle on the most plausible candidates for fixed points, and then to defend their fixity as much as possible. If defensible fixed points do not occur naturally, they must be manufactured. I do not, of course, mean that we should simply pretend that certain points are fixed when they are not. What we need to do is find, or create, clearly identifiable material circumstances under which the fixity does hold.55 If that can be done, we can deny the variations plausibly. That is to say, when we say things like "Water always boils at 100°C," that would still be false, but it would be a plausible denial of facts because it would become correct enough once we specify the exceptional circumstances in which it is not true.

  There are various epistemic strategies in this defense of fixity. A few commonsensical ones can be gleaned immediately from the boiling-point episode discussed in the narrative.

  1. If causes of variation can be eliminated easily, eliminate them. From early on it was known that many solid impurities dissolved in water raised its boiling temperature; the sensible remedy there was to use only distilled water for fixing the boiling point. Another well-known cause of variation in the boiling temperature was the variation in atmospheric pressure; one remedy there was to agree on a standard pressure. There were also a host of causes of variation that were not so clearly understood theoretically. However, at the empirical level what to do to eliminate variation was clear: use metal containers instead of glass; if glass is used, do not clean it too drastically, especially not with sulphuric acid; do not purge the dissolved air out of the water; and so on.