  Comparability and the Ontological Principle of Single Value

  Having sharpened our notion of observation and observability, we are now ready to consider again how Regnault solved the greatest problem standing in the way of making numerical temperature observable: the problem of nomic measurement. The solution, as I have already noted, was the criterion of comparability, but now we will be able to reach a deeper understanding of the nature of that solution. Recall the formulation of the problem given in "The Problem of Nomic Measurement." We have a theoretical assumption forming the basis of a measurement technique, in the form of a law that expresses the quantity to be measured, X, as a function of another quantity, Y, which is directly observable: X = f(Y). Then we have a problem of circularity in justifying the form of that function: f cannot be determined without knowing the X-values, but X cannot be determined without knowing f. In the problem of thermometric fluids, the unknown quantity X is temperature, and the directly observable quantity Y is the volume of the thermometric fluid.

  When employing a thermoscope (the "stage 2" standard), the only relation known or assumed between X and Y was that they vary in the same direction; in other words, that the function f is monotonic. The challenge of developing a numerical thermometer (the "stage 3" standard) was solved partly by finding suitable fixed points, but fixed points only allowed the stipulation of numerical values of X at isolated points. There still remained the task of finding the form of f, so that X could be deduced from Y in the whole range of values, not just at the fixed points. The usual practice in making numerical thermometers was to make a scale by dividing up the interval between the fixed points equally, which amounted to conjecturing that f would be a linear function. Each thermometric fluid represented the hypothesis that f was a linear function for that substance. (And for other methods of graduating thermometers, there would have been other corresponding hypotheses.) The problem of nomic measurement consists in the difficulty of finding a sufficiently refined standard to be used for testing those hypotheses.
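
  The linear-graduation conjecture can be made concrete in a few lines. The sketch below is my illustration, not anything drawn from the sources: the function and variable names are invented, and the two fixed points are stipulated at 0° and 100°.

```python
def linear_reading(volume, v_freezing, v_boiling):
    """Temperature read off a thermometer graduated by dividing the
    interval between the two fixed points into equal parts, i.e. under
    the conjecture that X = f(Y) is linear for this particular fluid."""
    return 100.0 * (volume - v_freezing) / (v_boiling - v_freezing)

# Two fluids calibrated at the same fixed points agree at 0 and 100 by
# construction, yet may disagree everywhere in between; the calibration
# itself cannot tell us which fluid's linear hypothesis (if either) is true.
```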

  Regnault succeeded first of all because he recognized the starkness of the epistemic situation more clearly than any of his predecessors: the stage-2 standard was not going to settle the choice of stage-3 standards, and no other reliable standards were available. Seeking the aid of theories was also futile: theories verifiable by the stage-2 standard were useless because they provided no quantitative precision; trying to use theories requiring verification by a stage-3 standard created circularities. The conclusion was that each proposed stage-3 standard had to be judged by its own merits. Comparability was the epistemic virtue that Regnault chose as the criterion for that judgment.

  But why exactly is comparability a virtue? The requirement of comparability only amounts to a demand for self-consistency. It is not a matter of logical consistency, but of what we might call physical consistency. This demand is based on what I have elsewhere called the principle of single value (or, single-valuedness): a real physical property can have no more than one definite value in a given situation.60 As I said in "The Problem of Nomic Measurement," most scientists involved in the debates on thermometric fluids were realists about temperature, in the sense that they believed it to be that sort of a real physical quantity. Therefore they did not object at all to Regnault's application of the principle of single value to temperature.

  60. See Chang 2001a. This principle is reminiscent of one of Brian Ellis's requirements for a scale of measurement. Ellis (1968, 39) criticizes as insufficient S. S. Stevens's definition of measurement: "[M]easurement [is] the assignment of numerals to objects or events according to rule—any rule." Quite sensibly, Ellis (41) stipulates that we have a scale of measurement only if we have "a rule for making numerical assignments" that is "determinative in the sense that, provided sufficient care is exercised the same numerals (or range of numerals) would always be assigned to the same things under the same conditions"; to this he attaches one additional condition, which is that the rule should be "non-degenerate" (to exclude non-informative assignments such as "assign the number 2 to everything").

  It is easy enough to see how this worked out in practical terms, but there remains a philosophical question. What kind of criterion is the principle of single value, and what compels our assent to it? It is not reducible to the logical principle of noncontradiction. It would be nonsensical to say that a given body of gas has a uniform temperature of 15°C and 35°C at once, but that nonsense still falls short of the logical contradiction of saying that its temperature is both 15°C and not 15°C. For an object to have two temperature values at once is absurd because of the physical nature of temperature, not because of logic. Contrast the situation with some nonphysical properties, where one object possessing multiple values in a given situation would not be such an absurdity: a person can have two names, and purely mathematical functions can be multiple-valued. We can imagine a fantasy object that can exist in two places at once, but when it comes to actual physical objects, even quantum mechanics only goes so far as saying that a particle can have non-zero probabilities of detection in multiple positions. In mathematical solutions of physical problems we often obtain multiple values (for the simplest example, consider a physical quantity whose value is given by the equation x² = 1), but we select just one of the solutions by considering the particular physical circumstances (if x in the example is, say, the kinetic energy of a classical particle, then it is easy enough to rule out −1 as a possible value). In short, it is not logic but our basic conception of the physical world that generates our commitment to the principle of single value.

  On the other hand, it is also clear that the principle of single value is not an empirical hypothesis. If someone were to try to support the principle of single value by going around with a measuring instrument and showing that he or she always obtains a single value of a certain quantity at a given time, we would regard it as a waste of time. Worse yet, if someone were to try to refute the principle by pointing to alleged observations of multiple-valued quantities (e.g., that the uniform temperature of this cup of water at this instant is 5° and 10°), our reaction would be total incomprehension. We would have to say that these observations are "not even wrong," and we would feel compelled to engage in a metaphysical discourse to persuade this person that he is not making any sense. Any reports of observations that violate the principle of single value will be rejected as unintelligible; more likely, such absurd representations or interpretations of experience would not even occur to us in the first place. Unlike even the most general empirical statements, this principle is utterly untestable by observation.

  The principle of single value is a prime example of what I have called ontological principles, whose justification is neither by logic nor by experience (Chang 2001a, 11-17). Ontological principles are those assumptions that are commonly regarded as essential features of reality within an epistemic community, which form the basis of intelligibility in any account of reality. The denial of an ontological principle strikes one as more nonsensical than false. But if ontological principles are neither logically provable nor empirically testable, how can we go about establishing their correctness? What would be the grounds of their validity? Ontological principles may be akin to Poincaré's conventions, though I would be hesitant to allow all the things he classified as conventions into the category of ontological principles. Perhaps the closest parallel is the Kantian synthetic a priori; ontological principles are always valid because we are not capable of accepting anything that violates them as an element of reality. However, there is one significant difference between my ontological principles and Kant's synthetic a priori, which is that I do not believe we can claim absolute, universal, and eternal certainty about the correctness of the ontological principles that we hold. It is possible that our ontological principles are false.

  This last admission opens up a major challenge: how can we overcome the uncertainties in our ontological principles? Individuals or epistemic communities may be so steeped in some false ontological beliefs that they would be prejudiced against any theories or experimental results that contravened those beliefs. Given that there is notoriously little agreement in ontological debates, is it possible to prevent the use of ontological principles from degenerating into a relativist morass, each individual or epistemic community freely judging proposed systems of knowledge according to their fickle and speculative ontological "principles"? Is it possible to resolve the disagreements at all, in the absence of any obvious criteria of judgment? In short, since we do not have a guarantee of arriving at anything approaching objective certainty in ontology, would we not be better off giving it up altogether?

  Perhaps—except that by the same lights we should also have to give up the empiricist enterprise of making observations and testing theories on the basis of observations. As already stressed in "The Validation of Standards" in chapter 1, it has been philosophical common sense for centuries that our senses do not give us certainty about anything other than their own impressions. There is no guarantee that human sense organs have any particular aptitude for registering features of the world as they really are. Even if we give up on attaining objectivity in that robust sense and merely aim at intersubjectivity, there are still serious problems. Observations made by different observers differ, and there are no obvious and fail-safe methods for judging whose observations are right. And the same evidence can be interpreted to bear on theories in different ways. All the same, we do not give up the practice of relying on observations as a major criterion for judging other parts of our systems of knowledge. Instead we do our best to improve our observations. Similarly, I believe that we should do our best to improve our ontological principles, rather than giving up the practice of specifying them and using them in evaluating systems of knowledge. If fallibilist empiricism is allowed to roam free, there is no justice in outlawing ontology because it confesses to be fallible.

  Thus, we have arrived at a rather unexpected result. When we consider Regnault's work carefully, what initially seems like the purest possible piece of empiricism turns out to be crucially based on an ontological principle, which can only have a metaphysical justification. What Regnault would have said about that, I am not certain. A difference must be noted, however, between the compulsion to follow an untestable ontological principle and the complacency of relying on a testable but untested empirical hypothesis. The former indicates a fundamental limitation of strict empiricism; the latter has no justification, except for practical expediency in certain circumstances. The adherence to an ontological principle serves a clear goal, that of intelligibility and understanding. It might be imagined that comparability was wanted strictly for practical reasons. However, I believe that we often want consistency for its own sake, or more precisely, for the sake of intelligibility. It is doubtful that differences of a fraction of a degree or even a few degrees in temperature readings around 300°C would have made any appreciable practical difference in any applications in Regnault's time. For practical purposes mercury thermometers could have been, and were, used despite their lack of comparability when judged by Regnault's exacting standards. It was not practicality but metaphysics (or perhaps esthetics) that compelled him to insist that even a slight shortfall in comparability was a crucial consideration in the choice of thermometric fluids.

  Minimalism against Duhemian Holism

  Another important way of appreciating Regnault's achievement is to view it as a solution to the problem of "holism" in theory testing, most commonly located in the work of the French physicist-philosopher Pierre Duhem, who summed it up as follows: "An experiment in physics can never condemn an isolated hypothesis but only a whole theoretical group" ([1906] 1962, sec. 2.6.2, 183). That formidable problem is a general one, but here I will only treat it in the particular context of thermometry.

  Before I can give an analysis of Regnault's work as a solution to this problem of Duhemian holism, some general considerations regarding hypothesis testing are necessary. Take the standard empiricist notion that a hypothesis is tested by comparing its observational consequences with results of actual observations. This is essentially the basic idea of the "hypothetico-deductive" view of theory testing, but I would like to conceptualize it in a slightly different way. What happens in the process just mentioned is the determination of a quantity in two different ways: by deduction from a hypothesis and by observation.

  This reconceptualization of the standard notion of theory testing allows us to see it as a type within a broader category, which I will call "attempted overdetermination," or simply "overdetermination": a method of hypothesis testing in which one makes multiple determinations of a certain quantity, on the basis of a certain set of assumptions. If the multiple determinations agree with each other, that tends to argue for the correctness or usefulness of the set of assumptions used. If there is a disagreement, that tends to argue against the set of assumptions. Overdetermination is a test of physical consistency, based on the principle of single value (discussed in the previous section), which maintains that a real physical quantity cannot have more than one value in a given situation. Overdetermination does not have to be a comparison between a theoretical determination and an empirical determination. It can also be a comparison between two (or more) theoretical determinations or two observational ones. All that matters is that some quantity is determined more than once, though for any testing that we want to call "empirical," at least one of the determinations should be based on observation.61
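
  Schematically (my own illustration, with invented numbers), a test by attempted overdetermination reduces to checking whether several determinations of one quantity agree to within some tolerance:

```python
def consistent(determinations, tolerance):
    """An attempted overdetermination succeeds when all determinations of a
    single quantity agree to within the stated tolerance; a failure indicts
    the whole set of assumptions behind them, not any one member."""
    return max(determinations) - min(determinations) <= tolerance

# Say one determination is observational and one theoretical (values invented):
print(consistent([99.7, 100.2], tolerance=1.0))  # True: the assumption set survives
print(consistent([99.7, 104.9], tolerance=1.0))  # False: something in the set is wrong
```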

  Let us now see how this notion of testing by overdetermination applies to the test of thermometers. In the search for the "real" scale of temperature, there was a basic nonobservational hypothesis, of the following form: there is an objectively existing property called temperature, and its values are correctly given by thermometer X (or, thermometer-type X). More particularly, in a typical situation, there was a nonobservational hypothesis that a given thermometric fluid expanded uniformly (or, linearly) with temperature.

  De Luc's method of mixtures can be understood as a test by overdetermination, as follows: determine the temperature of the mixture first by calculation, and then by measuring it with the thermometer under consideration. Are the results the same? Clearly not for thermometers of spirit or the other liquids, but much better for mercury. That is to say, the attempted overdetermination clearly failed for the set of hypotheses that included the correctness of the spirit thermometer, and not so severely for the other set that included the correctness of the mercury thermometer instead. That was a nice result, but De Luc's test was seriously weakened by the holism problem because he had to use other nonobservational hypotheses in addition to the main hypothesis he wanted to test. The determination of the final temperature by calculation could not be made without relying on at least two further nonobservational hypotheses: the conservation of heat and the constancy of the specific heat of water. Anyone wishing to defend the spirit thermometer could have "redirected the falsification" at one of those auxiliary assumptions. No one defended the spirit thermometer in that way to my knowledge, but people did point out De Luc's use of auxiliary hypotheses in order to counter his positive argument for mercury, as discussed in "Caloric Theories against the Method of Mixtures." Dalton was one of those who argued that De Luc's successful overdetermination in the mercury case was spurious and accidental: the specific heat of water was not constant; mercury did not expand linearly; and the two errors, on Dalton's account, must have cancelled each other out.

  61. It is in the end pointless to insist on having one theoretical and one observational determination each when we have learned that the line between the theoretical and the observational is hardly tenable. This is, as van Fraassen stressed, different from saying that the observable-unobservable distinction is not cogent. "Theoretical" is not synonymous with "unobservable."
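
  To see how the auxiliary hypotheses enter De Luc's calculation, here is a minimal sketch (mine, with invented masses and readings). The mixing formula below assumes exactly the two hypotheses named above: heat is conserved in mixing, and the specific heat of water is constant.

```python
def predicted_mixture_temperature(m1, t1, m2, t2):
    """Calculated temperature of a mixture of two water samples, assuming
    conservation of heat and a constant specific heat of water."""
    return (m1 * t1 + m2 * t2) / (m1 + m2)

# Equal masses of water at the two fixed points: the calculation says 50.
calculated = predicted_mixture_temperature(1.0, 0.0, 1.0, 100.0)  # 50.0
# A spirit thermometer in such a mixture read noticeably off the calculated
# value, while mercury came much closer. Either way, a mismatch could be
# blamed on one of the two auxiliary hypotheses, not on the thermometer.
```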

  How does Regnault look? The beauty of Regnault's work on thermometry lies in the fact that he managed to arrange overdetermination without recourse to any significant additional hypotheses concerning heat and temperature. Regnault realized that there was already enough in the basic hypothesis itself to support overdetermination. A given temperature could be overdetermined by measuring it with different thermometers of the same type; that overdetermination did not need to involve any uncertain extra assumptions. Regnault's work exemplifies what I will call the strategy of "minimalist overdetermination" (or "minimalism" for short). The heart of minimalism is the removal of all possible extraneous (or auxiliary) nonobservational hypotheses. This is not a positivist aspiration to remove all nonobservational hypotheses in general. Rather, minimalism is a realist strategy that builds or isolates a compact system of nonobservational hypotheses that can be tested clearly. The art in the practice of minimalism lies in the ability to contrive overdetermined situations on the basis of as little as possible; that is what Regnault was so methodically good at.
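
  A minimal sketch of the minimalist strategy (entirely my own construction, with invented readings): several thermometers of one type read the same bath, and the only hypothesis on trial is that the type correctly reads a single-valued temperature.

```python
def spread(readings):
    """Disagreement (max - min) among same-type thermometers reading one
    bath: the quantity that Regnault's comparability criterion requires
    to stay negligible over the instrument's whole range."""
    return max(readings) - min(readings)

# Invented readings from three thermometers of one type, in two baths:
print(spread([40.0, 40.1, 40.0]))     # small spread: comparable here
print(spread([300.2, 301.5, 304.3]))  # large spread: the type is condemned
```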

  Minimalism can ameliorate the holism problem, regardless of whether the outcome of the test is positive or negative. Generally the failure of overdetermination becomes a more powerful indictment of the targeted hypothesis when there are fewer other assumptions that could be blamed. If auxiliary hypotheses interfere with the logic of falsification, one solution is to get rid of them altogether, rather than agonizing about which ones should be trusted over which. This Regnault managed beautifully. When there was a failure of overdetermination in Regnault's experiment, the blame could be placed squarely on the thermometer being tested. If that conclusion was to be avoided, there were only two options available: either give up the notion of single-valued temperature altogether or resort to extraordinary skeptical moves such as the questioning of the experimenter's ability to read simple gauges correctly. No one pursued either of these two options, so Regnault's condemnation of the mercury thermometer stood unchallenged.