  14. For the mathematically informed reader, there is a key technical assumption underlying this discussion (as well as most treatments of statistical mechanics in textbooks and in the research literature). Given any macrostate, there are compatible microstates that will evolve toward lower-entropy configurations. For example, consider the time-reversed version of any unfolding that yielded a given microstate starting from an earlier lower-entropy configuration. Such a “time-reversed” microstate would evolve toward lower entropy. Generally, we categorize such microstates as “rare,” or “highly tuned.” Mathematically, such categorization requires the specification of a measure on the space of configurations. In familiar situations, using the uniform measure on such a space does indeed render entropy-decreasing initial conditions “rare”—that is, of small measure. However, according to a measure that was chosen to peak around such entropy-decreasing initial configurations, they would, by design, not be rare. As far as we know, the choice of measure is an empirical one; for the kinds of systems we encounter in everyday life, the uniform measure yields predictions that agree with observations, and so is the measure we invoke. But it is important to note that the choice of measure is justified by experiment and observation. When we consider exotic situations (such as the early universe) for which we lack analogous data leading us to a particular choice of measure, we need to acknowledge that our intuitions about “rare” or “generic” do not have the same empirical basis.
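
  To put the role of the measure in symbols, here is a minimal sketch (my notation, not a full treatment): let \Gamma_M denote the set of microstates compatible with a given macrostate M, and let B \subset \Gamma_M be the subset of those microstates that evolve toward lower entropy. Relative to a measure \mu on the space of configurations, declaring such microstates “rare” is the statement

  \[
  \frac{\mu(B)}{\mu(\Gamma_M)} \ll 1 .
  \]

  For everyday systems the uniform measure does make this ratio minuscule, while a measure deliberately peaked on B would make it of order one; the label “rare” is therefore only as secure as the empirical case for the measure itself.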

  15. There are a few relevant points, glossed over in this paragraph, that affect the meaning of a “maximum entropy” state when applied to the universe. First, in this chapter we are not taking into account the role of gravity. In chapter 3, we will. And as we will see, gravity has a profound impact on the nature of high-entropy particle configurations. In fact, while it won’t be our focus, in a given finite volume of space the maximum entropy configuration is a black hole—an object deeply dependent on gravity—that completely fills the spatial volume (for details, see, for example, my book The Fabric of the Cosmos, chapters 6 and 16). Second, if we consider arbitrarily large regions of space—even infinitely large—the highest entropy configurations of a given amount of matter and energy are those in which the constituent particles (matter and/or radiation) are uniformly distributed over an ever-larger volume. Indeed, black holes, as we will discuss in chapter 10, ultimately evaporate (through a process discovered by Stephen Hawking), yielding higher-entropy configurations in which particles are increasingly spread out. Third, for the purpose of this section, the only fact we need is that the entropy currently present in any given volume of space is not at its maximum value. If that volume contained, say, the room you are now inhabiting, entropy would increase if all the particles making up you, your furniture, and any other of the room’s material structures were to collapse into a small black hole, which would subsequently evaporate, yielding particles that would spread through an even larger volume of space. The very existence of interesting material structures—stars, planets, life, and so on—therefore implies that entropy is lower than what it potentially could be. It is such special, comparatively low-entropy configurations that call out for an explanation of how they arose. In the next chapter, we will take up this challenge.
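
  For reference, the entropy of a black hole is quantified by the Bekenstein–Hawking formula, which ties the entropy to the area A of the black hole’s event horizon:

  \[
  S_{\rm BH} = \frac{k_B\, c^3}{4\, G\, \hbar}\, A .
  \]

  Since the horizon area of a Schwarzschild black hole grows as the square of its mass, sweeping a room’s worth of particles into a small black hole yields vastly more entropy than the original material structures possessed, which is the comparison invoked above.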

  16. For the particularly diligent reader, there is one additional detail worth spelling out. When the steam pushes on the piston, it expends some of the energy it absorbed from the fuel, but in the process the steam does not relinquish any of its entropy to the piston (assuming that the piston has the same temperature as the steam). After all, whether the piston is here or, having been pushed, is a short distance from here has no impact on its internal order or disorder; its entropy is unchanged. And with no entropy transferred to the piston, the entropy remains fully within the steam itself. This means that as the piston is reset to its original position, ready for the next thrust, the steam must somehow expel all the excess entropy it is harboring. This is accomplished, as emphasized in the chapter, by the steam engine expelling heat to its surroundings.
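
  In symbols, this bookkeeping is the Clausius relation: heat Q exchanged at temperature T carries entropy Q/T. Over one complete cycle the steam returns to its initial state, so its entropy change is zero, and the second law then requires the entropy dumped into the cold surroundings to at least match the entropy absorbed from the hot fuel:

  \[
  \frac{Q_{\rm out}}{T_{\rm cold}} \;\ge\; \frac{Q_{\rm in}}{T_{\rm hot}},
  \qquad\text{equivalently}\qquad
  \eta \;=\; 1 - \frac{Q_{\rm out}}{Q_{\rm in}} \;\le\; 1 - \frac{T_{\rm cold}}{T_{\rm hot}} ,
  \]

  which is the familiar Carnot bound on the engine’s efficiency \eta. The expelled heat Q_{\rm out} is precisely the vehicle by which the steam sheds its excess entropy.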

  17. Bertrand Russell, Why I Am Not a Christian (New York: Simon & Schuster, 1957), 107.

  Chapter 3: Origins and Entropy

  1. Georges Lemaître, “Rencontres avec Einstein,” Revue des questions scientifiques 129 (1958): 129–32.

  2. The full story of Einstein’s conversion to an expanding universe involved two factors. First, Arthur Eddington showed mathematically that Einstein’s earlier proposal of a static universe suffered from a technical flaw: the solution was unstable, meaning that if the expanse of space were nudged to expand slightly, it would continue expanding; if nudged to contract slightly, it would continue contracting. Second, the observational case, as discussed in this chapter, made it increasingly clear that space is not static. The combination of the two realizations convinced Einstein to drop the notion of a static universe (although some have argued that the theoretical considerations may have had the more significant influence). For details of this history, see Harry Nussbaumer, “Einstein’s conversion from his static to an expanding universe,” European Physical Journal H 39 (2014): 37–62.

  3. Alan H. Guth, “Inflationary universe: A possible solution to the horizon and flatness problems,” Physical Review D 23 (1981): 347. The technical term for the “cosmic fuel” is a scalar field. Unlike the more familiar electric and magnetic fields, which provide a vector at each location in space (the magnitude and direction of the electric or magnetic field at that location), a scalar field provides only a single number at each location in space (numbers from which the field’s energy and pressure can be determined). Note that Guth’s paper and many subsequent treatments emphasize the role of inflation in addressing a collection of cosmological issues that had previously stymied researchers—the monopole problem, the horizon problem, and the flatness problem being the most prominent. For an accessible and illuminating discussion of these issues, see Alan Guth, The Inflationary Universe (New York: Basic Books, 1998). Following Guth, I like to motivate inflation by raising the more intuitive problem of identifying the outward push that drove the big bang’s spatial expansion.
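
  For the mathematically inclined, the essential property of such a scalar field is easy to state. For a field \varphi that is uniform across space, with potential energy V(\varphi), the energy density and pressure are (a standard result, quoted without derivation)

  \[
  \rho = \tfrac{1}{2}\dot{\varphi}^2 + V(\varphi),
  \qquad
  p = \tfrac{1}{2}\dot{\varphi}^2 - V(\varphi) .
  \]

  When the field changes slowly, so that \dot{\varphi}^2 \ll V(\varphi), the pressure is nearly the negative of the energy density, p \approx -\rho, and in general relativity such negative pressure generates repulsive gravity: the outward push that drives inflationary expansion.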

  4. The cooling I refer to takes place after the inflationary burst has concluded and the universe has entered a phase of less rapid but still significant spatial expansion. For simplicity, I have left out some intermediate steps in the cosmological unfolding. The early universe cooled because much of the energy it contained was carried by electromagnetic waves, and such waves stretch as space expands. This elongation of the electromagnetic waves—the so-called redshifting of the radiation—decreases their energy and lowers their overall temperature. Note, though, that even as the temperature falls, the overall entropy increases because of the expanding volume of space.
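
  The scaling behind this cooling can be stated in one line. With a(t) denoting the scale factor of the expanding universe, wavelengths of radiation stretch in proportion to a(t), and the temperature of thermal radiation falls in inverse proportion:

  \[
  \lambda \propto a(t), \qquad T \propto \frac{1}{a(t)} .
  \]

  Double the linear size of space and the radiation’s temperature halves.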

  5. There is a minority perspective that does attribute the fog to an inherent quantum limitation on the precision of measurements and not to a fundamentally blurry reality. In this approach—usually called “Bohmian mechanics,” after physicist David Bohm, but sometimes referred to as the “de Broglie–Bohm theory,” including attribution to Nobel laureate Louis de Broglie—particles retain sharp and definite trajectories. The trajectories are different from those predicted by classical physics (there is an additional quantum force that acts on particles as they move), but to use the language in the chapter, such trajectories could be drawn with a sharp quill. The uncertainty and fuzziness of the more traditional formulation of quantum mechanics shows up as statistical uncertainty regarding the initial conditions of any given particle. The difference between the two perspectives, while essential to the picture of reality each theory paints, has virtually no impact on quantitative predictions.
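
  For concreteness, the sharp trajectories of Bohmian mechanics are determined by what is called the guidance equation. For a single spinless particle of mass m with wavefunction \psi, the particle’s position Q(t) evolves according to (stated without derivation)

  \[
  \frac{dQ}{dt} \;=\; \frac{\hbar}{m}\,\mathrm{Im}\!\left(\frac{\nabla \psi}{\psi}\right)\bigg|_{x=Q(t)} ,
  \]

  and the statistical predictions of standard quantum mechanics are recovered by assuming that the particle’s initial position is distributed according to |\psi|^2.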

  6. Inflationary cosmology is a framework of theories—as opposed to a specific theory—based on the premise that during an early phase of its development the universe underwent a brief period of rapid accelerated expansion. The precise manner in which this phase arose and the precise details of its unfolding vary from one mathematical formulation to another. The simplest versions are in tension with ever-more-precise observational data, which has shifted focus to somewhat more complex versions of the inflationary theory. Detractors argue that the more complex versions are less convincing and that, moreover, these versions demonstrate that the inflationary paradigm is too flexible for data to ever rule it out. Proponents argue that all we are witnessing is the natural progression of science: we continually adjust our theories to bring them in line with the most precise information provided by observational measurements and mathematical considerations. More generally, and in more technical terms, a statement widely embraced by cosmologists is that the universe experienced a phase during which the size of the comoving horizon decreased. What is less clear is whether that phase is correctly described by inflationary cosmology, in which the dynamics is driven by the uniform energy that a scalar field supplies throughout space (see note 3 of this chapter), as I have described, or whether such a phase may have arisen through a different mechanism (bouncing cosmologies, brane inflation, colliding brane worlds, and variable-speed-of-light theories, among others that physicists have proposed). In chapter 10, we will briefly discuss the possibility of a bouncing cosmology, as developed by Paul Steinhardt, Neil Turok, and various of their collaborators, in which the universe undergoes numerous cycles of cosmological evolution.
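
  In symbols, the widely embraced statement is this: with a(t) the scale factor and H = \dot{a}/a the Hubble parameter, the comoving Hubble radius is 1/(aH) = 1/\dot{a}, and the claim is that it shrank during an early phase:

  \[
  \frac{d}{dt}\left(\frac{1}{aH}\right) < 0
  \quad\Longleftrightarrow\quad
  \ddot{a} > 0 .
  \]

  That is, a shrinking comoving Hubble radius is equivalent to accelerated expansion, however that acceleration is produced.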

  7. For the particularly diligent reader, let me address an important point shadowing the discussion. If all you know about a given physical system is that it has less than the maximum available entropy, then the second law of thermodynamics allows you to draw not one but two conclusions: the most likely evolution of the system toward the future will increase its entropy, and the most likely evolution of the system toward the past will also increase its entropy. Such is the burden of time-symmetric laws—equations that operate in exactly the same way whether evolving today’s state toward the future or toward the past. The challenge is that the higher-entropy past to which such considerations lead is incompatible with the lower-entropy past attested to by memory and records. (We remember partially melted ice cubes as previously being less melted, thus having lower entropy, not more melted, which would be higher entropy.) More pointedly, a high-entropy past would undermine our confidence in the very laws of physics because such a past would not include the experiments and observations that support the laws themselves. To avoid such a loss of confidence in our understanding, we must enforce a low-entropy past. Generally, we do so by introducing a new assumption, one named the past hypothesis by philosopher David Albert, which declares that entropy was anchored at a low value near the big bang and has on average been growing ever since. This is the approach we have implicitly taken in this chapter. In chapter 10, we will explicitly analyze the unlikely but conceivable possibility of a low-entropy state emerging from a previous high-entropy configuration. For background and more details, see chapter 7 of The Fabric of the Cosmos.

  8. Mathematical descriptions of entropy make this precise: within any region, there are many more ways for the value of a field to vary (higher here, lower there, much lower way over there, and so on) than there are ways for it to be uniform (same value at every location), and thus the required conditions have low entropy. However, there is a hidden technical assumption that is important to call out. For ease, I will use classical language, but the considerations have a direct translation to quantum physics. In the microworld, no configuration of particles or fields is fundamentally singled out over any other and so we generally deem each to be as likely as any other. But this is an assumption that relies on what philosophers call the principle of indifference. With no a priori evidence distinguishing one microscopic configuration from another, we assign them equal probabilities of being realized. When we shift our focus to the macroworld, the likelihood of one macrostate versus another is then determined by the ratio of the number of microstates that yield each. If there are twice as many microstates that yield a particular macrostate compared to those that yield another, that macrostate is twice as likely to occur.

  Notice, though, that fundamentally, the justification for the principle of indifference must be empirically based. Indeed, common experience confirms the validity of a multitude of uses, implicit though they may be, of the principle of indifference. Take our example of tossed pennies. By assuming that each “microstate” of the coins (a state specified by listing each coin’s disposition, such as coin 1 is heads, coin 2 is tails, coin 3 is tails, and so on) is as likely as any other, we conclude that those “macroscopic” arrangements (states specified only by giving the overall number of heads and tails, not the disposition of individual coins) that can be realized by many microstates are more likely. When we toss the coins, this assumption is empirically confirmed by the rarity of those outcomes that can be realized by only a small number of microstates (such as all heads) and the ubiquity of those that can be realized by a large number of microstates (such as half heads and half tails).
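
  A few lines of code make the rarity quantitative. Here is a minimal sketch in Python (the setup and names are mine, for illustration) that counts the microstates for 100 tossed coins, assuming, per the principle of indifference, that each microstate is equally likely:

    import math

    N = 100                            # number of tossed coins
    total = 2 ** N                     # microstates, each assumed equally likely

    all_heads = math.comb(N, N)        # 1 microstate realizes "all heads"
    half_heads = math.comb(N, N // 2)  # ~1.0e29 microstates realize "half heads"

    print(f"P(all heads)  = {all_heads / total:.1e}")   # ~7.9e-31
    print(f"P(half heads) = {half_heads / total:.1e}")  # ~8.0e-02

  The half-and-half macrostate is realized by roughly 10^29 times as many microstates as the all-heads macrostate, matching the everyday observation that one outcome is ubiquitous and the other essentially never occurs.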

  The relevance to our cosmological discussion is that when we say that a uniform patch of inflaton field is “unlikely,” we are similarly invoking the principle of indifference. We are implicitly assuming that each possible microscopic configuration of the field (the field’s precise value at every location) is as likely as any other, so, again, the likelihood of a given macroscopic configuration is proportional to the number of microstates that realize it. However, in contrast to the case of tossed pennies, we have no empirical evidence to support this assumption. The fact that it seems reasonable is based on our experience in the everyday macroscopic world, where the principle of indifference is supported by observation. But for the cosmological unfolding, we are privy to only a single run of the experiment. A hard-nosed empirical approach would conclude that however special some configurations may seem based on the principle of indifference, if they lead to the universe we observe, then they are singled out and, as a class, deserve to be called not just “likely” but “definite” (subject to the usual provisional nature of all scientific explanations). Mathematically, such a shift in what we call likely and unlikely is known as a change in the measure over configuration space (see chapter 2, note 14). The initial measure, assigning equal probabilities to each possible configuration, is called a “flat” measure. Observations can thus motivate the introduction of a “non-flat” measure that singles out certain classes of configurations as more probable.

  Physicists are generally unsatisfied with such an approach. Introducing a measure over a space of configurations to ensure the greatest weight is given to those that lead to the world as we know it strikes physicists as “unnatural.” Physicists seek a fundamental, first-principles, mathematical structure that will yield such a measure as output as opposed to including it as part of the input. Important issues are whether this is asking for too much and whether success would simply shift the question one step further back to the implicit assumptions underlying any first-principles approach. These are not nitpicking concerns. Much of the past thirty years of theoretical work in particle physics has been aimed at addressing issues of fine-tuning in our most refined theories (fine-tuning of the Higgs field in the standard model of particle physics; fine-tuning required to address the horizon and flatness problems in standard big bang cosmology). To be sure, such research has led to profound insights into both particle physics and cosmology, but might there come a point when we simply have to accept certain features of the world as given, without a deeper explanation? I like to think that the answer is no, as do a great many of my colleagues. But there is no guarantee that this will be the case.

  9. Andrei Linde, personal communication, July 15, 2019. Linde’s preferred approach is for the inflationary phase to be initiated by a quantum-tunneling event from a realm of all possible geometries and fields, one in which the very concepts of time and temperature may not yet have meaning. By judiciously using aspects of quantum formalism, Linde has argued that the quantum creation of conditions leading to inflationary expansion may well be a common process in the early universe that suffers from no quantum suppression.

  10. It is natural to think that the more powerful a telescope (the larger the dish, the greater the size of the mirror, and so on), the more distant the objects it will be able to resolve. But there is a limit. If an object is so distant that any light it has emitted since its birth would not yet have had sufficient time to reach us, then regardless of the equipment we use, we will be unable to see it. We say that such objects lie beyond our cosmic horizon, a concept that will play a particularly important part in our discussion of the far future in chapters 9 and 10. In inflationary cosmology, space expands so rapidly that surrounding regions are indeed driven beyond our cosmic horizon.
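
  One standard way to quantify this limit is the particle horizon: the maximum distance from which light emitted at the earliest moments could have reached us by time t,

  \[
  d_{\rm hor}(t) \;=\; a(t) \int_0^{t} \frac{c\, dt'}{a(t')} ,
  \]

  where a(t) is the scale factor describing the expansion of space. Objects whose distance exceeds d_{\rm hor} lie beyond our cosmic horizon, no matter how powerful the telescope.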

  11. Based on indirect evidence (the motion of stars and galaxies), there is wide consensus that space is suffused with particles of dark matter—particles that exert a gravitational force but neither absorb nor emit light. But because searches for dark matter particles have so far come up empty-handed, some researchers have suggested alternatives to dark matter in which the observations are explained through modifications of the gravitational force law. With the continued failure of numerous ongoing experiments to directly detect particles of dark matter, these alternative theories are attracting increased attention.

 
