Collected Short Fiction of Greg Egan, page 180
However, it does offer a useful way to get an approximate idea of the effect on relative ageing of going near a massive object. If you start out in a mother ship far from the object, descend in a scout ship to a certain r coordinate, and then return, you will have been struck (at some point) by all of the same wavefronts of light from the stars as the people who stayed behind. But at each r value, Equation (26) implies that the time you would have measured between wavefronts was different by a factor of √(1–2M/r) from that measured on the mother ship. If you spent a large part of your journey hovering at, say, r=3.125M, to a good approximation you’ll have experienced a total elapsed time only 60% as much as the other travellers.
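The 60% figure is easy to verify. Here is a minimal Python sketch (an illustration, not part of the article) that evaluates the factor √(1–2M/r) for an observer hovering at a given r coordinate, in geometric units where the r coordinate is expressed as a multiple of the mass M:

```python
import math

def hover_time_factor(r_over_M):
    """Ratio of proper time measured while hovering at radius r to the
    time measured far from the mass, in the Schwarzschild geometry.
    r_over_M is the r coordinate in units of the mass M."""
    return math.sqrt(1.0 - 2.0 / r_over_M)

# The example from the text: hovering at r = 3.125M gives
# sqrt(1 - 2/3.125) = sqrt(0.36) = 0.6
print(hover_time_factor(3.125))

# The factor shrinks towards zero as r approaches the horizon at r = 2M
for r in (10.0, 4.0, 2.5, 2.01):
    print(r, hover_time_factor(r))
```

Since 2/3.125 is exactly 0.64, the hovering traveller's clock runs at exactly 60% of the rate of a clock far from the mass.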
For values of r smaller than 2M, g(∂r,∂r) is negative, meaning that the coordinate vector ∂r has switched from being spacelike to being timelike! Similarly, g(∂t,∂t) is positive, showing that ∂t has become spacelike. The distance 2M in geometric units is known as the Schwarzschild radius for a given mass, and an object that becomes compressed to within its Schwarzschild radius collapses into a black hole. You don’t need to know the “distance to the centre” of such an object: because r is defined in terms of surface area, any non-rotating spherically symmetric object whose surface area is less than or equal to 16πM² is doomed to become a black hole.
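In conventional units the geometric distance 2M becomes 2GM/c². The following sketch (mine, using standard SI values for the constants) computes the Schwarzschild radius and the critical surface area for the Sun:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # mass of the Sun, kg

# The geometric distance 2M, converted to conventional units
r_s = 2 * G * M_sun / c**2
print(r_s)  # about 2.95e3 metres: the Sun would have to fit inside ~3 km

# The critical surface area 16*pi*M^2 is just 4*pi*r_s^2
area = 4 * math.pi * r_s**2
print(area)  # about 1.1e8 square metres
```

The Sun is in no danger: its actual radius is some 235,000 times larger than this.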
Why is this unavoidable? The fact that ∂r becomes timelike for r less than 2M means that “motion” in the r direction becomes the same as motion in any other timelike direction. We have no choice about the fact that our world lines run into the future, so any object that crosses the onion layer at r=2M, the event horizon, must have a world line that runs in the direction of decreasing r. That’s the definition of “the future” within the event horizon.
Couldn’t you change the direction of the future by changing your velocity? Yes, but not enough. Figure 9 shows the light cones in the spacetime around a black hole, the cones traced out by all the light rays that could be sent, in every possible direction, from various events. Your world line can’t cross the light cones — that would mean travelling faster than light. Once you touch the horizon, the light cones all lead inwards. There is no escape.
In Figure 9, we’ve adopted a new coordinate, t*, to take the place of t. If you follow the grid lines of constant t inwards, they never actually cross the horizon; this makes t useless for labelling any event that lies on the horizon. The new coordinate t* is described in Equation (27) in terms of r and t, and the metric is restated in terms of r, φ, θ and t* in Equation (28).
t* = t + 2M ln |r/2M – 1| (27)
g = –(1–2M/r) dt*⊗dt* + (2M/r) (dr⊗dt* + dt*⊗dr)
    + (1+2M/r) dr⊗dr
    + r²(cos θ)² dφ⊗dφ + r² dθ⊗dθ (28)
Equation (28) describes exactly the same geometry as Equation (23); it just does so in terms of different coordinate lines “painted onto” spacetime. The coordinate t* has been chosen so that incoming light rays appear at 45° in Figure 9; in other words, it makes P=E(–∂r+∂t*) a null vector, as you can easily check by feeding this into Equation (28) to find g(P,P). But every choice of coordinates in curved spacetime is something of a compromise, just like every map projection showing the curved surface of the Earth on flat paper. Though r and t* are drawn at right angles in Figure 9, g(∂r,∂t*) is not zero, so the two directions aren’t really perpendicular.
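The check suggested above can be done numerically. In this sketch (mine, not the article's), P = E(–∂r+∂t*) has components (E, –E) in the t* and r directions and no angular part, so only three metric coefficients from Equation (28) are needed:

```python
def g_PP(M, r, E):
    """g(P,P) for P = E(-dr + dt*) under the metric of Equation (28).
    P has components (P_tstar, P_r) = (E, -E) and no angular part."""
    g_tt = -(1 - 2*M/r)   # coefficient of dt* (x) dt*
    g_tr = 2*M/r          # coefficient of each cross term dr (x) dt*
    g_rr = 1 + 2*M/r      # coefficient of dr (x) dr
    P_t, P_r = E, -E
    return g_tt*P_t**2 + 2*g_tr*P_t*P_r + g_rr*P_r**2

# The terms cancel exactly for any mass, radius and E -- P is a null
# vector everywhere, inside and outside the horizon at r = 2M
for r in (1.0, 2.0, 3.0, 10.0):
    print(r, g_PP(M=1.0, r=r, E=2.0))
```

Expanding by hand gives E²[–(1–2M/r) – 2(2M/r) + (1+2M/r)], and the terms in 2M/r cancel the constants, leaving zero.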
Though we’ve been taking the Schwarzschild geometry as fixed, unaffected by whatever’s travelling through it, it turns out that it’s only stable outside the event horizon. The presence of even a small amount of matter falling into a black hole would alter the geometry inside the horizon — though this would probably only make the whole experience of being there even more violent. There was once considerable speculation about black holes forming various kinds of wormholes connected to other regions of space, but most relativists now consider this impossible. Everything that crosses the horizon will eventually be torn apart — crushed in two directions and stretched in the third, like the falling cloud of space junk we considered earlier — then the remnants will hit the singularity at r=0. General relativity predicts infinite spacetime curvature there, but the true nature of the singularity will depend on the details of quantum gravity, a discipline still in its infancy.
Further reading: Spacetime Physics by E.F. Taylor and J.A. Wheeler (W.H. Freeman, 1966) is an excellent introduction to special relativity. Gravitation by C.W. Misner, K.S. Thorne and J.A. Wheeler (W.H. Freeman, 1973) is the Bible of general relativity, with a detailed treatment of almost every aspect of the subject. Black Holes and Time Warps: Einstein’s Outrageous Legacy by Kip Thorne (Macmillan, 1995) is a non-mathematical account of general relativity, with a wealth of fascinating biographical and historical detail on the subject’s development.
4: Quantum Mechanics
The Birth of Quantum Mechanics
Complex Numbers
Wave Mechanics
Matrix Mechanics
The Uncertainty Principle
The Action Principle
· · · · ·
The first three articles in this series dealt with special and general relativity, the two great twentieth-century theories of the geometry of spacetime and its relationship with matter and energy. This article will describe the ideas behind a second, simultaneous revolution in physics, one that has had even more profound philosophical and technological consequences: quantum mechanics.
The Birth of Quantum Mechanics
In the second half of the nineteenth century, the Newtonian description of the dynamics of material objects was supplemented by an equally successful theory encompassing all of electrostatics, magnetism and optics. The physicist James Clerk Maxwell brought together a number of disparate laws that had been found to govern quite specific phenomena — such as the force between two motionless electric charges — into a unified description of an electromagnetic field. Light, and most other forms of radiation, were seen to consist of oscillations in this field, or electromagnetic waves. This confirmation of the wave-like nature of light made sense of many long-standing observations, including the phenomenon of interference: if you allow light of a single wavelength to travel through two adjacent narrow slits in a barrier and then recombine on a screen, it produces patterns of dark and light stripes. Since the difference in the time it takes for light waves from the two slits to reach the screen varies from place to place, the waves shift in and out of phase with each other, resulting in varying degrees of constructive interference (where the contributions to the field from both slits point in the same direction), and destructive interference (where they point in opposite directions).
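The interference argument can be made concrete with a few lines of code. In this sketch (an illustration, not from the article) the contributions from the two slits are modelled as equal-amplitude oscillations whose phase difference is set by the difference in travel time; the combined intensity swings between four times the single-slit value and zero:

```python
import math

def combined_intensity(delta_phi):
    """Intensity of two superposed unit-amplitude waves whose phase
    difference is delta_phi radians (one full cycle is 2*pi)."""
    # Add the two contributions as phasors, then square the magnitude
    re = math.cos(0.0) + math.cos(delta_phi)
    im = math.sin(0.0) + math.sin(delta_phi)
    return re*re + im*im

print(combined_intensity(0.0))       # in phase: 4.0 (constructive, a bright stripe)
print(combined_intensity(math.pi))   # half a cycle out: ~0 (destructive, a dark stripe)
```

As the phase difference sweeps smoothly across the screen, the intensity cycles between these extremes, producing the stripes.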
Newtonian dynamics and Maxwellian electrodynamics cut a wide swath through the scientific problems of the day. However, by the end of the nineteenth century a number of serious discrepancies had been found between experimental results and predictions based on these two theories. Newtonian physics was soon to be superseded by special relativity, but the most glaring problems had nothing to do with the motion of objects at high velocities, so the explanation had to lie in another direction entirely.
One of the biggest puzzles involved the spectrum of radiation emitted by hot objects: thermal radiation. This is visible to the naked eye when, for example, the tungsten wire in a light bulb becomes white hot. There’s an idealised class of objects for which this effect is particularly easy to analyse: if an object is a perfect absorber and emitter of electromagnetic waves across the entire spectrum, its thermal radiation should depend solely on its temperature, rather than any idiosyncratic properties of the stuff from which it’s made. Physicists call this a black body, since it should appear black to the naked eye at room temperature. The cavity of a furnace containing nothing but the thermal radiation from its heated walls, with a tiny hole through which radiation can escape to be observed, serves as a good approximation to a black body, both theoretically and experimentally, so black body thermal radiation is also known as cavity radiation.
Maxwell’s theory suggested that the electromagnetic field inside a cavity should be treated as something akin to the three-dimensional equivalent of a piano string being bashed at random, simultaneously vibrating with every possible harmonic. A piano string has evenly spaced harmonics, say 500 Hz, 1000 Hz, 1500 Hz, and so on, which occur when an exact number of half-wavelengths fit the length of the string; the fact that the ends of the string are fixed prevents other frequencies being produced. An electromagnetic field in a three-dimensional cavity is subject to similar boundary conditions, but unlike a piano string the field’s vibrations are free to point in different directions. For example, the field in a cubical cavity might vibrate in such a way that 5, 7 and 4 half-wavelengths span the cavity’s width, breadth and height respectively, because of the way the waves are oriented with respect to the walls. But waves of exactly the same frequency, oriented differently, would fit just as well with 4, 5 and 7 half-wavelengths spanning the same three dimensions.
This makes the situation more complicated than it is for a piano string, but it’s still not too hard to count the modes available to the field: the number of distinct ways in which it can vibrate. Figure 2 isn’t a drawing of a furnace cavity; rather, each point here represents a different mode, with the x, y and z coordinates of the point giving the number of half-wavelengths that fit across the width, breadth, and height of the cavity. The more tightly packed the waves are, the shorter their wavelength and the greater their frequency. The exact frequency of any mode is proportional to its distance from the centre of the diagram — that’s just a matter of Pythagoras’s theorem, and the relationship between frequency and wavelength. So the number of points between the two spherical shells counts the number of modes in the frequency range ΔF. For small values of ΔF, this is proportional to the surface area of the inner sphere, which is proportional to F2.
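The F² scaling can be checked by brute force. This sketch (mine, not the article's) counts the integer triples of half-wavelength numbers whose distance from the origin, which is proportional to frequency, falls in a thin shell; doubling the shell's radius should roughly quadruple the count:

```python
def modes_in_shell(R, dR=1.0):
    """Count modes (nx, ny, nz), each a positive whole number of
    half-wavelengths, whose distance sqrt(nx^2 + ny^2 + nz^2) from the
    origin lies in the shell [R, R + dR)."""
    count = 0
    limit = int(R + dR) + 1
    for nx in range(1, limit):
        for ny in range(1, limit):
            for nz in range(1, limit):
                d2 = nx*nx + ny*ny + nz*nz
                if R*R <= d2 < (R + dR)**2:
                    count += 1
    return count

n1 = modes_in_shell(20.0)
n2 = modes_in_shell(40.0)
print(n1, n2, n2 / n1)  # the ratio is close to 4: counts grow like F^2
```

The counts fluctuate a little from shell to shell, because the lattice of modes is discrete, but the quadratic trend is unmistakable.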
Because the walls of the cavity are assumed not to favour any particular frequency, every possible mode of the electromagnetic field should have, on average, an equal share of the total energy. The trouble is, the field has an infinite number of modes — at ever higher frequencies, you just keep finding more of them. If the energy from the furnace really was free to spread itself between them, giving them all an equal share, that would be a never ending process, like gas escaping into an infinite vacuum. The average frequency of the radiation in the cavity would wander off towards the ultraviolet and beyond, never stabilising at any fixed spectrum.
The reality is nothing like this, as Figure 3 shows. The observed spectrum reaches a peak at a certain frequency, then tapers off. Clearly, something prevents the energy of the field from being equally distributed amongst all possible modes. But what?
The analysis we’ve given so far assumes that energy can be spread as thinly as you like; as more and more modes share the energy of the field, each one ends up, individually, with a smaller amount. But what if energy couldn’t be endlessly subdivided like this? What if you eventually reached a minimum amount, a “particle” of energy, as indivisible as some particles of matter presumably are? Instead of taking on any value whatsoever, energy would only be found in exact multiples of this amount.
In 1900, Max Planck proposed that this was the case, and called the minimum amount a quantum. Though it might have been simplest to decree a fixed amount of energy as the size of one quantum, like the fixed mass of an electron, that wouldn’t have solved the cavity radiation problem: with an infinite number of modes available, the finite number of quanta would still have been free to “escape” to ever higher frequencies. The only way to prevent this was to propose that higher frequency modes required a greater minimum energy than lower frequency modes, raising a series of ever higher hurdles to counteract the tendency for the energy to spread. Planck found that making the energy of one quantum proportional to the frequency of the electromagnetic wave, as in Equation (1), would yield a spectrum precisely in agreement with observation, if the constant of proportionality was chosen correctly. This value, now known as Planck’s constant, is referred to by the letter h, and has a value of 6.626 × 10⁻³⁴ Joules per Hz.
E = h F (1)
You might be wondering how Equation (1) dictates the nice tapered curve in Figure 3. What’s to stop all the energy in the furnace from going into a single, super-high-frequency quantum, making the spectrum an isolated peak way off to the right of the graph? The same thing that stops all the energy in the Earth’s atmosphere from ending up concentrated in a couple of atoms: it’s just not very likely. Of all the possible ways a certain total amount of energy can be distributed between billions of possible modes of cavity radiation, the vast majority look like the curve in Figure 3.
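Planck's rule is enough to reproduce the shape of Figure 3. Writing the frequency in the dimensionless form x = hF/kT (where k is Boltzmann's constant and T the temperature — standard notation, not introduced in the article), the number of modes grows as x² while the average energy each mode manages to hold falls off as x/(eˣ–1), giving a spectrum proportional to x³/(eˣ–1). This sketch (an illustration using that standard result) locates the peak:

```python
import math

def planck_shape(x):
    """Relative cavity-radiation spectrum at dimensionless frequency
    x = hF/kT: mode count ~ x^2, times average energy per mode."""
    if x == 0.0:
        return 0.0
    return x**3 / math.expm1(x)   # expm1(x) = e^x - 1, accurate for small x

# Scan for the peak: it sits near x = 2.82, so the peak frequency grows
# in direct proportion to temperature
xs = [i * 0.001 for i in range(1, 20000)]
peak_x = max(xs, key=planck_shape)
print(peak_x)

# Beyond the peak the spectrum tapers off instead of growing like x^2
print(planck_shape(3.0) > planck_shape(10.0) > planck_shape(20.0))  # True
```

Below the peak, quanta are cheap and nearly every mode gets its classical share; above it, the price of a single quantum exceeds what a typical mode can afford, and the spectrum collapses.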
Over the first three decades of the twentieth century, many other experiments confirmed the quantisation of light, and led independently to the same value for Planck’s constant. One famous example is the photoelectric effect. When ultraviolet light is shone on a metal plate in a vacuum tube it blasts electrons off the surface of the metal. The energy of the individual electrons released this way (as opposed to the total energy they possess en masse) turns out to be completely independent of the intensity of the light shone on the plate, and can only be increased by using light of a greater frequency. This makes sense if the electrons are absorbing individual quanta, rather than gaining energy from the electromagnetic field as a whole. More intense light of a given frequency contains more quanta of the same energy, and can blast more electrons off the plate — but only raising the frequency of the light, and hence the energy of the quanta, can increase the energy of each individual electron.
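The arithmetic behind the photoelectric effect is simple. In the sketch below (illustrative only; the escape cost is a made-up value of the kind typical for metals, and is not taken from the article), each electron absorbs one quantum, pays a fixed energy toll to escape the metal's surface, and keeps the remainder — so intensity never enters into the per-electron energy:

```python
h = 6.626e-34   # Planck's constant, Joules per Hz

def electron_energy(F, escape_cost):
    """Energy left to one electron after absorbing a single quantum of
    frequency F (Hz) and paying the fixed cost, in Joules, of escaping
    the metal surface. escape_cost is a hypothetical illustrative value."""
    return max(h * F - escape_cost, 0.0)

W = 3.2e-19  # illustrative escape cost for some metal, Joules

# Doubling the intensity doubles the number of quanta, not their energy,
# so the energy of each released electron depends only on frequency:
print(electron_energy(1.0e15, W))  # ultraviolet: each electron keeps ~3.4e-19 J
print(electron_energy(2.0e15, W))  # higher frequency: more energy per electron
print(electron_energy(4.0e14, W))  # below threshold: no electrons released at all
```

The threshold behaviour in the last line is exactly what's observed: below a certain frequency, no amount of intensity will release a single electron.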
Quanta of light, which came to be known as photons, were shown again and again to behave like localised, indivisible particles. But there was no denying the fact that light also behaved like a wave, exhibiting interference effects. Neither aspect could be ignored, but it was not at all clear how to synthesise the two into a coherent new description of electromagnetism.
In parallel with these revelations about light, physicists were grappling with the problem of the structure of atoms. Electrons had been discovered in 1897, and in 1911 Ernest Rutherford had found strong experimental evidence for the theory, first proposed by Hantaro Nagaoka, that atoms consisted of electrons orbiting a positively charged nucleus. The puzzle here was that charged particles moving in a circle emit electromagnetic waves, so the electron should have radiated away all its energy and plunged into the nucleus. Not even Planck’s quantised photons could rule this out.
In 1913, Niels Bohr proposed that the energy of the electrons themselves was quantised, and the existence of a minimum allowed energy kept them from falling into the nucleus. Bohr came up with a formula for the energy levels of the single electron in a hydrogen atom, constructed in order to agree with the observed spectrum of light emitted and absorbed by hydrogen. This spectrum consisted of a discrete set of sharply defined frequencies, which could now be interpreted as the frequencies of photons whose energies matched the differences in energy between the allowed states of the electron. An electron could only move to a higher energy level by absorbing a photon that provided exactly the right amount of energy, and it could only drop back to a lower level by emitting a photon that carried the energy away again. This was by far the most successful model of atomic structure to date, but Bohr’s formula was even more mysterious than Planck’s. Why were only certain energy levels available to the electron?
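Bohr's formula gives the energy of the hydrogen electron's n-th allowed state as –13.6/n² electron volts (a standard result, quoted here rather than derived in the article). This sketch uses it to recover one of hydrogen's sharply defined spectral frequencies — the red line emitted when the electron drops from the third level to the second:

```python
def level_energy_eV(n):
    """Bohr energy of the hydrogen electron in its n-th allowed state,
    in electron volts."""
    return -13.6 / n**2

# Photon emitted when the electron drops from n = 3 to n = 2
photon_eV = level_energy_eV(3) - level_energy_eV(2)
print(photon_eV)  # about 1.89 eV

# Convert the photon's energy to a wavelength via L = hc/E;
# the product hc is about 1240 eV-nanometres
wavelength_nm = 1240.0 / photon_eV
print(wavelength_nm)  # about 656 nm: the red line of hydrogen's spectrum
```

Every line in hydrogen's spectrum can be generated this way, by taking the difference between some pair of allowed energy levels.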
The first hint at an answer came from the suggestion by Louis de Broglie in 1924 that matter, as well as radiation, might behave like both a wave and a particle. This was confirmed spectacularly a few years later, in experiments showing that electrons fired at a crystal were reflected back most often in certain directions: those in which a wave that scattered off the regularly spaced atoms of the crystal would undergo constructive interference. Since then, interference effects have been demonstrated for all kinds of particles, including entire atoms.
To examine de Broglie’s idea more closely, we need to ask what the wavelength and frequency of the “matter wave” associated with a particle should be. One reasonable starting point is the relationship that worked so successfully for Planck with photons: E=h F. Since F is the frequency of the wave (the number of oscillations per second), the period of the wave, the time each oscillation takes, is:
T = 1/F
= h/E (2)
Since the wave for a photon is moving forward through space at the speed of light, c, each cycle is spread out over one wavelength:
L = c T
= c h/E
Throughout these articles we’ve been using units where c=1, but it’s worth leaving the c in here for a moment, and stating the fact that the momentum, p, of a photon with energy E is always p=E/c. (This must be true in order for the 4-momentum of the photon to be a null vector, a spacetime vector with an overall length of zero, as discussed in the previous article. The relationship is obvious when c=1, but it holds regardless of the units used.) So the wavelength of light is related to each photon’s momentum by:
L = h/p (3)
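Equation (3) explains why the crystal experiments worked. The sketch below (mine; the electron speed is just a representative value, not a figure from the article) computes the de Broglie wavelength of a slow electron, which comes out comparable to the spacing between atoms in a crystal:

```python
h = 6.626e-34            # Planck's constant, Joules per Hz
m_electron = 9.109e-31   # electron mass, kg

def de_broglie_wavelength(m, v):
    """Wavelength L = h/p for a slow (non-relativistic) particle,
    whose momentum is p = m v."""
    return h / (m * v)

# A representative electron speed, far below the speed of light
v = 2.2e6  # m/s
print(de_broglie_wavelength(m_electron, v))
# about 3.3e-10 m -- comparable to the spacing of atoms in a crystal,
# which is why a crystal acts as a diffraction grating for electrons
```

For everyday objects the same formula gives wavelengths so absurdly small that no interference effects could ever be observed, which is why matter waves went unnoticed for so long.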
Equations (2) and (3) are the formulas de Broglie proposed for the period and wavelength of matter waves. Let’s see what such a wave might look like on a spacetime diagram.
Figure 4 shows a travelling sine wave with period T and wavelength L. We don’t actually know that a matter wave will ever take the form of a sine wave, but we might as well start with a simple possibility like this and see where it leads us. The third axis on the diagram represents the “strength” of the wave, or amplitude, traditionally labelled ψ (the Greek letter psi). Exactly what ψ means, physically, is something we’ve yet to determine. The equation for ψ in terms of x and t, the wave function, is:

ψ(x,t) = A sin(2π(x/L – t/T)) (4)

where A is the maximum height of the wave.