

Chapter 5: The Synthesis of the Elements in Big Bang and Stars

Historical Perspective

...it dawned on me that if the impulse of a theory is very strong, you need to have what I later call a 'don't worry hypothesis'. In order to do this you have to have a hypothesis that will allow you not of course to ignore facts, but to deal with what seem to be difficulties later, rather than rejecting the hypothesis out of hand as being impossible. ...you just say, 'Well look, don't worry about this. It will be resolved later. It's done with no hands! How else would you expect it to be done.'

--Sidney Brenner [My Life in Science].

Efforts to explain the origin of the elements prior to the 1950's were strongly influenced by the notion of ``cosmical abundances.'' Astronomers' views of the origin of the elements have swung like a pendulum.

Early in the 20$^{th}$ century, astronomers thought the stars had very different chemical compositions. They thought, for example, that the sun was made almost entirely of iron, while Sirius, the brightest star in the sky, was mostly hydrogen. These ideas came directly from spectroscopy and the identification of absorption lines in stellar spectra. Examples are shown in figure 5.1. The Sirius-like star (left) has strong hydrogen lines, and the lines of other elements are very weak. The solar-like star, on the other hand, has a spectrum dominated by iron lines.

Figure 5.1: Spectra of Two Stars. The star on the left is like Sirius, and has only a few strong lines, all of hydrogen. Iron lines dominate the other spectrum, which resembles the sun. The closer one looks, the more iron lines one finds. It is no wonder early astronomers thought the sun might be made mostly of glowing iron.

In the modern jargon of computing, people speak of WYSIWYG (pronounced wiz-ee-wig), for ``what-you-see-is-what-you-get.'' In nature, your eyes often fool you, and what you see distorts what is true. This turned out to be the case with stellar abundances. The critical work was done by the Indian astronomer M. N. Saha (1893-1956), whose important work is not widely known outside astronomy. Every astronomer knows of him, however, because one of the basic equations of astrophysics bears his name. The Saha equation was published in 1920.

Saha showed how atoms can become ionized in a gas as the temperature increases. We can show this for iron (Fe), by a chemical equation:


\begin{displaymath}
{\rm Fe \Longleftrightarrow Fe^+ + e^-}
\end{displaymath}

This shows that an iron atom can lose an electron and become an iron ion--it becomes ionized. A free electron ($\rm e^-$) is released. The double-headed arrow shows that the ``reaction'' can also go from right to left, and an electron can combine with an ion to form a neutral iron atom. In stars, there is an ionization equilibrium, and the number of atoms of iron that are ionized per second is the same as the number that recombine.
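For reference, the Saha equation itself may be quoted in its standard textbook form (we state it here without derivation):

\begin{displaymath}
\frac{n_{\rm Fe^+}\,n_{\rm e}}{n_{\rm Fe}} = \frac{2g_{\rm Fe^+}}{g_{\rm Fe}}
\left(\frac{2\pi m_{\rm e}kT}{h^2}\right)^{3/2} e^{-\chi/kT},
\end{displaymath}

where the $n$'s are numbers of particles per cubic centimeter, the $g$'s are statistical weights of the atom and ion, $\chi$ is the ionization energy, $k$ is Boltzmann's constant, and $h$ is Planck's constant. The exponential factor makes the ionized fraction rise very steeply with temperature.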

In the sun, the temperature is so high that almost all of the molecules are dissociated. What we see is a glowing gas of free atoms, ions, and electrons. There is an equilibrium between Fe and Fe$^+$, but with enough neutral iron (Fe) to strongly absorb its characteristic spectrum.

If the temperature gets hot enough, the equilibrium shifts to the point where there are practically no neutrals left, and iron is even beginning to lose a second electron ( $\rm Fe^+ \Longleftrightarrow Fe^{++} + e^-$). This is the situation that has occurred in the atmosphere of the bright star Sirius.

Now it turns out that an ion has a completely different spectrum from that of the corresponding neutral atom. The strongest lines from Fe$^+$ are in the ultraviolet, and not in the region of the spectrum illustrated in figure 5.1. So the neutral iron lines are almost entirely gone from the spectrum of Sirius. If you allow for ionization, there is about as much iron in the atmosphere of Sirius as in the sun. To get this equality, you have to sum the neutral atoms of iron and the ions. Saha's work explained how this all happens.

The hydrogen lines are present in the solar spectrum, but are relatively weak. They dominate the spectrum of Sirius. This time the explanation is not one of ionization but merely excitation. A spectral line is formed when the electrons in an atom jump from one level of energy to another. These energy levels are characteristic of each element, ion, and molecule.

In the case of hydrogen all of the jumps out of its lowest level can only be made by relatively energetic photons--ones that occur in the ultraviolet. The earth's atmosphere absorbs this part of the spectrum and makes it unavailable to earthbound astronomers. Lines from the ground level of hydrogen were only seen by astronomers after the beginning of the space age.

Figure 5.2: Energy Levels and Spectra of Hydrogen. The arrows indicate quantum jumps made in the hydrogen atom as a result of absorbing photons. All of the jumps out of the ground level require photons in the ultraviolet. The jumps out of the $n = 2$ level cause the absorption lines that can be seen in figure 5.1. Note how high the $n = 2$ level is in energy.

The surface temperature of the sun is about 5800 K, while that of Sirius is about 10,000 K. Only at the higher temperature of the atmosphere of Sirius are enough hydrogen atoms in the $n = 2$ level to cause the strong absorption lines seen in the spectrum.
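A rough Boltzmann estimate (our own back-of-the-envelope numbers, ignoring ionization) shows why the temperature matters so much. The $n = 2$ level lies 10.2 eV above the ground level, so the fraction of hydrogen atoms excited to it is about

\begin{displaymath}
\frac{N_2}{N_1} = \frac{g_2}{g_1}\,e^{-\Delta E/kT}, \qquad \Delta E = 10.2\;{\rm eV}.
\end{displaymath}

With $g_2/g_1 = 4$, this ratio is about $5\times 10^{-9}$ at 5800 K, but about $3\times 10^{-5}$ at 10,000 K--several thousand times larger, which is roughly the factor needed to explain the strong Balmer lines of Sirius.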

Following Saha's work in 1920, astronomers analyzed the atmospheres of many stars, and reached the conclusion that all of them had just about the same abundances. There were a few stars whose atmospheres were clearly not the same in composition as the majority. These were set aside, perhaps as ``exceptions that proved the rule,'' and astronomers and geochemists began to speak of universal, or ``cosmical'' abundances.


Cosmical Nucleosynthesis: History

If the abundances of the elements were more or less universal, it was entirely reasonable to imagine that they had all been created in one major event, perhaps at the beginning of the history of the universe. It was well known by the 1930's that the most distant galaxies are moving away from us. It is a small mental jump from there to imagine that at some time in the past, matter was highly compacted in some primeval fireball. The Belgian priest Georges Lemaître (1894-1966) wrote a book about this called The Primeval Atom.

Some of the best minds in physics have tried their hands at a theory that would account for the origin of the chemical elements by some cosmic event. The most colorful of these was George Gamow (1904-1968). Gamow was born in Russia, but came to the US in the 1930's. He was one of the glorious ones of physics, and was at Göttingen in 1928 along with the founders of quantum mechanics and Maria Mayer.

Gamow's delight at doing physics is reflected in many popular books. Three that I enjoyed as an undergraduate in the 1950's were Birth and Death of the Sun (1940), Biography of the Earth (1946), and One, Two, Three, ... Infinity (1948). The illustrations in these books were informative, and often very funny. While many fine popularizations of science have been written, no one has surpassed Gamow for insight and pure fun.

Gamow tried very hard to make the chemical elements in the early universe. However, there was a formidable difficulty that neither he nor anyone else has been able to get around. There is no stable nucleus at either mass 5 or mass 8. This means that any process that starts with protons or neutrons must somehow jump over those gaps. It is not so hard to imagine that the very light nuclei might be built up. Indeed, we now think most of the helium and deuterium in the universe was synthesized shortly after the big bang. But stars can do something the big bang can't, and that is make carbon out of helium.

We shall see later that important nucleosynthesis does take place in the very early history of the universe. Mostly helium and a few light nuclides emerge from this epoch. To see how heavier nuclei are made, we must discuss what goes on in stellar interiors.


The Source of Stellar Energy

Before Einstein's $E = mc^2$, the source of the sun's energy was a great puzzle. In the late 1800's it was thought that the sun's energy came from gravitational contraction. As the matter of the sun slipped deeper into its own gravitational potential well, its atoms would get hotter and hotter, and could radiate this energy away.

The problem with gravitational energy is that it is possible to calculate how much of it has been spent in order for the sun to reach its present size. We can also calculate how long it would take for the sun to radiate away that much energy, given its current output, and that time is relatively short--only about 10 million years. On the other hand, there is geological evidence that the earth is much older than that. Early in the present century it became possible to date rocks by the method of radioactive decay, and those dates showed that some rocks were as much as a billion ($10^9$) years old. So an age for the sun of $10^7$ or even $10^8$ years just wouldn't do. Astronomers knew there had to be some additional energy source in the sun, and many speculated, correctly, that it was the conversion of mass according to Einstein's formula.
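That 10-million-year figure is easy to recover, at least to order of magnitude (a sketch, not a careful calculation). The gravitational energy released in contracting to the present radius is roughly $GM_\odot^2/R_\odot$, and dividing by the sun's luminosity gives

\begin{displaymath}
t \sim \frac{GM_\odot^2}{R_\odot L_\odot} \approx
\frac{(6.7\times 10^{-11})\,(2.0\times 10^{30})^2}
{(7.0\times 10^{8})\,(3.8\times 10^{26})}\;{\rm s}
\approx 10^{15}\;{\rm s} \approx 3\times 10^{7}\;{\rm yr}
\end{displaymath}

in SI units. A more careful accounting (only about half the released energy is available to be radiated) brings this down toward the 10 million years quoted above. This is what astronomers call the Kelvin-Helmholtz timescale.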

Just how this conversion took place, however, wasn't known until the late 1930's. Hans Bethe showed that hydrogen could be converted to helium by a cyclical process involving nuclei of light elements. It was for this work that Bethe won the Nobel prize for physics in 1967.

We will write the nuclear reactions using a chemist's notation. The mass number is written as a pre-superscript before the symbol for the element. Thus, $^{12}$C means a nucleus of carbon with mass number 12, and $^1$H is the nucleus of a hydrogen atom, a proton. We use $\gamma$ for a gamma ray, and $\nu$ for a neutrino. The symbol $e^+$ is used for a positron. The celebrated Bethe carbon-nitrogen cycle went as follows:

\begin{eqnarray*}
\rm ^{12}C + ^1H &\longrightarrow &\rm ^{13}N + \gamma, \\
\rm ^{13}N &\longrightarrow &\rm ^{13}C + e^+ + \nu,\\
\rm ^{13}C + ^1H &\longrightarrow &\rm ^{14}N + \gamma,\\
\rm ^{14}N + ^1H &\longrightarrow &\rm ^{15}O + \gamma,\\
\rm ^{15}O &\longrightarrow &\rm ^{15}N + e^+ + \nu,\\
\rm ^{15}N + ^1H &\longrightarrow &\rm ^{12}C + ^4He.
\end{eqnarray*}



A $^{12}$C nucleus collides with a proton, creating an excited $^{13}$N nucleus, which jumps to the ground level by emitting a $\gamma$-ray. This $^{13}$N is unstable. It has too many protons. Consequently, the nucleus relaxes by emitting a positive electron and a neutrino in a form of $\beta $-decay (Section 4.2). The $^{13}$C formed in this way absorbs another proton to give $^{14}$N, and so on. In the last step of the cycle, a helium nucleus is released. The net result of Bethe's cycle is to make one helium nucleus out of four protons; the $^{12}$C acts only as a catalyst, and is returned at the end of the cycle. The energy corresponding to the mass difference between four protons and a helium nucleus can power a star.
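It is worth seeing how large this energy difference is. Using standard atomic masses ($^1$H = 1.00783 u, $^4$He = 4.00260 u), the mass lost in one pass through the cycle is

\begin{displaymath}
\Delta m = 4(1.00783) - 4.00260 = 0.0287\;{\rm u},
\end{displaymath}

about 0.7 percent of the original mass. At 931.5 MeV per atomic mass unit, this is roughly 26.7 MeV per helium nucleus formed (a small part of which is carried away by the neutrinos and lost from the star).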

Deep in the solar interior, where these reactions take place, most electrons have been stripped by the high pressures and temperatures, so the nuclei collide like billiard balls. However, the nuclei are all positively charged, and repel one another. That repulsion must be overcome by having the nuclei move fast enough, and for this the temperature must be high.

Interestingly, the temperature at the center of the sun is not quite high enough for it to derive most of its energy from the Bethe cycle. Instead, it comes from the simple combination of protons in a fusion reaction. Essentially, the reactions are these:

\begin{eqnarray*}
\rm ^1H + ^1H &\longrightarrow &\rm ^2D + e^+ + \nu,\\
\rm ^2D + ^1H &\longrightarrow &\rm ^3He + \gamma,\\
\rm ^3He + ^3He &\longrightarrow &\rm ^4He + 2\,^1H.
\end{eqnarray*}



Two protons combine to form a deuterium (or heavy hydrogen) nucleus. The deuteron adds a proton to become an isotope of helium, with two protons but only one neutron. The $^3$He nucleus is formed in an excited state, and it drops to the ground level with the emission of a $\gamma$-ray. Finally, two of the $^3$He nuclei combine to give a $^4$He nucleus and two protons. This scheme is called the $pp$-chain.

The net result of this process is essentially the same as that of the Bethe cycle: four hydrogen nuclei are converted to one helium nucleus, with the production of energy (and the emission of neutrinos). The positrons that are emitted are quickly annihilated when they come in contact with ordinary electrons, and the resulting energy supplies heat to the solar interior. The neutrinos, however, escape from the sun completely, carrying their energy with them. These neutrinos have been detected at the earth, though there have been fewer detections than were expected.

Today, few astronomers doubt that the major source of stellar energy is the conversion of hydrogen to helium. Most stars are busily doing this, and while this process is taking place, the stars lie along the main sequence of figure 3-4.


Gamow's Notion of Tunneling

In an earlier section we mentioned that the wave function for a nucleon does not have a zero point, or node, exactly at the edge of a well such as that shown in figure 4-7. If the energy of the nucleon corresponds to a value near the top of the well, something very interesting happens. The little tail of the wave function can extend beyond the edge of the well with a finite amplitude.

Figure 5.3: Escape or Capture of a Nucleon by Tunneling. A quantum particle can penetrate the potential ``barrier'' in a way that is forbidden to a classical particle. The plot on the left corresponds to an energy that is not one of the allowed energies of a nucleon. On the right, the energy of the incoming particle is the same as that of an allowed nuclear level. In this case the probability of capture is greatly enhanced.

Conservation of energy would forbid a classical particle from being inside the ``forbidden'' region of the potential well, which we have shown in the figure by shading. For example, if you have a ball rolling in a bowl, the ball can't penetrate the walls of the bowl.

Quantum particles are fuzzy, though, and may violate certain classical laws. In particular, the conservation of energy may be violated for a brief time. If the particle is close to the top of the well, this time may nevertheless be enough for the particle to sneak through the ``barrier'' that would stop a classical particle. George Gamow clarified this phenomenon, which became known as tunneling, in the early days of nuclear physics.
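The ``brief time'' is governed by the energy-time uncertainty relation of quantum mechanics,

\begin{displaymath}
\Delta E\,\Delta t \sim \hbar.
\end{displaymath}

Loosely speaking, an energy deficit $\Delta E$ may be ``borrowed'' for a time of order $\hbar/\Delta E$. The smaller the deficit--that is, the closer the particle's energy is to the top of the barrier--the longer the loan, and the better the chance of sneaking through.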

Just as it is possible for a nucleon to leak out of a nuclear potential well by tunneling, it is also possible for one to sneak in. Indeed, all of the reactions involving charged particles that we have discussed in the previous section take place primarily by this process. There is an interesting story that goes with this.

In the 1920's quantum mechanics was in its infancy, and Gamow's notion of tunneling lay in the future. Astronomers were aware, however, that stellar lifetimes were sufficiently long that some powerful, but still unknown, source of energy was required to power them. The British astronomer Sir Arthur Eddington (1882-1944), a pioneer in the theory of the structure of stars, speculated on this energy source in one of his popular books, Stars and Atoms. He argued that the most likely energy source was the conversion of hydrogen into helium.

But Eddington's own work on the structure of stars gave reasonable estimates for their central temperatures. From these temperatures, one may calculate the energy of the protons--that energy is directly proportional to the temperature. One could ask whether the energies in stars were sufficient to enable protons to get close enough to other nuclei that reactions such as those in the above section could take place. The answer given by experimental physicists in the 1920's was no.

It is straightforward to see the argument if we think of the potential well, and the barrier that it presents to incoming particles. To get into the well, to have enough energy for the nuclear forces to come into play, a classical particle would have to get to the top of the well. Stellar temperatures were not hot enough for that.
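A rough comparison of the numbers makes the critics' point (our own illustrative estimate). For two protons to come within reach of the nuclear force, they must approach to within a few times $10^{-15}$ m (a few fm), where the electrostatic potential energy

\begin{displaymath}
E = \frac{e^2}{4\pi\epsilon_0 r} \approx \frac{1.44\;{\rm MeV\,fm}}{r}
\end{displaymath}

amounts to roughly an MeV. But the mean thermal energy at the sun's central temperature of about $1.5\times 10^{7}$ K is only $kT \approx 1.3$ keV--about a thousand times too small for a classical proton to reach the top of the barrier.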

Eddington responded to this argument with a marvelous retort. We may assume he knew, with the insight of his genius, that hydrogen had to be converted to helium. It merely remained to find out how it was done. His critics, he said, could go and find a hotter place! Eddington made use of Brenner's ``Don't Worry Hypothesis.''

Gamow's theory of tunneling resolved the dispute, and made it possible for Bethe to clarify the reactions that provide the energy of most stars.

There is one more wrinkle on this tunneling that we need to discuss. The energy of an incoming proton might correspond to one of the allowed energies in the nucleus. When this happens, the probability of capture by tunneling is greatly enhanced. Physicists speak of such capture as being resonant, in tune, so to speak, with the natural vibrations of the nucleon inside the nucleus. We shall encounter a reaction of this kind in the next section.


Helium Burns to Carbon

We may recall that Gamow had great difficulty creating all of the chemical elements at the time of the birth of the universe. A major problem was that there are no known stable isotopes with mass five or eight. Any scheme for the origin of the chemical elements would have to bridge this gap somehow.

When the hydrogen supply is exhausted in the centers of stars, the remaining material is primarily helium. There will be a small fraction of heavier elements too, which came from the gas that originally made the star. These heavier elements are mostly unaffected by hydrogen burning, because of the charge on their nuclei.

In order for two protons to come together they must have enough energy to overcome the repulsion of the two positive charges. If they can get close enough, then they may be subject to the nuclear forces that allow the reaction scheme called the pp-chain to occur. The hotter the gas, the more energetic the particles in it. Now it takes more energy to get a proton close to a carbon or nitrogen nucleus than it does to get it close to another proton. Why? Simply because there is more electrical repulsion in the former cases. This is why the Bethe cycle isn't the dominant source of energy in the sun. In stars with central temperatures that are hotter than the sun's, the CNO cycle dominates.

After the hydrogen in the center of stars is used up, there is a central, spherical core of mostly helium. Hydrogen continues to burn at the surface of this core, in what astronomers call a ``shell source.'' This shell works its way outward. What happens next depends on the mass of the star, and for our purposes, we need not delve into the details. Eventually, the temperature of the cores will rise to the point where helium nuclei may fuse.

When two helium nuclei fuse, the nucleus that is made is $^8$Be (beryllium-8). This is right in one of the famous gaps that gave Gamow so much trouble. The half-life of $^8$Be is the incredibly short time of about $7 \times 10^{-17}$ seconds. It would seem an impossible gap to bridge. Two factors allow stars to get beyond the gap. First, the density of particles is very high, so that even though the $^8$Be nuclei split apart very quickly, they are made at a rapid rate. This means that at any given time there is a tiny number of $^8$Be nuclei around to be hit by another helium nucleus. The product of a $^8$Be nucleus and an $\alpha$-particle (helium) is the stable $^{12}$C nucleus. The second factor, a resonance in the $^{12}$C nucleus, is described below. Astronomers call this sequence the triple-$\alpha$ process.
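Schematically, the triple-$\alpha$ process may be written

\begin{eqnarray*}
\rm ^4He + ^4He &\longleftrightarrow &\rm ^8Be,\\
\rm ^8Be + ^4He &\longrightarrow &\rm ^{12}C + \gamma.
\end{eqnarray*}

The double-headed arrow emphasizes that the $^8$Be almost always splits apart again; only occasionally is it struck by a third $\alpha$ before it does.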

When the rates of the reaction $\rm ^8Be + ^4He \longrightarrow ^{12}C$ were first calculated by physicists, they were too slow. Fred Hoyle, of $\rm B^2FH$ fame, investigated what would happen if another $\alpha$-particle were added to the freshly made $^{12}$C, and found to his surprise that, under the conditions visualized in the stars, all of the $^{12}$C would be quickly converted to $^{16}$O. This left the astronomers with no way to account for the ubiquitous $^{12}$C of the universe--no organic chemistry, no life!

In one of those classic flashes of insight, Hoyle realized that the rate at which $^{12}$C is made had to be speeded up, and he proposed that the structure of the nucleus had to be changed. He therefore postulated the existence of an energy level in the $^{12}$C nucleus that would make the triple-$\alpha$ reaction go much more quickly. Hoyle predicted that the triple-$\alpha$ reaction would be resonant as discussed in the previous section.

With a resonant reaction, there would then be too much $^{12}$C made for all of it to be destroyed by the addition of another $\alpha$. William Fowler, the nuclear astrophysicist and member of the $\rm B^2FH$ team, set out to see if Hoyle's prediction was correct. He found the predicted level just where Hoyle said it had to be.

It remains to explain why this process can't happen in the Big Bang. Certainly it was hot enough--take your pick of times, and the temperature can be as hot as you need. It was also dense enough. Again depending on how old the universe was, it could be as dense as you need.

We think we know the conditions in the Big Bang reasonably well. If that seems a little smug, then allow us to say that we know the conditions in our models quite well. These conditions tell us that temperature and the density were not right at the same time. When the temperature was about right for the triple-$\alpha$ process, the densities weren't high enough. A tiny bit of $^{12}$C was made, but it just wasn't enough to account for what we have today.


Alpha Nuclides

Much of the carbon that is produced in the triple-$\alpha$ process burns to $^{16}$O, which in principle can be converted into $^{20}$Ne, $^{24}$Mg, $^{28}$Si, and $^{32}$S. Indeed, if the temperatures were high enough the process could go on to include $^{36}$Ar and $^{40}$Ca. These isotopes are all simple multiples of $\alpha$-particles, so the number of protons and neutrons is equal. As more protons get into the nucleus, the electrical repulsion becomes more and more effective relative to the strong nuclear force.

We know from the nuclide chart that the way the nucleus accommodates a lot of charge is to mix in a few more neutrons than protons. This is why the plot of $Z$ vs. $N$ (figure 4-8) bends down from the original 45-degree slope. But all of the $\alpha$-elements we have discussed have equal numbers of protons and neutrons. How big can the nuclei get with $Z = N$ before extra neutrons have to be added? The answer is that $^{40}$Ca is the last abundant ``$Z = N$''-nuclide. This isotope is said to be doubly magic because both $Z$ and $N$ have the magic number 20.

There is another doubly magic isotope of calcium, $^{48}$Ca. Calcium has six stable isotopes, stretching from the doubly magic $^{40}$Ca to the heavy, doubly magic $^{48}$Ca. No element lighter than Ca has this many stable isotopes. It is not until one reaches the rather heavy nuclei of the element selenium (Z = 34) that one finds six or more stable isotopes.

In the region of calcium, the even-$Z$ elements have three or four stable isotopes. At calcium, the valley of beta stability runs between the neutron-magic numbers 20 and 28. Probably neither $^{40}$Ca nor $^{48}$Ca would be stable if it were not for the extra binding these nuclei get by having magic numbers of neutrons (as well as protons).

If an $\alpha$ is added to $^{40}$Ca, the result is $^{44}$Ti, which has a half-life of only about 70 years. It decays by emitting $\beta ^+$ particles through $^{44}$Sc to the stable $^{44}$Ca. At this point another $\alpha$ may be added to reach the stable $^{48}$Ti. Now $^{48}$Ti is by far the most abundant of the titanium isotopes, so it is tempting to conclude that it was reached in this way, by successive additions of $\alpha$'s. What astronomers think actually happens is a bit more complicated.

The abundance curves show a deep trough at $^{45}$Sc, so it is reasonable to assume that the nature of the synthesis of the elements changes from the classical ($\rm B^2FH$) $\alpha$-process that we have been describing.


Stellar Evolution

After stars exhaust hydrogen in their cores, they leave the main sequence (figure 3-4), and move to the red giant region. Hydrogen continues to burn, but in a shell. As the size of the shell grows, the mass within it increases. For a time, the entire core within the shell is maintained at the temperature of the hydrogen burning, and astronomers speak of an isothermal core. As the shell works its way out, the energy coming from the shell is not enough to power the star. The core then contracts, heats up, and helium burning starts by the triple-$\alpha$ process.

The helium burns mostly to $^{12}$C and $^{16}$O, but at the temperatures where these processes occur, not much $^{20}$Ne can be formed. Eventually, there is a mostly carbon-oxygen core, surrounded this time by a helium-burning shell. What happens next depends critically on the mass of the star. For stars that have about the same mass as the sun or less, the core contracts, but never gets hot enough for the next nuclear burning stage, which is called carbon burning.

In carbon burning, the $^{12}$C nuclei react with one another. Initially, a compound nucleus is formed in the merger. This nucleus would have to have a mass number of 24, and 12 protons. The nucleus is therefore $^{24}$Mg, but it is not in its ground state. The $^{24}$Mg has a lot of excess energy as a result of the merging of the two $^{12}$C's. There are several ways the highly excited $^{24}$Mg nucleus can get rid of its energy. The most likely way is for it to emit an $\alpha$ and become $^{20}$Ne, a nucleus that might have been built up from $^{16}$O during helium burning if the temperatures got hot enough.

While carbon is burning, it is also possible to have reactions involving $^{12}$C plus $^{16}$O as well as $^{16}$O plus $^{16}$O. The latter is sometimes called oxygen burning, and takes place at temperatures a little higher than carbon burning, simply because of the higher charge on the oxygen nuclei. The major product of oxygen burning is not $^{32}$S (sulfur-32), although an excited, compound $^{32}$S nucleus is formed. The major product of oxygen burning is $^{28}$Si.

By the time oxygen burning is possible, the temperature has risen to such high values that some of the photons in the hot gas have high enough energies to cause nuclear reactions. Take a look at some of the reactions listed for hydrogen burning (Section 5.3). In a few cases you can see that after a proton has been absorbed, the resulting nucleus will emit a gamma ray--an energetic photon.

In the late stages of stellar evolution that we are now considering, the inverse of this process can take place. A nucleus may absorb a gamma ray, and emit a particle. If stars are massive enough, $\gamma$-rays can react with the byproducts of oxygen burning to release both protons and $\alpha$-particles. These, in turn, can react with ambient nuclei, building successively heavier nuclei all the way to those in the neighborhood of the iron peak. Initially, the doubly magic $^{56}$Ni is produced rather than $^{56}$Fe. But the former is unstable, and decays radioactively. First $^{56}$Co is formed. This $^{56}$Co is formed in an excited state which decays by emitting $\gamma$-rays.

The energy levels in the $^{56}$Co nucleus are well known, so that the energies (or alternately, the wavelengths) of the emitted $\gamma$-rays can be recognized. These wavelengths have been sought--and found--in the spectra of exploding stars.

As early as the time of $\rm B^2FH$ in the 1950's it was realized that the synthesis of nuclides near the iron peak would require extreme conditions of temperature and density. These conditions were most likely to be found in the violent stellar explosions known as supernovae.

Stars with sufficient mass--perhaps more than 8 times the mass of the sun--can become supernovae at or near the end of their lifetimes. The stars explode with massive outpourings of energy that sometimes allow them to outshine an entire galaxy. Astronomers have studied such phenomena for centuries, but the significance of these objects for nucleosynthesis only became clear at the time of $\rm B^2FH$.

Modern calculations of these late stellar evolution stages treat the hydrodynamics of the explosion as well as all relevant nuclear reactions. The abundances that emerge are rather closely fit by an approximate technique used by Fowler and his collaborator David Bodansky in the late 1960's which they called quasi-equilibrium. Later calculations of these processes were sometimes called explosive nucleosynthesis because they had to occur in supernovae.

The quasi-equilibrium calculations showed two outcomes from the rapid nuclear reactions leading up to the iron peak. On the one hand, nearly all of the material could be processed into nuclei with mass numbers near 56. This would happen at high temperatures and densities. The quasi-equilibrium calculations were made without a detailed stellar model. The workers merely asked what abundances would result from nuclear processes at specified (high) temperatures and densities.

At about the same time as Bodansky and Fowler's work on quasi-equilibrium, the American astronomer David Arnett made one of the early attempts to integrate calculations of a model star's late evolutionary phases. He included a detailed ``network'' of nuclear reactions that would occur in various zones of the star. Arnett's methods are the ones that are followed today. They are more fundamental than quasi-equilibrium because they are based on model stars and explicit reactions.

Arnett found that the star completely exploded, leaving no remnant. Most of the stellar mass was converted (eventually) to $^{56}$Fe.

This result became widely known. It was called the carbon detonation model, because the onset of the stellar explosion was caused by the violent ignition of carbon burning. At the time of the calculation, astronomers believed that exploding stars synthesized elements near the iron abundance peak (see figures 4-4, and 4-5). But the cosmic abundance of iron was much too low for every supernova to have processed so much material to the iron peak.

If the densities were somewhat lower during the quasi-equilibrium processing of matter, virtually no iron would be produced, and the net result would be a maximum abundance in the neighborhood of silicon. Bodansky and Fowler therefore found an optimum choice of temperatures and densities that led to abundances that matched those in the SAD remarkably well.

Later workers could not arbitrarily choose temperatures and densities to match the SAD, because they were restricted to the conditions that held in their exploding stellar models. They could, however, modify the initial masses of their model supernovae, and for some time during the 1970's the goal was to find a stellar mass that would most closely reproduce the region of the SAD from nuclides somewhat lighter than silicon through the iron peak.

Already in the 1970's astronomers began to explore an alternative to supernovae involving a single star. Today, the synthesis of elements from carbon and oxygen through the iron peak is thought to result from two distinct kinds of supernovae. On the one hand, a single star could evolve, burning hydrogen through helium, carbon, and oxygen all the way to iron, more or less as we have described. In the last stages, the star explodes as a supernova. A second possibility involves the evolution of stars in a binary system.

One member of a binary system might evolve through helium burning so that its core contained mostly carbon and oxygen. If this star were sufficiently massive, it could lose its overlying hydrogen- and helium-rich layers as a result of what are called stellar winds. The resulting core could evolve into the white dwarf region, until its companion swells toward the red giant phase due to its own evolution. In such a binary system, mass can then be transferred to the white dwarf.

White dwarfs have a property that was first explained by the Indian-American astronomer S. Chandrasekhar (1910-1995). If their mass exceeds a limit slightly larger than a solar mass, pressure is incapable of supporting them, and they must collapse. This limit is now known by astronomers as the Chandrasekhar limit, and for this work, Chandrasekhar was awarded a Nobel Prize.

If enough mass is transferred in a binary system to a carbon-oxygen white dwarf, it may exceed the Chandrasekhar limit and collapse. This collapse, however, can initiate explosive carbon and oxygen burning, and lead to the synthesis of heavy elements, just as was the case with a single star. Ironically, supernovae that result from the evolution of a single star are called Type II, while those involving binaries are called Type I's, or more recently Type Ia's. These types derive from the appearance of the spectra of exploding stars. In the past, astronomers also assigned types by the shape of plots of a supernova's brightness as a function of time, plots often referred to as light curves. These observations had nothing to do with models of supernovae--single or binary--and it is just an unfortunate historical accident that the terminology does not fit the models.


Summary

We have seen how the nuclides belonging to helium through elements on the iron peak can be synthesized in the interiors of stars. $\rm B^2FH$ thought it possible the universe began with pure hydrogen, with all elements synthesized in the interiors of stars. We shall see in the next chapter that galaxies probably began their history with nearly their current helium contents and a few light nuclides. Apart from these, the sequence of nuclear reactions discussed in the present chapter has shown how the chemical elements and most isotopes could be synthesized in stars.

The entire sequence of reactions, from hydrogen and helium burning through the explosive synthesis of the iron peak, represents a slide toward the most stable nuclei, in the neighborhood of iron. During these processes, energy is released by the reactions. In the next chapter we shall take up processes that can push past the most stable nuclei to produce elements beyond the iron peak.

Chapter 6: Synthesis of Heavy Nuclei, Beyond the Iron Peak

The reactions that we considered in detail in the last chapter would be called exothermic by a chemist--they release energy. The energy could be used as fuel by the stars. We calculate the energy that is released in a reaction, such as


\begin{displaymath}\rm 3\;^4He \longrightarrow ^{12}C,\end{displaymath}

by subtracting the masses on the right from those on the left, and multiplying by $c^2$, using Einstein's $E = mc^2$. Other kinds of reactions take place in nature that are called endothermic. In these reactions, it is necessary to supply energy somehow, in order for the reaction to go to completion. This is the situation for the synthesis of all of the chemical elements and their isotopes beyond the iron peak.
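As a worked example, consider the reaction above. Using standard atomic masses ($^4$He = 4.00260 u; $^{12}$C = 12 u exactly, by definition),

\begin{displaymath}
\Delta m = 3(4.00260) - 12.00000 = 0.0078\;{\rm u},
\end{displaymath}

which, at 931.5 MeV per atomic mass unit, corresponds to about 7.3 MeV released each time three helium nuclei fuse to carbon.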

All nuclei beyond the iron peak are unstable, and in principle, if they were left alone in a laboratory, they would decompose and form iron. Don't hold your breath! At normal temperatures and pressures, the radioactive half-lives of what we think of as stable nuclides along the valley of beta stability are many, many times the age of the universe.

The direction of a chemical (or a nuclear) reaction can be changed by the conditions in the ambient medium. Both the temperature and the density of the medium are relevant. This is true of chemical reactions, and it is true for nuclear reactions as well. For example, under high enough temperatures (and low enough densities), the highly stable $^{12}$C nucleus would dissociate into 3 $\alpha$-particles. Energy would have to be supplied for this to happen, but if the energy were available, the reaction would take place. Similarly, with sufficiently high densities, it is possible to create nuclides beyond the iron peak.

George Gamow knew that the easiest way to make heavy elements was to add neutrons to existing nuclei. This is because there is no electrostatic repulsion between the positively charged nuclei and the neutral neutrons. He thought that somehow masses 5 and 8 were bridged at the beginning of the universe, and neutrons could then be added to make heavy elements. $\rm B^2FH$ used this method too, but they and their competitor A. G. W. Cameron used the interiors of stars as the site of the neutron addition.

These founders of the modern theory of nucleosynthesis realized that the isotopic abundances of the SAD could be explained by two quite different neutron processes. On the one hand, there was evidence that neutrons were added very slowly. This process eventually became known as the $s$-process. There was also evidence of a rapid neutron addition, or $r$-process. Interestingly, there was little evidence of neutron addition at intermediate rates. In the next two sections, we shall explore the $r$- and $s$-processes in detail.

Slow Neutron Addition--The $s$-Process

In stars that have passed through core hydrogen and helium burning, there will be a stage where helium is burning in a shell. Within the shell will be mostly $^{12}$C. Outside the helium-burning shell there is a region of nearly pure helium, and beyond that, a region of unburned hydrogen. There may or may not be a hydrogen-burning shell between the helium- and hydrogen-rich zones. Such a star would be a red giant, and its outer envelope would be subject to hydrodynamical currents that help to carry energy from the interior to the surface. These currents could cause some mixing of the material of the three zones that are hydrogen, helium, and carbon rich.

If protons could be mixed into the carbon-rich zones, they could react to produce first $^{13}$N, which would decay to $^{13}$C. This is just the first step of the ``Bethe'' cycle of hydrogen burning. The $^{13}$C nucleus is highly likely to accept an $\alpha$-particle ($^4$He) and emit a neutron (n):

\begin{displaymath}
\rm ^4He + ^{13}C \longrightarrow ^{16}O + n
\end{displaymath}

Because the protons have been mixed into a helium, or $\alpha$-rich environment, this reaction is more likely than the addition of another proton to form $^{14}$N, as in the Bethe cycle.

Another possibility often discussed that could produce free neutrons assumes that $^{14}$N is created, or present in the inner zone to which $\alpha$'s are mixed. The following reactions might then occur:

\begin{eqnarray*}
\rm ^{14}N + ^4He &\longrightarrow &\rm ^{18}F + \gamma,\\
\rm ^{18}F &\longrightarrow &\rm ^{18}O + e^+ + \nu,\\
\rm ^{18}O + ^4He &\longrightarrow &\rm ^{22}Ne + \gamma,\\
\rm ^{22}Ne + ^4He &\longrightarrow &\rm ^{25}Mg + n.
\end{eqnarray*}



Neutrons that have been freed by either of these schemes may react with ambient heavy nuclei. Suppose the number of neutrons per cubic centimeter is not very large. Then the time between captures of a neutron would be relatively long, and the process of addition would be slow. In the process pictured by $\rm B^2FH$, the $s$-process starts with $^{56}$Fe, and neutrons are added one-by-one.

Figure 6.1: The Early s-process. Neutrons are added to seed nuclei, in this case taken to be $^{56}$Fe. In reality, all nuclides present at the time of neutron addition would be subject to reactions similar to the ones shown.

The beginning of the $s$-process is shown in figure 6.1. The stable iron isotopes $^{57}$Fe and $^{58}$Fe are first made. Addition of another neutron results in the unstable $^{59}$Fe, which $\beta $-decays to the stable $^{59}$Co. When $^{59}$Co adds a neutron, the unstable $^{60}$Co is formed, and it $\beta $-decays to the stable $^{60}$Ni. The next two heavier nickel isotopes could then be made, leading to $^{63}$Ni, which has a 100 year half-life. In the classical $s$-process, neutron additions occurred on the order of one every 10,000 years, so the $^{63}$Ni would have time to decay to $^{63}$Cu.
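Written out as reactions, the first few steps just described are (the $\beta $-decays emit an electron and an antineutrino):

\begin{eqnarray*}
\rm ^{56}Fe + n &\longrightarrow &\rm ^{57}Fe + \gamma,\\
\rm ^{57}Fe + n &\longrightarrow &\rm ^{58}Fe + \gamma,\\
\rm ^{58}Fe + n &\longrightarrow &\rm ^{59}Fe + \gamma,\\
\rm ^{59}Fe &\longrightarrow &\rm ^{59}Co + e^- + \bar{\nu},\\
\rm ^{59}Co + n &\longrightarrow &\rm ^{60}Co + \gamma,\\
\rm ^{60}Co &\longrightarrow &\rm ^{60}Ni + e^- + \bar{\nu}.
\end{eqnarray*}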

By a sequence of reactions of this kind, successive nuclides are formed that follow the valley of beta stability. The basic assumption is that whenever an isotope is made with too many neutrons for stability, there is time for a $\beta $-decay to a stable nuclide. Mostly it is a $\beta ^-$-decay that occurs, but in some cases, as in the circuitous route from $^{63}$Cu to $^{65}$Cu, a $\beta ^+$-decay can occur.

The $s$-process path is not unique, for at least two reasons. Even if there were an infinite amount of time for $\beta $-decay, there are some isotopes on the path that can emit either a $\beta ^+$- or a $\beta ^-$-particle. This is illustrated for $^{152}$Eu, which can decay to either $^{152}$Sm or $^{152}$Gd.

Figure 6.2: Nuclide Chart Near Europium. The $^{152}$Eu isotope can emit either a $\beta ^+$ or a $\beta ^-$, and this leads to a branching of the $s$-process path. The alternate paths merge again at $^{153}$Eu. Note that $^{154}$Sm and $^{160}$Gd are not on the $s$-process path. These nuclides are made in the $r$-process, to be discussed below.

There is another process that can take place when a nucleus has too many protons. In addition to $\beta ^+$-decay, some proton-rich nuclei capture an electron from an inner shell. This process is called electron capture, or EC. For our purposes, we really don't need to make a distinction between electron capture and $\beta ^+$-decay. Both processes result in a shift on the nuclide chart that is down one unit in $Z$, and one unit to the right in $N$.

The Signature of the $s$-Process

There are considerable differences in the abilities of nuclides to capture neutrons. At the lowest level of approximation, the probability that a nucleus would capture a neutron would be proportional to the area of the nucleus as seen by an incoming neutron. The bigger that area, the bigger the probability the neutron will hit the nucleus.

Because of quantum mechanical effects, things are not so simple. A neutron may bump into a nucleus, but not be absorbed. There may be no easy way for the excited nucleus that would momentarily be formed to get rid of the neutron's excess energy. So the neutron would be quickly emitted, and it would look as if it had just bounced off. Physicists nevertheless speak of the neutron capture cross section as a measure of the relative ability of nuclei to absorb neutrons. The larger the cross section, the more frequent the neutron captures, other things being equal.

Figure 6.3: The Concept of a Cross Section. The decrease in the intensity $\Delta I$ is related to the fraction of the area $A$ that is blocked by the neutron absorbers.

Figure 6.3 shows schematically how one would measure the neutron capture cross sections experimentally. A beam of neutrons comes in on the left, and hits a slab with area $A$ and thickness $\Delta x$. If the initial beam has an intensity $I$, it decreases by an amount $\Delta I$. This $\Delta I$ increases directly with the number of neutron absorbers, and their cross section, usually designated by the Greek letter sigma ($\sigma$). Physicists can arrange to have a known number of absorbers in the slab. Then they can measure $\Delta I$, and obtain $\sigma$.
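In symbols, if there are $n$ absorbing nuclei per cubic centimeter, the fraction of the beam removed in the thin slab is

\begin{displaymath}
\frac{\Delta I}{I} = -\,n\,\sigma\,\Delta x,
\qquad {\rm so~that} \qquad I(x) = I_0\,e^{-n\sigma x}
\end{displaymath}

for a target of finite thickness. Measuring the attenuation with known $n$ and $\Delta x$ thus yields $\sigma$, which has the dimensions of an area and is commonly quoted in barns ($10^{-24}$ cm$^2$).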

For nuclides that are made through the $s$-process, we expect to find an inverse relation between the neutron capture cross section and the abundance of the nuclide. The larger the $\sigma$, the more likely a given species is to capture a neutron and become an isotope with one more neutron.

Nuclides with very low neutron capture cross sections are likely to increase in abundance relative to their neighbors. One or two of the isotopes of a heavy element may be made almost entirely by the $s$-process. Examples are $^{134}$Ba and $^{136}$Ba. The isotope between them, $^{135}$Ba, is made by both the $s$-process and the $r$-process. We will show in the following section how it is possible to tell this from the nuclide chart. For the present, let us focus on these two isotopes of barium.

Physicists have measured neutron cross sections for both isotopes, and the cross section for $^{134}$Ba is 3.2 times larger than that for $^{136}$Ba. The abundances are just the reverse. The isotope $^{136}$Ba is 3.2 times as abundant as $^{134}$Ba. Enough examples of this kind were found to convince $\rm B^2FH$ and Cameron that slow neutron addition played an important role in the synthesis of heavy nuclides.
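The barium numbers illustrate a quantitative rule of thumb: along an undisturbed stretch of the $s$-process path, the product of cross section and abundance is approximately constant from one nuclide to the next,

\begin{displaymath}
\sigma(^{134}{\rm Ba})\,N(^{134}{\rm Ba}) \approx \sigma(^{136}{\rm Ba})\,N(^{136}{\rm Ba}),
\end{displaymath}

since the factor of 3.2 in one quantity is compensated by the same factor in the other.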

Those isotopes with magic numbers of neutrons have especially small neutron capture cross sections. These ``neutron magic'' species are the cause of the abundance peaks seen in figure 4-5, and marked with an `$s$.' The three peaks result from the neutron magic numbers $N = 50$, 82, and 126.

It is probable that neutron addition has played some role in the synthesis of lighter species too, for example, the elements between helium and the iron peak. However, the (SAD) abundances of these species seem dominated by the processes we have discussed in the previous chapter. These processes involved the fusion of light nuclei, or the general trend toward the maximum nuclear binding which occurs at the iron peak. Most of these reactions were exothermic, with the addition of charged particles to previously existing nuclei. For these reactions the neutron magic numbers are not as significant as for the pure neutron addition of the $s$-process.


Rapid Neutron Addition

We think the $s$-process takes place slowly inside stars that are in advanced evolutionary stages, but are still processing helium to carbon. There is substantial evidence in the SAD abundance pattern that nuclides were synthesized under the kinds of extreme conditions that would occur in supernova explosions. This evidence is seen on the abundance plots of figure 4-5 in the form of the maxima labeled with $r$'s. Two of these maxima are readily seen, and occur at $A$-values smaller than those of the neutron-magic, $s$-process peaks for $N$ = 82 and 126. The corresponding maximum ``behind'' the $N = 50$, $s$-process peak is mostly masked by the effects of the iron peak itself.

What could cause these strangely positioned abundance maxima?

Since there are $r$-maxima associated with two and perhaps three of the $s$-process maxima, we might guess that somehow the neutron magic numbers play a role. In the $s$-process, the special stability of neutron-magic nuclei causes abundance maxima at these positions, but on the valley of beta stability. Let us now extrapolate the $r$-maxima from the valley of beta stability, where the species now reside. We follow the opposite direction from that of $\beta $-decays, that is, down and to the right on the nuclide chart. We show this in figure 6.4 for the $r$-peak near $A = 130$.

Figure 6.4: Extrapolation to the $r$-Process Path. Stable nuclides are shown by the solid shading. The valley of beta stability is toward the upper left corner of the diagram. The diagonal shading shows the $r$-process path explained in the next section. If nuclides reached the region with $A = 130$ by $\beta $-decay, they would have come from the direction opposite the heavy arrows. The isotope $^{130}$Cd is neutron magic. Neutron addition will slow down for the species with the magic value $N = 82$, and nuclides will accumulate at this position. When they $\beta $-decay back to the valley of beta stability, the $r$-process maximum near $A = 130$ will be formed.

Suppose there were some way to displace nuclides from the valley of beta stability shown in figure 4-8 to the right--to the neutron-rich side. We might still expect abundance maxima at the neutron magic numbers, just as for the $s$-process. If all of the nuclides then decayed back to the valley, we would be left with maxima just where they are found in the SAD. How could the nuclei be displaced in this way?

Consider what would happen if stable nuclei in the valley were suddenly bathed in a very dense sea of neutrons. Some of the neutrons would begin to be captured. With many neutrons nearby, there would not be time for the species to decay back to the valley before additional neutrons would be added. All of the stable nuclides would be carried onto the neutron-rich region of the nuclide chart. This is an example of the process we speculated about in the above paragraph.

Now let us ask how far it would be possible to displace the nuclides by this neutron addition. Consider what has happened. The region of the nuclide chart that we have called the valley of beta stability (figure 4-8) might more generally be called an occupied region of the chart. It is occupied because it corresponds to a local mass minimum, as shown in figure 4-9.

Just as marbles would gather in the bottom of a bowl, the nuclides occupy the valley of beta stability. But this happens in the absence of the neutron bath that we have postulated in the paragraph above. The effect of this neutron bath is to push the nuclides up out of the valley. We can picture what would happen with our analogy of marbles in a bowl. Let the marbles that are originally in the bottom of the bowl be white. Now imagine that we could add some black marbles to which we could somehow supply energy, so that they would be constantly rolling back and forth in the bottom of the bowl. In this thought experiment suppose the black marbles always had enough energy to roll up the side of the bowl by as much as four inches, but never more.

The black marbles would push the white ones from the bottom of the bowl. We imagine the white ones would lose energy by friction, so they wouldn't quite get up as far as four inches, but if we put in a lot of black marbles, the white ones would move completely out of the bottom of the bowl.

The neutrons in the bath that we have considered would act like the black marbles, and the nuclides from the valley would be the white ones. The limit of how far the white ones could be pushed is set by the energy of the black ones. The more energy we give them, the further up we can push the white ones. It would be the same way with the nuclides. For a given temperature and number of neutrons per cubic centimeter, the ``occupied'' region of the nuclide chart would be displaced just so far. The more neutrons there are flying about, the further from the valley the occupied region would be.

Eventually, the neutron bath would create an occupied path displaced to the right of the valley of beta stability. The energy necessary to do this would have to come from some external source. We can imagine that a stellar explosion would supply the energy. As the nuclides are displaced more and more from the valley, eventually the basic process of neutron addition would come into equilibrium.

We have encountered the notion of equilibrium in section 5.1. In that case, we considered the ``chemical'' reaction $\rm Fe \longleftrightarrow Fe^+ + e^-$. At a given temperature, there is a relation among the amounts of iron, ionized iron, and electrons per cubic centimeter of material. The higher the temperature, the more Fe$^+$. A similar situation holds for neutron addition; the only difference is that we now deal with a nuclear reaction.

Let us consider a specific nuclide that has come into equilibrium on the neutron-rich side of the valley of beta stability. We consider $^{116}$Ru, an isotope of the element ruthenium, which has a mass number of 116. The relevant reaction is then


\begin{displaymath}\rm ^{116}Ru + n \longleftrightarrow ^{117}Ru + \gamma\end{displaymath}

In equilibrium, the rate of destruction of $^{117}$Ru by $\gamma$-ray photons would be equal to its rate of creation by neutron addition to $^{116}$Ru. Physicists call such reactions $(n,\gamma)$ reactions. The inverse process is called a $(\gamma,n)$ reaction.

Even though the creation and destruction rates of $^{116}$Ru are the same, the relative amounts of the two ruthenium isotopes can vary. If there are more neutrons, the reaction will be pushed to the right, and the $^{117}$Ru to $^{116}$Ru ratio will increase. If there are more $\gamma$'s the ratio will shift to favor the lighter isotope, $^{116}$Ru.

The same situation holds for $^{115}$Ru and $^{116}$Ru, as well as for $^{117}$Ru and $^{118}$Ru. In equilibrium, given a fixed number of neutrons and a fixed number of $\gamma$-rays per cubic centimeter, there will be fixed fractions of all of these isotopes. However, the total number of ruthenium isotopes will be dominated by one or two of them. These one or two determine where the ``occupied'' part of the nuclide chart occurs at $Z = 44$ (ruthenium). In figure 6.4, we have drawn the schematic path for a maximum of abundances of the ruthenium isotopes at $^{116}$Ru and $^{117}$Ru. At the time of the neutron bath, the abundances of all other Ru isotopes would be small or negligible compared to these two.
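The equilibrium ratio of neighboring isotopes is governed by a nuclear analog of the Saha equation of Section 5.1. In a commonly used (and here deliberately simplified) form,

\begin{displaymath}
\frac{n(^{117}{\rm Ru})}{n(^{116}{\rm Ru})} \propto
n_{\rm n}\left(\frac{2\pi\hbar^2}{m_{\rm n}kT}\right)^{3/2} e^{S_{\rm n}/kT},
\end{displaymath}

where $n_{\rm n}$ is the number of free neutrons per cubic centimeter and $S_{\rm n}$ is the neutron separation energy of $^{117}$Ru. More neutrons push the ratio toward the heavier isotope; a higher temperature (more energetic $\gamma$'s) pushes it back toward the lighter one, just as described above.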

Nucleosynthesis by the $r$-Process

In the previous section, we have described a process in which neutron-rich isotopes can be made by what we have called a neutron bath. The process of adding neutrons and emitting $\gamma$-rays eventually comes to an equilibrium, when the $\gamma$-rays begin to disintegrate the neutron-heavy isotopes.

Actually, this is not a perfect equilibrium, because all of the neutron-heavy isotopes are subject to $\beta $-decay. This decay takes a given isotope one step up, and to the left on the nuclide chart. The resulting isotope is less neutron rich than its antecedent, but it has the same mass. Because it has one more proton, it is likely to accept another neutron, and increase its mass. This is essentially what happens on the $s$-process path, but the entire process takes place well off the valley of beta stability. Just as with the $s$-process, we have a piling up of nuclides near the neutron-magic numbers.

The near-equilibrium $(n,\gamma)$- and $(\gamma,n)$-reactions, combined with the $\beta $-decays, populate a strip of the nuclide chart that is displaced to the neutron-rich side. Neutrons are constantly lost from the bath to make heavier and heavier nuclides. It is not known for sure what supplies this bath, but it is probably supplied during a supernova explosion. At some point, the explosion is over and the bath will no longer exist. The nuclides will then decay back to the valley of beta stability. However, the neutron-magic numbers will leave their imprint on the abundance pattern. The $r$-process peaks that are found in the SAD come from displaced, neutron-magic isotopes.

The three neutron-magic, neutron-rich isotopes $^{129}$Ag, $^{130}$Cd, and $^{131}$In are shown with heavy horizontal shading in figure 6.4. This is to indicate an excess population of this region of the nuclide chart during $r$-processing as a result of the $N = 82$ neutron shell closing. Heavy arrows are used for the $\beta $-decay paths from this region to suggest a maximum of the $A \approx 130$ nuclides on the beta-stability valley after the neutron bath is gone.

While the neutrons are being added, the nuclei will run along the $r$-process path toward ever heavier species. Just how far can they go? The most likely fate of very heavy neutron-rich isotopes is fission. No experiments have been done on the relevant isotopes. Their half-lives are all too short for laboratory studies. It is known that fission can be induced by adding neutrons to heavy species, such as uranium or thorium isotopes. These typically split into two fragment nuclei of unequal size. Both fragments are initially neutron rich, and move back toward the valley of beta stability by $\beta $-decay.

If this kind of fission occurred in a neutron bath, the fragments would help to define the neutron-rich $r$-process path, so that a kind of cycling process could occur in which heavier nuclides are produced by neutron addition. Fission at the heavy end then feeds nuclei back toward the lower end of the $r$-process path. Because of the nature of fission, if this cycling went on for a relatively long time, there would be few nuclides with $A$-values much smaller than about 110, because of the nature of what are called fission yields.

When nuclei fission, the fragments have typical sizes that depend on what causes the fission. Very small fragments are quite rare. Asymmetrical fission may divide an original heavy nucleus with mass number $A$ into two nuclides with mass numbers near $0.4A$ and $0.6A$. For $A \approx 240$ that would give $A \approx 96$ and $A \approx 144$. Negligibly few fragments would be expected that are much smaller than 96.

There is another kind of $r$-process that might occur. Suppose nuclides all along the valley of beta stability were exposed to a rich neutron bath, but only for a short time. Suppose there was time only for about a dozen neutron additions. Then all species would be displaced to the neutron-rich ``hillside'' of the nuclide chart, but when the bath dissipated, the isotopes would decay back to the valley. If this sort of process took place, the $r$-process peaks would still be evident. A general decline in abundance with $A$ would still be present, although all nuclides would increase in mass number by about 12 mass units.

The original abundance decline (e.g. of the SAD) would still be present after such a short, intense exposure. All species would just be displaced to higher $A$. After $\beta $-decay, the $r$-process maxima would still be evident.

The $r$-, $s$-, and $p$-Contributions

In figure 6.4, we can see the $s$-process path, running from $^{121}$Sb through $^{132}$Xe. Nuclides along this path that may have contributions from either the $r$- or $s$-process are labeled $rs$. Examples are $^{121}$Sb and $^{126}$Te. On the other hand, there is no way for the $r$-process to make a contribution to nuclides such as $^{122}$Te or $^{123}$Te. Such isotopes are $s$-only isotopes, and are marked $s$. Neither $^{124}$Xe nor $^{126}$Xe can be reached by either the $r$- or the $s$-process. $\rm B^2FH$ assigned them to the $p$-process, which we shall discuss briefly below.

The $s$-process is considered to be much better understood than the $r$-process. Abundances of $s$-only species have enabled nuclear astrophysicists to determine the nature of the neutron exposures. Models of the $s$-process include temperatures and densities when the slow neutrons were added, and these conditions are expected in highly evolved, red giant stars. Thus we think we know not only the nature of $s$-processing, but also the astrophysical circumstances.

In the SAD, it is clear that there was no single exposure. The pattern of abundances can only be matched if it is assumed that a relatively small amount of material was exposed to many neutrons, while the majority of material received a much lower exposure. A ``typical'' chunk of material that formed the SAD was hit with 20 to 40 slow neutrons. It is possible that pulsing and mixing in red giants could provide the distribution of neutron exposures that is needed. More likely, the SAD pattern is the result of processing through several stars. Material from one star was returned to the interstellar gas of the galaxy, to be processed by the next stellar generation.

Predictions from our models agree with the observed abundances as well as one could hope. The goodness of the fits makes it possible to disentangle the $s$- from the $r$-process contributions to the $rs$-nuclides. We can then make a plot of the individual $r$-, $s$-, and also the $p$-contributions. Such a plot is shown in figure 6.6.

Figure 6.6: Abundances in the SAD of $r$-, $s$-, and $p$-Process Nuclides.

The general character of SAD abundances from all three processes shows one remarkable similarity: all abundances decline with roughly the same slope. Given the abundances from one of the processes, and the vertical displacements, one can make a rough prediction of the abundances from the other two.

Clearly there are many fewer $p$-nuclides than either $r$- or $s$-products. The latter two products are produced in roughly similar amounts--provided one smooths over the peaks due to the neutron magic numbers.

The distribution of $p$-process abundances may be the easiest to understand. What we need to do is figure out a way to make the $p$-nuclides from existing species. Suppose we take a small fraction of the existing nuclides, let us say from the combined $r$- and $s$-processes, and convert them to proton-rich nuclides. Then the resulting abundances, plotted versus $A$, would have the same overall shape as the combined $rs$-abundances. Take a specific case. Suppose some process turned 1 nuclide in 20 of the nuclides made by the $r$- and $s$-processes near $A = 110$ into proton-rich species, and suppose the same process worked at the same rate near $A = 180$. Then the ratio of the new $p$-nuclides near $A = 110$ to the new $p$-nuclides near $A = 180$ would be the same as the ratio of the original $r$+$s$ species.
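
A few lines of Python make this proportionality concrete. The abundances below are made up for illustration; only the 1-in-20 fraction and the two $A$-values come from the discussion above.

    # Hypothetical combined r+s abundances (arbitrary units) at two
    # mass numbers; the numbers are invented for illustration only.
    rs_110, rs_180 = 1000.0, 10.0
    fraction = 1 / 20            # same conversion fraction at both masses

    p_110 = fraction * rs_110    # new p-nuclides near A = 110
    p_180 = fraction * rs_180    # new p-nuclides near A = 180

    # The p-to-p ratio equals the original rs-to-rs ratio:
    print(p_110 / p_180, rs_110 / rs_180)    # both print 100.0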

This is basically what people think must have happened. A $p$-process converted somewhere between 1 in 10 and 1 in 100 of nuclei beyond the iron peak into proton-rich species. The rough shape of the $p$-distribution resembles that of the $r$- and $s$-process because it was derived from preexisting nuclides that were made by the latter processes.

What processes might have converted this small fraction of $rs$-nuclei into proton-rich species? Early speculation centered on reactions in which a fast proton ejected a neutron or was simply captured. More recently attention has focused on high-temperature processes that might take place in supernova explosions. It turns out that in models of supernovae of Type Ia, conditions are right for the ejection of neutrons by $\gamma$-rays. These so-called $(\gamma,n)$-reactions move nuclei to the left on the nuclide chart, where the $p$-nuclides are located. Modern calculations show that these reactions can account for the small fraction of heavy, proton-rich isotopes found in the SAD.


Some Unsolved Problems

It may be fair to say that the source of the neutrons is not known for certain for either the $r$- or the $s$-process.

We are reasonably sure the $s$-process takes place in red giants because of the observation of the unstable element technetium in the atmospheres of some of them. There are no known stable isotopes of technetium, and the longest-lived isotope lasts only 4 million years--almost certainly much less than the stellar lifetime. Moreover, the same stars that show technetium in their atmospheres also show an excess of those elements whose abundances are dominated by neutron-magic isotopes, strontium ($^{88}$Sr) and barium ($^{138}$Ba).

The $r$-process is a different story. Monumental efforts have been made to model the exploding stars that are probably the source of both the $r$-process and silicon burning toward the iron peak. But the problems are formidable, and even the fastest computers are not capable of including details that may be essential. Moreover, observational constraints on supernovae are not very strong. We observe mostly the ejected gas from these explosions, but the $r$-processing must take place deep down, in the invisible layers.

Experts have consistently said that the ``site'' of the $r$-process is uncertain. The supernova explosions proposed by $\rm B^2FH$ and Cameron remain plausible. But Virginia Trimble, who has written comprehensive reviews of nucleosynthesis, said in 1991 that several other hypotheses were under active consideration. Of these, perhaps the most audacious was that $r$-process material might be released if a neutron star were torn apart by a black hole!

Neutron stars are objects that have been packed so closely by gravitational forces that an entire solar mass of material is contained in a sphere some 10 kilometers in radius. Within such an object, the electrons have mostly been squeezed into the protons to form neutrons. If such a star were to pass sufficiently close to a black hole, it might be ripped apart, and some of its material released to the interstellar medium. Why would this material show the $r$-process pattern?

Studies of the nuclear fragments that result from spontaneous fission show a pattern closely resembling that of the $r$-process. It is therefore entirely plausible that ``neutronized'' matter, matter from a neutron star, and therefore made mostly of neutrons, would break up into similar fragments.

The entire concept of the $r$-process pattern rests on the assumption that the neutron-magic stability will be manifested in very neutron-rich species (figure 6.4), far off the valley of beta stability. While this still seems like a good bet, it remains to be proved. The magic numbers have the particular values they do when the nuclei are spherical. However, far from the valley of beta stability, they may distort in shape, and if this were the case, the magic numbers might shift or become ill defined.

We now turn to an insightful question raised by the cosmochemist Hans Suess and his coworkers. The $r$- and $s$-nuclides are thought to be made in vastly different sites--static red giants for the $s$-process, and most likely, exploding stars for the $r$-process. Why, then, are the abundances of the two species so closely correlated? If you smooth over the $r$- and $s$-peaks of figure 6.6, the two curves run nearly parallel to one another, and at nearly the same values.

Hans Suess and Dieter Zeh suggested that this pattern could arise if an $r$-process abundance distribution were weakly exposed to (slow) neutrons. Then a fraction of the $r$-nuclides would be converted to heavier species which would have the $s$-process signature. This hypothesis was never accepted by mainstream nuclear astrophysicists, and remains, as of this writing, apocryphal.

At a meeting in Belgium in 1978, several colleagues and I spent an evening with William Fowler. Among those present was the German astronomer Hartmut Holweger. I have known Hartmut since the beginning of our careers because we both specialize in the determination of stellar abundances. We both thought the question raised by Suess needed more of an explanation than we had seen in the literature. After several drinks, we got the nerve to ask Fowler to comment on Suess's point about the $r$- and $s$-abundance correlation. Fowler said he could not explain it, but that there had to be some value for the ratio of the two. He shook his head, and said that as far as he could see, a ratio near unity was as plausible as any other--the correlation could just be due to chance.

Nearly twenty years later, I still puzzle over this question. The ``smoothed'' $r$- and $s$-abundances are not similar at only one value of $A$, but over the range shown in figure 6.6. Until the site of the $r$-process is clarified, I shall consider the $rs$-abundance correlation an open question.

Perhaps the most remarkable aspect of the concept of rapid neutron addition is that the general scheme suggested by $\rm B^2FH$ nearly half a century ago remains what it was originally: a plausible, coherent scheme.


Summary

Nuclei beyond the iron peak require endoergic reactions; they need energy rather than provide it to power the stars. Stars must supply this energy through gravitational contraction. Most nuclei beyond the iron peak are made by neutron addition under two vastly different sets of conditions. In static red giant stars, neutrons are added slowly. The neutron addition populates the valley of beta stability, with abundance maxima at the neutron-magic values of $N = 50$, 82, and 126.

Rapid neutron addition can populate a band displaced to the right on the nuclide chart. Local abundance maxima are also created here at the magic numbers 50, 82, and 126, but when the neutron bath dissipates, the nuclei $\beta $-decay (up, and to the left, figure 6.4), creating $r$-process peaks at smaller values of $A$ than the $s$-peaks. Proton-rich nuclei are relatively rare, and may be created from a small fraction of the $rs$-species. The currently favored process for this is ejection of neutrons by $\gamma$-rays.

Currently, the $s$-process is understood better than either the $r$- or $p$-process. For the former, we can observe stars that are almost certainly bringing the products of slow neutron addition to their surfaces as we watch. We have mentioned two plausible sources of the slow neutrons. Probably both the $r$- and $p$-processes take place in stellar explosions. In spite of intense efforts to understand these critical astrophysical phenomena, the problems are formidable, and much remains to be learned.

Chapter 7: Non-Stellar Nucleosynthesis


A Universal Helium Abundance

The founders of nucleosynthesis thought it possible that stellar nucleosynthesis could account for all of the chemical elements beyond hydrogen. According to their picture, the Galaxy would originally have been pure hydrogen, and would gradually have become enriched in heavy elements as stellar generations were born and died. Fred Hoyle, the `H' of $\rm B^2FH$, was a strong advocate of a theory (model) of the universe in which hydrogen was spontaneously created out of nothing. This was the steady-state theory, in which the average properties of the universe did not change with time. We mentioned this theory briefly in Chapter I.

George Gamow and his coworkers, of course, tried to make it plausible that all of the chemical elements were manufactured in what Hoyle flippantly called ``the Big Bang.'' They were unable to transcend the problem of the missing stable nuclides with masses of 5 and 8. During the time they struggled to get past this difficulty, it became clear that there was no such thing as a universal abundance pattern. While most stars and gaseous nebulae had quite similar abundances, some very old stars had markedly different abundances from that of the sun. Moreover, the theory of stellar structure and evolution had progressed to a stage where it was possible to say that some very old stars were extremely metal poor.

This was just what one would expect if the elements were synthesized in stars. The oldest stars would have been formed from a gas that had not experienced much enrichment from stellar nucleosynthesis. They would be metal poor. Of course, we have to rule out the possibility that metals might have been added to the old stars during their great lifetimes. As far as we know, this possibility is not realized to any significant extent. Therefore, astronomers interpret the abundances that they see on the surfaces of old stars as pretty much indicative of their original compositions.

Helium itself is an element that is not easily analyzed in these old stars. Therefore, for about a decade following the successful ideas of $\rm B^2FH$ and Cameron, it was thought that helium was probably about as deficient in the old stars as some of the heavier metals. Then in the mid 1960's it became more and more difficult to demonstrate that the abundance of helium, anywhere in the universe, was significantly different from its solar value, about one tenth that of hydrogen.

As far as helium was concerned, therefore, it appeared that the abundance was universal. The theory to account for this was already in place, as a result of the earlier work following Gamow. It is only natural to try to explain a universal abundance by processes that took place in the universe as a whole. The difference between the current processes and those of Gamow and his colleagues is that the nucleosynthesis was now confined to the very light nuclei. Mostly $^4$He, but a little deuterium, $^3$He, and $^7$Li could be made in the Big Bang. Interestingly, one of the early, post-$\rm B^2FH$ writers on this topic was Fred Hoyle himself.

The seminal paper on the early production of helium and the light elements did not confine itself to what we now call cosmological production, or Big-Bang nucleosynthesis. Robert V. Wagoner joined Fowler's laboratory in 1963, and was soon at work with Fowler and Hoyle on the first extensive calculations of nuclear reactions that could happen in the early universe. But their 1967 paper was noncommittally entitled ``On the Synthesis of the Elements at Very High Temperatures.'' The authors gave considerable attention to the possibility that significant nucleosynthesis might occur in an early generation of very massive stars.

The kinds of ``massive'' stars considered by these workers are ones for which there is only the most fragile, circumstantial evidence. Astronomers today know the masses of a number of stars very accurately from their orbital motion about one another. The most massive of these stars is no more than about 25 times the mass of the sun. There is some evidence that a few peculiar objects could be as much as several hundred times the solar mass, but even these objects are small compared to the thousand or more solar mass stars considered by Wagoner, Fowler, and Hoyle.

Everything we know about the structure of stars tells us that thousand solar mass objects would not be stable. They would explode, or they would collapse to form black holes. Either way, there would be nothing left beyond their debris to say they ever lived and died. Wagoner, Fowler, and Hoyle explored the possibility that some of their debris could be the helium presently thought to be nearly universal in abundance. This remains a possibility, but it is not one favored today.

Helium is made in stars from hydrogen, mostly during the main sequence phase. The amount of helium that is made in this way is considerably less than that which is now thought to be the universal helium abundance. We have an idea of how bright our own Galaxy is, and also, roughly, how old it is. Therefore, we may ask how much hydrogen would have been burned into helium to keep stars in the Galaxy shining--at the current rate, for its age. This amount of helium is less than that in the SAD, which we now think is nearly universal.

We might suppose the Galaxy was composed of much brighter stars in the past. This might well be possible, but there is an additional problem: in all of the stellar evolution schemes that we have explored, the helium that is made inside a star is either burned to make carbon and oxygen, or remains locked in very low mass stars that never explode or return gas to the interstellar medium.

From a number of viewpoints, then, cosmological nucleosynthesis of helium and a few light nuclei is currently attractive. Our Big Bang models show that it is straightforward, most of us now believe in the Big Bang, and there is no alternative way to account for the amount of helium we now find.


The First Three Minutes

In 1965, Arno Penzias and Robert W. Wilson detected a universal background of microwave radiation. This discovery has had an enormous impact on modern ideas about cosmology and models of the universe. The radiation itself is similar to the microwaves used in telephone and computer-data transmissions. However, measurements show that it is exquisitely matched to the radiation that would occur naturally in a cavity with a uniform temperature. Such radiation is also called ``black body'' radiation. Penzias and Wilson measured its temperature to be about 3 Kelvin.

A universal radiation had been predicted by the incomparable Gamow and his collaborators, but their work had been largely forgotten by the time it was actually discovered. This discovery may be the strongest factor in the now general acceptance of the origin of the universe in a hot Big Bang, and for it, Penzias and Wilson won the Nobel Prize for physics in 1978.

The title of this section is from a very fine popular book on the early universe by the American physicist Steven Weinberg (1933--), also a Nobel Laureate. Weinberg describes events in the history of the universe from about the time the fundamental particles known as quarks combine to form protons and neutrons until the time helium and a few other light nuclides emerge with ``universal abundances.'' Weinberg's book covers the universe from the time when its age is a few hundredths of a second to somewhat over three minutes.

Physicists who are concerned with fundamental laws of nature have a deep interest in the history of the universe, even before the starting point of Weinberg's book. We need not discuss this topic here, as there are now many popular as well as technical books that deal with the very early universe. We have listed a few of them in an appendix.

It might come as a surprise to some that these early moments in the history of our universe are considered to be rather well understood by professionals. Can people seriously maintain they know anything about what went on before the birth of the earth, the sun, or the Galaxy--at a time when the entire universe was in a state of compression similar to that at the center of stars? We have mentioned such audacities in an earlier chapter. We told the reader that people who work on cosmological problems are mostly dealing with the hypothetical properties of models. All models represent simplifications of the ``reality'' they are designed to represent. Scientists have models of stars, atoms, and planets. Those scientists who do cosmology have models of the universe.

In the currently favored Big Bang models, the first three minutes are generally thought to be well understood. Some writers who discuss this period make no distinction between their models and ``the'' early universe. It is probable that many of them think there is no important distinction to be made! Perhaps the reader will forgive me if I follow this tradition, and simply speak of conditions in the early history of the universe rather than in our models of it. It is actually more important for the professional than the layman to be aware of the difference between the model and reality.

Most models have what are called parameters--numbers whose values should come from the real world. For example, when we make a model of the sun, the mass of the model is a parameter that is fixed by our knowledge of the mass of the real sun.

Density is a critical parameter that must be chosen for models of the early universe. If the density is fixed at some instant, when the temperature also has a given value, then the models will determine their values at other times. Both density and temperature decrease as the universe expands. It turns out that the universal abundances of the light elements are quite sensitive to the temperature and density in the early universe.
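
A short sketch shows how such a model ties these quantities together. I assume the standard radiation-era scaling, in which the temperature falls as one over the square root of the time; the anchor of $10^{10}$ Kelvin at one second is a commonly quoted order of magnitude, and the sample nucleon density is likewise an assumed, illustrative number.

    # Radiation-era scaling: temperature falls as 1/sqrt(time), and the
    # nucleon density falls as the cube of the temperature. Both anchor
    # values below are order-of-magnitude assumptions for illustration.
    T_1s = 1.0e10      # Kelvin at t = 1 second (assumed anchor)
    n_1s = 1.0e22      # nucleons per cm^3 at t = 1 second (assumed)

    def temperature(t):        # t in seconds
        return T_1s / t ** 0.5

    def nucleon_density(t):    # nucleon number density goes as T^3
        return n_1s * (temperature(t) / T_1s) ** 3

    for t in (0.01, 1.0, 180.0):    # a hundredth of a second to 3 minutes
        print(f"t = {t:7.2f} s:  T = {temperature(t):.1e} K,  "
              f"n = {nucleon_density(t):.1e} per cm^3")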

It is easier to make deuterium ($^2$H) in the early universe than in stars because neutrons are still available. When nucleons are first formed from their parent quarks, neutrons are nearly as abundant as protons. Eventually, they $\beta $-decay, but with a half-life of 10.3 minutes, longer than the ``Weinberg epoch.'' We might think it possible then to make helium, $^4$He, by simple neutron addition: first $^2$H, then tritium, or $^3$H, which would $\beta $-decay to $^3$He. One more neutron addition would lead us to an $\alpha$-particle, or $^4$He. The problem here is the $\beta $-decay of tritium, which has a half-life of 12.3 years. There is not time for this in the early universe.
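
The arithmetic behind this remark is a one-line decay law. Here is a sketch using the 10.3-minute half-life quoted above; it shows that most of the neutrons survive the first three minutes.

    import math

    # Fraction of free neutrons surviving beta-decay after t seconds,
    # for the 10.3-minute half-life quoted in the text.
    half_life = 10.3 * 60    # seconds

    def surviving_fraction(t):
        return math.exp(-t * math.log(2) / half_life)

    # After three minutes, roughly 80 per cent of the neutrons remain
    # to be locked into helium:
    print(f"{surviving_fraction(180):.2f}")    # about 0.82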

The synthesis of helium in the early universe therefore requires some charged particle reactions. The reactions leading to helium are:

\begin{eqnarray*}
\rm ^1H + n &\longrightarrow &\rm ^2H + \gamma, \\
\rm ^2H + ^2H &\longrightarrow &\rm ^3He + n, \\
\rm ^2H + ^3He &\longrightarrow &\rm ^4He + ^1H, \\
\rm ^3He + ^3He &\longrightarrow &\rm ^4He + 2\;^1H.
\end{eqnarray*}



We have seen that high temperatures are necessary for charged-particle reactions, because the incoming particle must climb the Coulomb barrier. On the other hand, high temperatures mean that energetic photons, or $\gamma$-rays, will be present. The deuterium nucleus is rather loosely bound, and can be destroyed by $\gamma$'s.

The reactions above also go much faster when the densities of the reactants are high. Once the nucleons appear, their densities decrease with time as the universe expands. But the temperature is also dropping with the expansion. It therefore turns out that there is a critical balance between the temperature and the density in these models if the observed universal helium abundance and the abundances of the other light elements are to be explained.

If the density of the nucleons is high enough, all of the reactions like the ones above go to completion, and there is no hydrogen or any of the lighter species left. Since there is lots of hydrogen observed in stars, such densities need not be considered. For lower nucleon densities, it is still possible that all of the deuterium and $^3$He could be used up making $^4$He. If we consider still lower densities of nucleons, then the amount of $^4$He would be much less than the amount that appears to be observed ``universally.''

Actually, the experts think that the helium abundance may have increased slightly as the universe aged. They have tried to find the lowest helium abundance in the universe, and take that as the amount that emerged from the Big Bang.

The theory of the early universe also gives the relative amounts of deuterium, $^3$He, and $^7$Li produced in the Bang. The latter nuclide is past the mass 5 gap, and only very small amounts of it come out of the models. Nevertheless, observations of all of these light species might be used to fix the nucleon density at a given time in the history of the early universe.

A model that predicts the proper amounts of the light nuclear species that come from the Big Bang also predicts the temperature while the relevant reactions were taking place. This temperature may be directly related to the number of photons per cubic centimeter. As the universe expands, this number of photons decreases, just as the number of particles in a cubic centimeter decreases. The ratio of particles to photons in a cubic centimeter will therefore be constant as the universe expands.

The current density of nucleons in the universe may be estimated by adding up all of the matter in stars and stellar systems. We may combine our theory of the early universe with the observation of the cosmic background radiation to provide an important check on this calculation. The number of photons in a unit volume depends only on the temperature of the radiation. Thus, we can calculate the present number of photons per unit volume. Our theory of the early universe also provides values of the temperature and the nucleon density at the time when the light elements were created.

The ratio of the number of photons to nucleons may be fixed by our models of primordial nucleosynthesis. The same ratio holds today. Since we know the number of photons per cubic centimeter today from the cosmic background radiation, we can easily calculate the corresponding number of nucleons. In this way, measurements of abundances of light nuclei are directly related to conditions in the early universe.
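
Here is a sketch of that bookkeeping. The photon density follows from the blackbody formula and the measured temperature of the background radiation; the nucleon-to-photon ratio is the kind of number the light-element abundances are supposed to fix, and the value used below is only an assumed illustration.

    import math

    # Photon number density of blackbody radiation at temperature T:
    # n_photon = (2 * zeta(3) / pi^2) * (k*T / (hbar*c))^3
    k_B   = 1.380649e-23       # J/K
    hbar  = 1.054571817e-34    # J s
    c     = 2.99792458e8       # m/s
    zeta3 = 1.2020569

    def photon_density(T):     # photons per cubic centimeter
        n_per_m3 = (2 * zeta3 / math.pi**2) * (k_B * T / (hbar * c))**3
        return n_per_m3 * 1e-6

    n_photon = photon_density(2.7)    # cosmic background temperature
    print(f"photons:  {n_photon:.0f} per cm^3")         # roughly 400

    # Assumed nucleon-to-photon ratio, of the order suggested by the
    # light-element abundances (illustrative value only):
    eta = 5e-10
    print(f"nucleons: {eta * n_photon:.1e} per cm^3")   # a few times 1e-7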


Abundances of the Light Elements

A geochemist once called efforts to find the primordial composition of the solar system a search for the Holy Grail. It seems astronomy has more than one example. Another Holy Grail is the primordial abundance set of the critical light nuclides, $^2$H, $^3$He, $^4$He, and $^7$Li. Only $^4$He (the $\alpha$-particle) is a robust nucleus. The other species are fragile, and can easily be destroyed in stars. In the sun they would be consumed at temperatures that occur roughly halfway to the center. It would not even be necessary for them to be in the hydrogen-burning solar core.

Unfortunately, the more robust $^4$He is less sensitive to likely conditions of the early universe than the more fragile light nuclides. It is therefore necessary to know its abundance very accurately to be able to extract information from it about the early universe. It is also necessary to choose the objects to be analyzed with great care, because there is evidence of a net increase in the helium abundance with the age of the system analyzed. Recent work has focused on extragalactic emission-line regions with low abundances. Other popular sources for abundances of light species are old stars, and absorption line systems between us and quasars.

Experts in this field quote spectroscopically determined helium abundances to three figures--more accuracy than is justified in any other astrophysical spectroscopic abundance determination known to this writer! Deuterium and lithium abundances are more sensitive to the conditions in the early universe, but it is more difficult to interpret the observational results. Lithium is clearly destroyed in stars like the sun, and it is also found in enhanced abundances in certain young stars. Deuterium is difficult to observe in atomic form. It can be observed in a variety of interstellar molecules, but the results are not easy to interpret. In molecule formation, the behaviors of hydrogen and deuterium are significantly different. It is expected that deuterium will be preferentially concentrated in the molecular form.

Probably the best observations of deuterium come from observations of the atomic form in interstellar clouds. There is a wavelength shift between the lines of deuterium and ordinary hydrogen that allows spectral lines from the two kinds of hydrogen atom to be seen separately. Measurements have been made from clouds in our Galaxy as well as in distant systems. Most determinations of this kind give a ``primordial'' deuterium-to-hydrogen ratio of about one part in 100,000, with an uncertainty that may be as large as a factor of ten.

What these determinations of light-element abundances show is a consistency with the notion of cosmological nucleosynthesis. Active work is devoted to improving the reliability of the determinations in the hope of constraining the cosmological models. The observed abundances and the theory of early nucleosynthesis constrain the current density of nucleons. This density is important for the history of the universe.

If the density of matter in the universe exceeds a certain critical value--about one nucleon in 100,000 cubic centimeters--it will eventually cease its expansion and contract to another fireball. The best estimates from primordial abundances of the light nuclei are generally consistent with densities about one to ten per cent of the critical value.
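
The critical density itself comes from a simple formula, $\rho_c = 3H_0^2/(8\pi G)$, where $H_0$ is the Hubble constant and $G$ is the gravitational constant. Here is a sketch assuming $H_0$ = 100 kilometers per second per megaparsec, a value commonly adopted at the time; the answer scales as the square of $H_0$.

    import math

    # Critical density rho_c = 3 H0^2 / (8 pi G), expressed as a
    # number of nucleons per cubic centimeter.
    G   = 6.674e-11      # m^3 kg^-1 s^-2
    m_p = 1.6726e-27     # proton mass, kg
    Mpc = 3.0857e22      # meters per megaparsec

    H0 = 100e3 / Mpc     # assumed: 100 km/s/Mpc, converted to 1/s
    rho_c = 3 * H0**2 / (8 * math.pi * G)    # kg per m^3

    n_crit = rho_c / m_p * 1e-6              # nucleons per cm^3
    print(f"{n_crit:.1e} nucleons per cm^3")          # about 1e-5
    print(f"one nucleon per {1 / n_crit:,.0f} cm^3")  # about 90,000,
                                                      # i.e. roughly 100,000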

We have already mentioned the plausible existence of large amounts of ``dark'' matter in some form other than protons and neutrons--forget the electrons; their mass is small. As far as we know, this dark matter would not influence any of the nuclear reactions in the early universe, so the abundances of the light elements have nothing to say about this putative material. It is revealed by its gravitational effects over large distance scales--galactic and extragalactic. Many cosmologists believe there is just enough of this dark matter to give our universe the critical density. Their reasons are more aesthetic than scientific. They like models that require this critical density and no other value. In this case, the universe would ``just'' expand forever--a tad more matter, and it would eventually collapse.


Cosmic Rays

Cosmic rays are very diffuse, energetic particles. They pervade the interstellar medium, but with numbers far lower than those of ordinary interstellar hydrogen gas. In spite of their rarity, their high energies make them important in astronomy. These energies are conveniently measured in a unit called the MeV, which stands for a million electron volts. One MeV is a typical energy for $\gamma$-rays from an atomic nucleus. By contrast, one `eV' (one electron volt) is typical of the excitation of the electrons in an atom.

Violent activity on the surface of the sun is known to create cosmic rays with energies up to perhaps 100 MeV. Beyond this limit, cosmic rays originate from sources that are still uncertain, somewhere within the Galaxy. Galactic cosmic rays are generated up to huge energies--a plot of their energies may show $10^{20}$ eV at the extreme end. Ten of these particles have enough energy to light a 100 Watt bulb for a second. The higher the energy, the rarer the cosmic-ray particles, of course. Typical measurements might be made on particles with several thousand MeV per nucleon.
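
The light-bulb comparison is easy to verify with a two-line conversion from electron volts to joules.

    eV = 1.602e-19                   # joules per electron volt

    energy_one = 1e20 * eV           # one extreme cosmic-ray particle
    energy_ten = 10 * energy_one     # ten of them

    bulb_watts = 100.0
    print(f"one particle: {energy_one:.0f} J")
    print(f"ten particles run a {bulb_watts:.0f} W bulb for "
          f"{energy_ten / bulb_watts:.1f} s")    # about 1.6 seconds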

Measurements of abundances in cosmic rays have been made from balloons and spacecraft. They show that the particles have an abundance pattern that resembles that of the SAD in many ways. Most of the cosmic rays are protons, and roughly 10 per cent are $\alpha$-particles. One of the significant differences between cosmic rays and the SAD comes in the composition of other light nuclei.

Figure 7.1: Abundances in Cosmic Rays Compared with the SAD. The vertical scales have been adjusted so that the carbon abundance is the same for both the SAD and cosmic rays.

In the SAD, the abundances of deuterium and $^3$He, as well as of all isotopes of the elements lithium, beryllium, and boron, fall in a trough between helium ($^4$He) and carbon. In the cosmic rays, that trough is largely filled. A similar filling of the trough between silicon and the iron peak can also be seen in figure 7.1.

It is relatively straightforward to see how the SAD abundance troughs might be filled in the cosmic rays. First, we may suppose that the cosmic-ray nuclei began with an SAD composition. The rays are all bare nuclei, so unlike the corresponding atoms, they have an electrical charge equal to the number of their protons. Because of this charge, the motion of the cosmic rays is severely constrained by weak magnetic fields that occur within our Galaxy. We are not sure how these weak fields originate, but their presence is now well established.

Any charge that moves across magnetic field lines will experience a force that is perpendicular to both the field and the direction of motion of the charge. This is shown in figure 7.2. The component of the charge's velocity parallel to the field is unaffected. Thus the net motion of the charge must be in the direction of the field. In the Galaxy, the magnetic field lines are sufficiently tangled that the cosmic-ray particles move along them like insects crawling along a tangled ball of wire.
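
The picture of insects crawling along a ball of wire can be made quantitative with the radius of the spiral, $r = p/(qB)$, where $p$ is the particle's momentum, $q$ its charge, and $B$ the field strength. Here is a sketch for a proton of typical cosmic-ray energy; the field strength of a few microgauss is an assumed, representative value.

    import math

    # Radius of the spiral a relativistic proton traces around a
    # magnetic field line: r = p / (q B).
    q     = 1.602e-19     # proton charge, C
    GeV   = 1.602e-10     # joules per GeV
    mp_c2 = 0.938 * GeV   # proton rest energy
    c     = 2.998e8       # m/s

    kinetic = 1.0 * GeV                      # assumed typical energy
    E_total = kinetic + mp_c2
    pc = math.sqrt(E_total**2 - mp_c2**2)    # relativistic momentum * c
    p  = pc / c

    B = 3e-10                                # assumed: 3 microgauss, in tesla
    r = p / (q * B)

    print(f"gyroradius: {r:.1e} m = {r / 3.086e16:.1e} parsec")

The spiral comes out less than a millionth of a parsec across, so on galactic scales the particle is effectively tied to its field line.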

Figure 7.2: A Charged Particle Spirals Around A Magnetic Field Line.

The most highly energetic cosmic rays may escape the Galaxy altogether, but these particles are very rare, and we shall not discuss them here. Typical cosmic-ray particles, of the kind whose abundances are shown in figure 7.1, travel about the Galaxy for a time that, as it turns out, can be estimated.

The time a typical cosmic ray particle travels through the Galaxy is obtained by dividing the length of the path by the velocity. The velocity for these particles is just a little less than that of light. We may assume the velocity is known, and see how it might be possible to estimate the distance such particles travel.

We can get an estimate of the distance the cosmic rays travel in the Galaxy from their composition--indeed, from the filling in of the abundance troughs for the light elements and for the scandium low (figure 7.1). Let us make the assumption that the cosmic rays originally had something like the SAD abundance distribution. Then they were accelerated, by a mechanism that is still uncertain, perhaps by pulsars, perhaps by supernovae. The accelerated particles move swiftly through the Galaxy along the lines of magnetic force, and collide with material in the interstellar medium. The particles of the interstellar medium are also mostly atomic hydrogen.

Consider now a nucleus of one of the relatively abundant SAD species, carbon, nitrogen, or oxygen (CNO). To one of these particles, a (stationary) hydrogen atom looks like it is approaching with a velocity near that of light. Indeed this approaching proton is capable of smashing into the cosmic-ray nucleus, and fragmenting it. The reaction is different in nature from the ones we have been considering in nucleosynthesis, where an excited or compound nucleus is formed. Physicists call reactions with very fast particles spallation, possibly from the Middle English word spalle, which meant ``a chip.''

These ``chips'' from the CNO particles are typically protons, and the light nuclear species that fill in the light-element trough. It is possible to make measurements of such spallation processes in modern physics laboratories. What these measurements show is that we would expect roughly equal amounts of chips of various kinds to be formed by spallation. Moreover, the observed number of chips in the cosmic rays would be produced if the typical lifetime of the cosmic rays were a few million years. For lifetimes greater than this, more chips than are observed would be produced; for shorter lifetimes, the number of light fragments would fall short of what is observed in the cosmic rays.

Typical confinement times are also set by radioactive nuclides, such as $^{10}$Be, that are observed in the cosmic rays. The half-life of this isotope is 1.6 $\times 10^6$ years. The lifetime of these nuclei would be increased by relativistic time dilation, in reasonable accord with the confinement times obtained from the light-nuclei production by spallation.
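
The time-dilation correction is simple to estimate. The kinetic energy per nucleon assumed below is an illustrative value of the ``several thousand MeV'' sort mentioned earlier.

    # Lorentz factor for a nucleus of given kinetic energy per nucleon,
    # and the correspondingly stretched half-life of 10Be.
    rest_energy = 931.5      # MeV per nucleon (one atomic mass unit)
    kinetic     = 2000.0     # assumed: 2000 MeV per nucleon

    gamma = 1.0 + kinetic / rest_energy
    half_life_rest = 1.6e6   # years, 10Be at rest (value quoted above)

    print(f"gamma = {gamma:.2f}")    # about 3.1
    print(f"observed half-life: {gamma * half_life_rest:.1e} years")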


Table 7.1: Isotopic Abundances of the Light Elements
Element   $A$   % Abundance     Process    $\epsilon$ (Cameron 1982)   $\epsilon$ (A&G 1989)
H           1   $\approx 100$              $2.7\cdot 10^{10}$          $2.8\cdot 10^{10}$
            2                   U?         $4.4\cdot 10^5$             $9.5\cdot 10^5$
He          3                   U?         $3.2\cdot 10^5$             $3.9\cdot 10^5$
            4   $\approx 100$   U,H        $1.8\cdot 10^9$             $2.7\cdot 10^9$
Li          6   7.5             $x$        4.4                         4.3
            7   92.5            $x$,H,U    55.6                        52.8
Be          9   100             $x$        1.2                         0.73
B          10   19.6            $x$        1.8                         4.22
           11   80.4            $x$        7.2                         17.0
C          12   98.9            He         $1.1\cdot 10^7$             $1.0\cdot 10^7$
           13   1.1             H          $1.2\cdot 10^5$             $1.1\cdot 10^5$

Table 7.1 gives abundances in the SAD according to two compilations. As of this writing, the 1989 compilation by Anders and Grevesse is still regarded as the definitive one by many workers. Edward Anders, now retired from the University of Chicago, has been one of the premier analytical cosmochemists of our time. Largely through his efforts, a particular kind of meteorite, known as a carbonaceous chondrite of type `CI,' is now accepted as the best source to use for the SAD. Nicholas Grevesse is a Belgian analytical spectroscopist who has specialized in abundance determinations from the solar spectrum.

The nuclear astrophysicist A. G. W. Cameron made numerous compilations of the SAD. If we look at his 1982 compilation of abundances for isotopes of Li, Be, and B, we see that one, $^7$Li, stands out conspicuously. By number, it is nearly eight times more abundant than its nearest rival, $^{11}$B. This could be understood if most of the $^7$Li was made in the Big Bang, and a much smaller fraction by cosmic rays. People had made detailed calculations of the relative amounts of Li, Be, and B that could be generated by cosmic rays, and these could, roughly speaking, accommodate all but the $^7$Li. The general picture of cosmological synthesis, plus spallation in the cosmic rays, held together nicely.

The present situation is a little less clear, largely because the newer compilation has increased the boron abundance to the point that it is difficult to account for all of $^{11}$B with the cosmic-ray mechanism. Even though the abundance of $^7$Li has not significantly changed, our understanding of its numerical value has problems of its own.


Stellar Lithium

The abundances of lithium in stars have been studied over the last several decades by astronomers, primarily in the US and France. Their determinations show that stellar abundances of lithium vary by many orders of magnitude, so that it is difficult to choose an observational value that we could associate with the Big Bang. Some stars are enriched in lithium by a factor of 10 or more with respect to the SAD. Clearly there must be some process that manufactures lithium, in addition to the cosmic rays.

If the enhanced lithium were due to cosmic ray production, even locally, near some ``super lithium-rich stars,'' there should also be enhanced beryllium and boron, and there is little evidence to indicate this. Also, the lithium isotopes should be produced in comparable abundances. Unfortunately, the stellar spectroscopic observations mostly give only the combined $^6$Li plus $^7$Li.

It has been known since the work of the American astronomer George Herbig in the 1960's that the abundance of lithium in stars like the sun declines as they get older. These stars have hydrodynamical processes that are capable of mixing material from their surfaces down toward, but not completely to, their centers. The temperatures, nevertheless, are readily shown to be high enough to destroy lithium. So our current understanding is that the stars slowly ``cook'' their original lithium.

Figure 7.3: Lithium Abundances in Metal-Poor Stars. Lithium abundances are plotted as a function of the iron abundances in old stars.

The French husband and wife team of Monique and Francois Spite have suggested that certain very old stars may indicate primordial lithium abundances--in the sense of Big Bang production. These workers have considered stars with steadily decreasing abundances of heavy elements like iron. We astronomers know that stars with very low abundances are also very old, so the Spites were studying stars that were both old and metal poor. What they found in their sample was that the lithium abundances declined, and then reached a plateau (figure 7.3).

This plateau has generally been interpreted as a floor set by cosmological nucleosynthesis. The lithium content on this plateau represents the values that emerged from the Big Bang. If we assume that this is true, we get good agreement with the conditions in the big bang that yield the currently observed number of protons and neutrons in the universe relative to the number of photons. We discussed how this works in Section 7.2.


Summary

$\rm B^2FH$ attributed the origin of lithium, beryllium, and boron to an unknown, or ``$x$-process.'' Spallation processes by cosmic rays may produce most of the SAD components of these elements. This mechanism cannot produce enough $^7$Li, but for this isotope there can be appreciable cosmological (Big Bang) production. The theory of the production of light elements in the Big Bang is highly developed. With an optimism that springs from ignorance, some may say that the conditions in the Big Bang are very simple, and well understood. This is certainly true of their models!

These models of the early universe make a direct connection between the amount of light elements that can be produced and the cosmic background radiation now observed. We can use ``cosmological'' abundances of the light elements along with the current radiation to predict the density of protons and neutrons in our present universe. Needless to say, these values are in good agreement with one another, but there is much more uncertainty than we would like to have. We need better methods to be sure we are really using cosmological abundances, and we sorely need more isotopic abundances for all of the light elements. In spite of these shortcomings, the coherence of the overall picture is one of the strongest indications that our universe began with a big bang.

Chapter 8: Interstellar Clouds--The Birthplace of Stars


The World of Bart J. Bok

When I first became interested in astronomy in the 1950's there were very few introductory books that a serious young person could use to get started. One of my teachers suggested a series called The Harvard Books on Astronomy. This was quite a good series of books covering most aspects of astronomy as it was known in the 1940's and 50's. Of these books, one of the best known was written by Bart Bok with his wife Priscilla. It was called, simply, The Milky Way. It first appeared in 1941, and went through five editions, the last one published in 1981. Bok died in 1983.

Bart Bok was one of the most popular of this century's astronomers. He was never an intellectual leader like Hoyle or Fowler, but his warmth and enthusiasm made him popular with astronomy students as well as the general public. He was born in The Netherlands, a country that has had more than its share of excellent astronomers. Though he came to the US in 1929, he always had a heavy Dutch accent. The accent added to his charm. His command of the English idiom was extensive, however, and his classes and audiences would delight in hearing some astronomical point clarified with the help of an American slang expression that would come out in a thick accent.

During his early days at Harvard, he once corrected a student about a historical point. Copernicus was not burned at the stake, as the student thought. He died comfortably in bed, having just seen his book that put the sun at the center of the universe. ``It must be nice to pop off like that," said Bok.

The earliest editions of The Milky Way were not very enthusiastic about the idea that stars were forming at the present time. Interestingly, Bok's first research efforts were devoted to finding The Distribution of the Stars in Space--a monograph that he wrote in the 1930's. This volume described how astronomers found out where the stars were, and how densely packed they were at various places in the Galaxy. Now, if there were a lot of dark, opaque material between the stars, it would obscure their spatial distribution. In these early days, Bok was optimistic that such obscuration was relatively minor. His opinion on this question changed dramatically as his career evolved, and eventually the focus of his research and writing was on interstellar material and the role that it played in star formation.

Why, in the 1940's, did Bok think there was little star formation going on in the Galaxy? First, I suppose, there was still a lingering notion that the matter between the stars was a nuisance. It prevented workers from finding the distribution of stars in space. Second, astronomers were just beginning to come to grips with the modern concepts of the nuclear fuel of stars and its effects on stellar lifetimes. There is a fascinating chapter in the second edition of Bok and Bok entitled ``How Old is the Milky Way?''. In it, the Boks argue at one point for an age of 10 billion years--not far from our present estimates. But later, they seem to fix on an age of about 3 billion years, which is about what was obtained from measurements of the expanding universe at that time. The Boks thought most of the Milky-Way stars were born at about the same time, about 3 billion years back.

Strangely, they present arguments that show this could not possibly be true! Astronomers have known since the early decades of the present century that there exist stars whose brightnesses exceed that of the sun by many orders of magnitude. Bok cites a figure of 10,000 for the ratio of the luminous output of some of these stars to that of the sun. Given that the available fuel of these stars could be no more than 100 times that available to the sun, we would expect their lifetimes to be at least 100 times shorter than the solar lifetime.

Even if these supergiant stars were to use all of their available hydrogen, a calculation made by the Boks shows the most luminous of them could burn that fuel in no more than a few hundred million years. In the 1941 edition of The Milky Way, the Boks argue that this calculation must show that all stars in the Galaxy are very young. Somehow, they still managed to conclude the Galaxy was about three billion years old.

In a special chapter written for the 1945 edition of their book, the Boks called attention to work done by the Princeton astronomer Lyman Spitzer in collaboration with Bok's Harvard colleague Fred Whipple. These workers focused their attention on interstellar matter, and concluded that there was ample material in the plane of the Galaxy to account for the formation of stars. This idea became a part of Bart Bok's general outlook, and he spent much of his later career searching for and investigating regions in the Milky Way where stars were being born.

If a modern astronomer were to play a word association game, the name Bok might immediately suggest the word ``globule.'' These are dark regions, often seen sharply silhouetted against a field of bright gas. Typical sizes are on the order of 0.01 to a few tenths of a parsec, and they may contain some tens of solar masses of material. The later editions of The Milky Way suggested these were protostars--stars in the process of formation. Today, it is probably fair to say that globules may not be accepted specifically as protostars, but it is generally believed they are intimately related to the process of star formation. They may be the birthplaces of groups of stars rather than of an individual star.

Figure 8.1: Bok Globules. These are the dark areas silhouetted against the bright nebulosity called IC 2944 by astronomers. It is found in the southern constellation of Centaurus. Figure courtesy of the Anglo-Australian Telescope.

Bok's biographer entitled his book The Man Who Sold the Milky Way. Certainly, his zeal was passed on to his students and young associates. We know a great deal more about our Galaxy and the stars that are in it because of Bart J. Bok.


Giant Molecular Clouds

Serious study of the interstellar medium became possible only after the full electromagnetic spectrum became available to astronomers. Good progress with many aspects of stellar astronomy was possible before that time because the sun is a typical star, and most of its light comes through the visible atmospheric window. In the case of hotter and cooler stars, astronomers could make allowances for the radiation that did not get through the atmosphere.

Important studies of the interstellar medium were made, of course, by ground-based photography. Ingenious analytical tools were developed for the study of emission nebulosities in the 1920's and 1930's, and the names of many famous astronomers are associated with that work: Menzel, Baker, Aller, Goldberg, Spitzer, Zanstra, to name just a few. However, most of the mass of interstellar material in the plane of our Galaxy is relatively cold, and its detailed study began only after the radio window became available. This window is a region of the electromagnetic spectrum used by radio telescopes. In the 1950's, extensive mapping of neutral hydrogen gas began, using the radiation emitted at 21 centimeters when an electron reverses its direction of spin.

After some decades of study, microwave techniques became sufficiently refined that they could be applied to astronomical investigations. Microwaves are electromagnetic waves shorter than radio waves, but longer than those of infrared and optical light. The author was a graduate student at the University of Michigan in the late 1950's when Alan Barrett predicted the possibility of detecting lines from interstellar molecules. He presented his work at a symposium called The Next Five Years of Radio Astronomy, and at the time, his paper seemed speculative. However, within five years, Barrett had moved to MIT, and made the first detection of the interstellar OH molecule. Soon thereafter, Nobel Laureate Charles Townes and his coworkers had discovered interstellar ammonia ($\rm NH_3$) and water ($\rm H_2O$).

Real progress came after 1970 when the CO molecule could be studied. Soon after its identification in the great nebula of the constellation Orion, it became clear that the plane of the Galaxy contains numerous Giant Molecular Clouds, or GMC's. These clouds are huge objects, with masses $10^4$ to $10^6$ times that of the sun. The large ones may be a few tens of parsecs in diameter. About half of the mass of the disk of our Galaxy is in the form of these clouds, and the other half is in the form of stars. In the halo, or the galactic center, less of the mass is in this non-stellar form.

Most of the mass of the GMC's is not CO at all, of course, but molecular hydrogen, $\rm H_2$. This molecule is very difficult to observe at radio or microwave wavelengths, however, because it is formed from two identical atoms. Physicists call such molecules homonuclear.

In order for an interstellar molecule to interact with radiation, to absorb or emit photons, it must have some net separation of electrical charge. In a molecule like CO, the electrons tend to spend a little more of the time near the carbon atom than the oxygen. The molecule is said to be permanently polarized, that is, to have a negative pole toward the carbon and a positive pole toward the oxygen. It is this net separation of charge, this polarization, that makes it possible for the CO molecule to emit and absorb photons. All radiation originates in the motion of electrical charges. Homonuclear molecules have no permanent charge separation because there is no reason for the charge to prefer one of the identical atoms to the other.

Astronomers had observed lines from molecular hydrogen with the help of the Copernicus satellite, launched in 1972 under the direction of the Princeton astronomer Lyman Spitzer--the same Spitzer whose early work directed Bok's attention to regions of the Galaxy where stars might be forming. Molecular hydrogen will interact strongly with radiation if the photons are sufficiently energetic to cause excited states of the electrons. In this case, the photons see one of the electrons as an individual charge which can be set into motion, and the symmetry of the molecule is no longer relevant.

Observations from the Copernicus satellite only sampled the peripheries of the GMC's. That is because these clouds are also the location of relatively large quantities of interstellar dust. It is almost certain that the clouds could not exist as they are now, without this dust. We are not too sure about the composition of this dust or where it came from, but it is critical for the existence of a molecular cloud.

The ultraviolet radiation studied by the Princeton group is blocked by the dust of the GMC's. This same radiation would destroy molecules like CO and many other species that have been found within these clouds. A CO molecule would hardly last more than a few thousand years if exposed to ultraviolet radiation that is ``typical'' for our region of the Galaxy. This is true for many of the impressive variety of molecular species identified in these clouds. Table 8.1 lists those found through 1992, according to William Irvine, an astronomer at the University of Massachusetts.

Table 8.1: Interstellar Molecules. After Irvine (1991, 1992).
Simple hydrides, oxides, sulfides, amides, and related molecules
H$_2 $ CO NaCl$^* $ CC
HCl OCS AlCl$^* $ CS
PN SO$_2 $ KCl$^* $ SiS
H$_2 $O SiO AlF$^* $ SiH$_4^* $
H$_2 $S NH$_3 $ CH$_4 $ HNO
Nitriles, acetylene derivatives, and related molecules
CCC$^* $ HNC H$_3 $CNC HN=C=O
CCCCC$^* $ HCN H$_3 $CCN H$_3 $C$-$CH$_2-$CN
  HC$\equiv$C$-$CN H$_3 $C$-$C$\equiv$C$-$CN  
  H$_2 $C$=$CH$-$CN    
CCCO H(C$\equiv$C)$_2-$CN H$_3 $C$-$C$\equiv$CH  
  HN$=$C$=$S    
CCCS H(C$\equiv$C)$_3-$CN H$_3 $C$-$(C$\equiv$C)$_2-$H C$_4 $Si$^* $
HC$\equiv$CH H(C$\equiv$C)$_4-$CN H$_3 $C$-$(C$\equiv$C)$_2 $CN?  
HC$\equiv$CCHO H(C$\equiv$C)$_5-$CN H$_2 $C$=$CH$_2^*$  
H$_2 $C$=$C$=$C H$_2 $C$=$C$=$C$=$C HCCNC CCCNH
Aldehydes, alcohols, ethers, ketones, amides, and related molecules
H$_2 $C$=$O H$_3 $COH HO$-$CH$=$O H$_2 $CNH
H$_2 $C$=$S H$_3 $C$-$CH$_2-$OH H$_3 $C$-$O$-$CH$=$O H$_3 $CNH$_2 $
H$_3 $C$-$CH$=$O H$_3 $CSH H$_3 $C$-$O$-$CH$_3 $ H$_2 $NCN
NH$_2-$CH$=$O H$_2 $C$=$C$=$O (CH$_3 $)$_2 $CO?  
Cyclic molecules
C$_3 $H$_2 $ C$_3 $H(cyclic) SiC$_2^*$  
Ions
CH$^+$ HCO$^+$ HCNH$^+$ H$_3 $O$^+$
HN$_2^+$ HOCO$^+$ SO$^+? $ HOC$^+? $
  HCS$^+$   H$_2 $D$^+? $
Radicals
OH C$_3 $H(linear) CN HCO
CH C$_4 $H C$_3 $N NO
C$_2 $H C$_5$H H$_2 $CCN SO
CH$_2?$ C$_6$H C$_2 $S NS
SiC$^* $ HCCN$^* $ CP$^* $ NH
SiN$^* $ C$_2 $O    

In addition to shielding the molecules from ultraviolet radiation, astronomers think that the dust plays an important role in forming the dominant $\rm H_2$ molecules. Interestingly, $\rm H_2$ is thought to form on dust particles--the dust acts as an important catalyst for the reaction $\rm H + H \longrightarrow H_2$. Many of the other molecules can be formed by reactions in the gaseous phase, but the most common molecule requires dust. What is the nature of this dust, and where did it come from?


Dust

We know a lot about interstellar dust--and we don't understand everything we know. We would like to know what the dust grains are made of and how they are made. Strangely, it seems there is more agreement about how they are made than about what they are made of. So let's start with that.

Almost all models of dust formation begin with the extended envelopes of stars in late stages of their evolution. It is rarely suggested that dust might form in the cores of GMC's. The reason traditionally given for this is that the densities in these cores, or clumps, are not high enough for the dust to form. This may well be true for those regions where the densities have been sampled, but as we shall see, there is every reason to think that stars are being formed in these cores, and this means the densities must reach much higher values than those cited by the pundits as being too low.

The extended atmospheres of very cool stars can have temperatures well below 2000 Kelvin, and at these temperatures solids will begin to form out of the gaseous phase. The process is the opposite of sublimation, the direct passage from solid to gas that happens to dry ice, solid CO$_2 $.

It is possible to detect solid materials in the neighborhood of stars in a variety of ways. One of the most common is simply to look at the radiation that is emitted from the star. A star that is surrounded by a dust envelope will emit much more radiation in the infrared than a similar object without the surrounding dust. This is because the dust absorbs the radiation that is coming from the star and then re-radiates it. The radiation that comes from the dust has much more of its energy in the infrared than the radiation coming from the star. The cooler an object, the more it will radiate at longer (e.g. infrared) wavelengths, and the dust is cooler than the stars.
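
Wien's displacement law makes this quantitative: the wavelength of peak emission, in microns, is about 2898 divided by the temperature in Kelvin. Here is a sketch comparing a cool star with its dust; the temperatures are assumed, representative values.

    # Wien's displacement law: peak wavelength (microns) = 2898 / T (K).
    def peak_wavelength_microns(T):
        return 2898.0 / T

    for label, T in (("cool star photosphere", 2500.0),     # assumed
                     ("warm circumstellar dust", 300.0),    # values
                     ("cold interstellar dust", 30.0)):
        print(f"{label} ({T:.0f} K): peaks near "
              f"{peak_wavelength_microns(T):.0f} microns")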

Another way to investigate the dust is actually to look at the shape of the spectrum for features that will identify the nature of the solids. This method is similar to that used to identify atomic absorbers in a stellar or solar spectrum (cf. figure 4-2). The main difference is that in the infrared, the features are not so numerous, and they tend to be broad. Workers in this field are generally more confident identifying what are called functional groups of atoms than specific materials. These groups can occur in a variety of molecules, but they give rise to characteristic absorption or emission features in the infrared. Typical examples of such groups are Si--O or O--H. In both cases, stretching of the molecular bond causes absorption or emission over specific ranges of infrared wavelengths.

Astronomers have known for decades that a critical factor in the determination of the chemistry of cool stars was the ratio of the abundance of the element carbon to that of oxygen. This is because the CO molecule is very tightly bound. Therefore, if there is more C than O, most of the oxygen will go to form CO, leaving some C to form compounds such as $\rm C_2$. It is more common for there to be more O than C, and when this is the case, most of the C is tied up in CO, and there is O left over.

Most stars have C/O ratios of about 0.4, like the sun, and the spectra of the cooler objects show very strong absorptions due to the TiO molecule. The much rarer carbon stars show $\rm C_2$ features, and very strong CN and CH. The infrared spectra of these two different kinds of stars show the characteristic features one might expect. For example, SiO features are seen in the spectra of oxygen-rich stars, while SiC has been identified in carbon-star spectra.

Once the dust is made by the stars, it may be dispersed to the general interstellar medium of the Galaxy. Observations show that it is very strongly concentrated toward the galactic plane, and it is a ubiquitous feature in the planes of all spiral galaxies (cf. figure 3-2). It has been known for decades that interstellar dust dims and reddens the light from distant stars. It also causes a small fraction of starlight to be polarized.

From the way that starlight interacts with the grains it is possible to say something about typical grain sizes. Most are about a tenth of a micron--$10^{-5}$ cm in radius--pretty small, but very effective in dimming starlight.

We can tell from the spectra of certain stars what the composition is of the gas between us and the stars. The analytical spectroscopy of interstellar matter has a long history. It was recognized in the first decade of the twentieth century that certain lines in the spectra of double stars behaved strangely. Since double stars orbit one another, their spectra show radial velocity (Doppler) shifts that oscillate, reflecting the orbital motion. In a 1904 paper in the Astrophysical Journal, the German astronomer Hartmann called attention to lines of ionized calcium whose wavelengths did not change along with the lines from the star in Orion he was observing. He properly attributed this absorption to an interstellar cloud between the earth and this star.

In the 1930's it became known that certain interstellar lines were due to diatomic molecules, such as CH and CN. The ionized molecule CH$^+$ was also identified. The Nobel laureate Gerhard Herzberg is a molecular physicist who has had a continuing interest in astrophysical problems. Many astronomers have learned both atomic and molecular physics from Herzberg's monographs. These books are unbelievable in the amount of detail that they present. In spite of the complexity of the subjects discussed in these volumes, they have remained the authoritative sources for four or more decades. Herzberg discussed interstellar molecules in the 1950 edition of his book on diatomic molecules, and on page 496, he mentions that the excitation temperature of the CN molecule is 2.3 Kelvin. This temperature is surely due to the cosmic microwave radiation, discussed in Section 7.2. In some sense, Herzberg had confirmed Gamow's prediction of a background radiation. The value that he found was better than Gamow's, which ranged from 5 to 28 Kelvin, depending on very uncertain assumptions about the present age of the universe. The current value of the temperature is securely fixed at 2.7 Kelvin.

The modern era of analytical work on interstellar matter was begun by Spitzer and his colleagues at Princeton, with the help of the Copernicus satellite. Their work showed that the gas was depleted in most elements heavier than hydrogen and helium. They soon concluded that the missing elements must be in dust grains. It would be truly surprising if the interstellar gas were really metal poor. We can observe and determine the compositions of very young stars that have, so to speak, ``just'' formed from the interstellar gas. As far as we can tell, their compositions are very similar to the SAD. How could such stars form from a metal-poor gas? Surely the young stars form from gas and dust, and the dust is vaporized and mixed inside the star. The spectrum therefore shows a normal composition.

The chemical composition of the dust grains is presently uncertain, but there are many ideas about what it might be. The depletions of the interstellar gas tell us that the grains must be made of common, abundant elements. Ideas therefore focus on some form of solid carbon and/or silicate minerals. There is evidence for these chemicals in circumstellar envelopes (as opposed to interstellar gas), so probably they are also a part of the dust in clouds and in the general interstellar field. Textbooks on astronomy describe a model in which carbon or silicate grain cores have icy coatings, or mantles.

A great deal of effort has gone into the interpretation of what astronomers call the ultraviolet extinction curve (figure 8.2). This is a graph of the relative amount of starlight that is dimmed in interstellar space, mostly by dust grains. The extinction is plotted against the reciprocal of the wavelength of the light. Wavelengths (in Angstrom units) are indicated across the top of part (b) of the figure. Basically, the shorter the wavelength, the greater the extinction or dimming. However, there is a very prominent hump near 2200 Angstroms. Calculations and experimental data suggest this hump may be due to some form of solid carbon, for example, graphite, or soot.

Figure 8.2: The Interstellar Extinction Curve. The ``typical'' galactic extinction curve is shown as a solid line. The hump near 2200 Angstroms resembles absorption by graphite in laboratory measurements, but the interstellar feature could be due to some other substance or a mixture with graphite. Not all interstellar absorption shows this hump so strongly. Additional curves are shown for the Large (LMC) and Small Magellanic Clouds (SMC) as well as for the direction to a star named 30 Doradus.

New information on interstellar grains has come from the analysis of very small fragments of meteorites that are thought to have originated in interstellar space, but never melted and mixed with the general material of the solar nebula. Cosmochemists call these presolar grains, because they probably solidified before the formation of the solar nebula. The most characteristic signature of presolar grains is an isotopic composition very different from that of the SAD. One class of clearly identified presolar grains is found in carbonaceous meteorites, and this supports the notion that carbon in some form is an important constituent of interstellar dust.

Would dust grains last forever? The answer to this is surely ``no,'' but it isn't easy to give the kind of estimate that we can for a CO or $\rm H_2$ molecule. Grains appear to be pretty tough. They can be found in regions where the interstellar gas is hot ($\approx 10,000$ K) as well as in the cool clouds. There is some very hot gas, though, that has been heated by shock waves from stellar explosions. In this gas, the heavy-element content is more nearly normal, and a reasonable interpretation is that the grains have been evaporated.


Cloud Formation and Dissipation

The Soviet astronomer I. S. Shklovsky was a powerful intellect. He flourished during the time of the Cold War, and died just about at its end. Nevertheless, he was very well known to Western astronomers because he wrote important books that were translated into English. One of his later books was called Stars: Their Birth, Life, and Death. The English edition appeared in 1978, when interest in interstellar molecular chemistry was rapidly accelerating.

Shklovsky's training was in radio astronomy. In the 1950's he had written a fine monograph entitled Cosmic Radio Waves. At that time, and for many years thereafter, the only star seriously studied by radio techniques was the sun. In the mid 1960's Shklovsky had written a book on the solar corona, and the Russian edition of his book on stars was published about 10 years after that. Shklovsky showed that he had mastered the literature and the vocabulary of stellar astronomy, not something that is easy, especially for one no longer young. But Shklovsky was exceptional. In addition to his very competent writing about stars, he brought interesting new ideas.

Shklovsky thought GMC's might be made with the help of interstellar magnetic fields. There is ample evidence for such fields. They are quite weak, but they produce measurable effects, and have been extensively investigated. The field lines lie primarily in the plane of the Galaxy, mostly running parallel to the spiral arms. Shklovsky's idea was that the interstellar material is sufficiently charged to ``cling'' to the field lines (figure 8.3). Then, if there were a little accumulation of matter at some point, the gravity of the central plane of the Galaxy would pull the matter toward it, bending the field lines even more. As more material accumulates, the gravitational force grows stronger, pulling still more matter into the forming cloud.

Figure 8.3: Shklovsky's Notion of the Formation of GMC's. In this figure, the central plane of the Galaxy represents the bottom of the potential well that the dust is, so to speak, sliding down. The mirror image of what is shown could be happening from ``below.''

Shklovsky typifies the kind of intellect that adds spice to science. He was like Hoyle in that he was full of ideas, some of which were brilliant and enormously useful, while others were never taken seriously. It is interesting to contrast his approach to science with that of the Princeton astronomer Lyman Spitzer. Spitzer has always been at the forefront, but never over the edge. No one ever accused Spitzer of anything but competence and reliability, and those who wish to avoid being wrong about some astronomical point could do no better than to follow his lead. We are fortunate to have both kinds of minds at work on astronomical problems.

Spitzer has discussed a simple model for the formation of GMC's through the constructive coalescence of smaller clouds. His idea is that small clouds--possibly even some formed by Shklovsky's mechanism--would collide with one another. Some of these collisions would be destructive and others constructive. The constructive collisions would lead to larger clouds, and in this way a population of GMC's might be built up.

A great deal has been written on the subject of cloud formation, and we cannot do justice to the ideas of many of the workers in this burgeoning field. Modern workers do not stress the role of the magnetic field suggested by Shklovsky. They do emphasize the importance of self-gravitation as a cloud grows in size. The more massive the cloud, the more strongly it pulls in still more material.

What role might be played by dust? Molecular hydrogen, $\rm H_2$, will last only a short time in the general field between the stars. Its efficient formation requires dust as a catalyst. So the clouds won't be molecular in nature without dust. Should there be models that start with an accumulation of dust?

Without question, these GMC's are the site of star formation. The astronomer is now armed with impressive ways to probe these complex locations. At visual wavelengths, the clouds are essentially opaque, but in the infrared as well as with radio and microwave methods, it is possible to look inside these clouds to see very young stars as well as stars in the process of formation.

There is still a big gap in our understanding of how gas clouds form into stars. We are sure that it happens, but we can't yet make convincing models of what happens. Rather straightforward laws of physics tell us that as these clouds contract, they must spin more and more rapidly. We know that somehow this spin, that is, angular momentum, must be shed by the star. At the present time, we are just not sure how. Observations tell us that these young stars are surrounded by disk-like structures, not unlike our picture of the solar nebula. Probably much of the star's angular momentum gets transferred to that disk, but we don't know what the mechanism is.

Figure 8.4: Protoplanetary Disks Observed by the Hubble Space Telescope. These disk-like features were observed in the Orion Nebula. Each of the images is about 1700 AU across, so the disks are several hundred AU across, intermediate in size between the Kuiper belt and the Oort comet cloud. Courtesy of M. J. McCaughrean, C. R. O'Dell, and NASA.

There is another aspect of the formation of young stars that is not at all understood. At some point, as the disk is forming about a young object, jets form, as though expelled from the rotational poles (axes) of the star. It would be very useful if we could show that these jets carry off the star's angular momentum, but so far we don't know how this could be. The jets seem to be an important aspect of the formation of a new star, but we don't understand their function.

Once massive young stars begin to shine, they blow away the gas from their birth sites. All massive young stars have violent winds. If they rapidly run through their nuclear fuel and explode, they remove the gas even more quickly. The gas--and dust--return to the general interstellar medium, at least partially enriched in chemical elements synthesized in the very stars that caused the death of the cloud.


Summary

Stars are born in interstellar clouds. We can observe these clouds using the radiation emitted by interstellar molecules. The molecules are shielded from the general background radiation of the galaxy by dust. We are not sure of the composition of this dust, but much of it may be some form of carbon, probably graphite. Dust is thought to be formed in the atmospheres of very cool stars. There is something of a chicken and egg problem, in that dust is required both to shield the molecules from ultraviolet radiation that would break them up and to form the most common molecule, $\rm H_2$. Yet some of the cool stars formed in these clouds may be the ideal sites for the formation of dust. Massive, hot stars may evolve and dissipate the clouds, either by their strong winds, their radiant energy, or by stellar explosions. When the clouds are dissipated, much of their material is returned to the general interstellar medium in the form of gas and dust.

Chapter 9: The Early Solar Nebula


Thermodynamics and Chemical Equilibrium

One of the triumphs of nineteenth century science was the discipline of thermodynamics. The name of the discipline indicates that it has something to do with heat, or energy. One may take a broader view, and include such diverse matters as information, but we shall not do so here. Instead of attempting a general description of thermodynamics, we shall describe some specific situations that illustrate its principles. Ultimately, thermodynamics rests on laws, that is, postulates, which we may justify, but not prove in a fundamental way.

Suppose you had two large boxes of gas. We might suppose the gas is inert, like helium or argon. It is simplest if we think of an ideal gas, a material that does not exist, but is approximated for many purposes by inert gases. Let us suppose that both contain the same number of molecules but that one of the containers is hotter than the other. The pressure would then be higher in the hotter container, too. We could put the boxes in contact with one another, and open a door between them. Molecules from the hot box would stream into the cold one. We could put a paddle wheel in the stream, and use it to generate electricity, or wind up a weight, or do a variety of other things.

Eventually, the gas in both boxes would reach the same temperature, pressure, and number of molecules. At this point, there would be no way we could use a paddle wheel to do any of the things we mentioned above, nor any other useful work. Although there would still be lots of energy in the boxes once they had reached equilibrium, we couldn't use it to do any work.

The system with the two boxes at different temperatures has the ability to do work; after the temperatures have equalized, this capacity is no longer present. The German physicist Rudolf Clausius (1822-1888) proposed that the inability of a system to do work could be considered a property of the system. If we picture our system as a box of gas, then familiar properties would be temperature, volume, and density. Clausius's new property was given the name entropy, according to the popular science writer Isaac Asimov, ``...for no clear etymological reason''!

Entropy is measured in a funny way that makes sense only after some study. Since it measures the inability to do work, you might expect it to be zero for the two-box gaseous system after it has come to temperature equilibrium. But it turns out that entropy is measured in such a way that it reaches a maximum at that temperature equilibrium. So we measure the ability of a system to do work by considering how far its entropy is from its maximum value.

Ludwig Boltzmann (1844-1906) clarified the notion of entropy by postulating that it was a measure of the probability of the system. Consider our two boxes at different temperatures just after we open the door between them. According to Boltzmann's ideas, the configuration with a high temperature ($T_h$) in one box and a low one ($T_l$) in the other would not be very probable. Therefore, the initial entropy of the system would be relatively low. Then the system would adjust until the gas in both boxes had the same temperature. Boltzmann was able to show from the kinetic theory of gases that the state with the temperatures equal was much more probable than the original one. Given the way entropy is defined, this means the entropy had increased.
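
Boltzmann's postulate can be written compactly. If $W$ is the number of microscopic arrangements of molecular positions and velocities consistent with what we observe of the system, then the entropy is

\begin{displaymath}
S = k \log W ,
\end{displaymath}

where $k$ is a constant now named for Boltzmann. A more probable state is one with more microscopic arrangements, a larger $W$, and therefore a higher entropy.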


The Laws of Thermodynamics

We are now in a position to state the three laws of thermodynamics. Laws one and three turn out to be much simpler to state than the second law, sometimes known as the law of the increase of entropy. Most of this section will be spent on it.

The first law is sometimes called the conservation of energy. We can state it in a useful way with the help of a specific example. Suppose we dump some energy inside a balloon that is filled with an ideal gas. The first law then states that the energy of the gas will increase, but some of the energy must go into expanding the balloon. Therefore, at the end of the energy transfer, the gas will not be as hot as it would be if, for example, the gas were in an insulated box with a fixed volume.
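
In symbols, the balloon example looks like this. If $Q$ is the heat we dump in, and $W$ is the work the gas does in pushing the balloon outward against the surrounding pressure, the first law requires

\begin{displaymath}
\Delta E = Q - W ,
\end{displaymath}

where $\Delta E$ is the change in the energy of the gas. In the insulated, fixed-volume box, $W$ is zero and all of $Q$ goes into heating the gas; in the balloon, part of $Q$ is spent on the expansion, so the gas ends up cooler.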

Whenever energy is converted from one form to another, such as from heat to work, the first law just says we must be careful to consider all of the possible forms.

The second law deals with entropy. It says that all naturally occurring processes cause the entropy of a (closed) system to increase. Consider our two boxes of gas. Just before we open the door, there is a value for the entropy of the box with temperature $T_h$, and one for the box with temperature $T_l$. It turns out that the entropy of the hotter box is higher than that of the cooler box, but we don't need to worry about that here. The second law tells us that after we open the door, and the temperature equalizes, the entropy of the two boxes is greater than the sum of the initial entropies.
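
A simple, idealized bit of bookkeeping shows why. Suppose, just after the door is opened, a small amount of heat $q$ passes from the hot box to the cold one. The hot box loses entropy $q/T_h$, while the cold box gains $q/T_l$, so the net change is

\begin{displaymath}
\Delta S = \frac{q}{T_l} - \frac{q}{T_h} > 0 ,
\end{displaymath}

which is positive because $T_h$ is larger than $T_l$. Every transfer of heat from hot to cold increases the total entropy, until the temperatures are equal and no further increase is possible.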

My introduction to thermodynamics was in 1953, when I took a course in physical chemistry at the University of Virginia. My lab partner said the second law was easy--it just said water ran downhill. While this is a considerable simplification, it is also very useful. We need to use the concept of potential energy, which we discussed in Section 4.8. Water that is uphill has a potential energy that is converted into kinetic energy as it flows downhill. Since this is a natural process, the second law implies that ``downhill'' water is more probable than ``uphill'' water. This is entirely in line with what we would expect.

Suppose we placed a droplet of water at the lip of a bowl. We would expect it to slide to the bottom, perhaps sloshing a little way up the far side, but to end up right at the bottom. If there were no friction between the droplet and the bowl, the droplet would oscillate forever (assuming it didn't evaporate). We know that friction is a part of the real world, so the water would end up at the bottom of the bowl. The potential energy the drop had at the top of the bowl would be converted into heat, and the bowl would be just a little hotter than before the drop did its thing.

According to the second law, the entropy of drop + bowl would be higher at the end of this process than at the beginning. It shouldn't be too difficult to convince oneself that of all the possible things that might happen to a drop of water at the lip of a bowl, the most probable is that the water flows downhill--as my lab partner said.

As far as we know, there is no violation of any law of the elementary interaction of particles that says heat might be extracted from the molecules in the bowl and deposited in a small puddle of water in just such a way that the puddle would climb up the side of the bowl! Common sense tells us that this would not happen. Thermodynamics tells us something a little different. It says that it is over-, over-, over-, overwhelmingly more probable that the drop will settle at the bottom of the bowl than that it would spontaneously extract the kind of energy from the bowl to do the opposite thing.

Very simple physical systems behave in a way that doesn't depend on the direction of time. Consider the planets as just points, moving around a featureless, smooth sun. This is a simple system. We could make a movie of the system, and it would look natural whether we ran the film forward or backward. When there are many particles interacting in a complex way, we could tell immediately whether the film was being run forward or backward.

Consider the water drop and the bowl. The number of molecules involved in the drop and the bowl runs to many powers of ten. There would be about $2 \times 10^{19}$ water molecules in a drop one millimeter in diameter. For matter involving large numbers of particles like this, time does have a unique direction. Time goes forward in such a way that the entropy of an isolated system will increase. Technically, this direction for time has only statistical validity. Practically, it is overwhelmingly more probable that the entropy of a complex isolated system will increase than that it will decrease. There is only one exception: when the entropy has reached its maximum value, it can increase no further. We couldn't use an isolated system that had reached its maximum entropy to tell time. This does not pose a problem for the world we live in.
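
The figure of $2 \times 10^{19}$ molecules is easy to check. A drop one millimeter in diameter has a volume of

\begin{displaymath}
V = \frac{4}{3}\pi\, (0.05\ {\rm cm})^3 \approx 5 \times 10^{-4}\ {\rm cm^3} ,
\end{displaymath}

or about $5 \times 10^{-4}$ grams of water. Dividing by 18 grams per mole and multiplying by Avogadro's number, $6 \times 10^{23}$, gives roughly $2 \times 10^{19}$ molecules.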

Given our definition of entropy as a measure of probability, we now briefly state the third law. At absolute zero, the entropy of pure crystalline substances is zero. This postulate allows chemists to make calculations of the entropy of substances from thermochemical measurements. With the zero point defined by the third law, such entropies are called absolute entropies. Another way to calculate them for simple systems is with the help of Boltzmann's definition of entropy in terms of probability. In practice, absolute entropies are often not needed because we can tell the ``downhill'' direction from changes in entropy.


The Gibbs Energy and the Direction of Chemical Reactions

In astronomy, we often have a temperature and pressure that are fixed by our model, and we want to know the relative amounts of chemicals that could be present. Chemists often have a similar problem. They mix two chemicals at the ambient temperature of their laboratory, and they want to know if the chemicals will react. Under these circumstances, it turns out to be more convenient to look at another thermodynamic property of a system known as the Gibbs energy.

The Gibbs energy is related to the entropy, but there is a negative sign. Consider a system with energy ($E$), volume ($V$), and entropy ($S$). Let us suppose the pressure ($P$) and temperature ($T$) have fixed values. Then the Gibbs energy ($G$) may be defined as $G = E + PV - TS$. Since we are trying to avoid equations in this book, we shall only point out that the minus sign leads us to expect the Gibbs energy to decrease for spontaneous processes, since the entropy must increase by the second law.

The energy and $PV$ terms in the definition of $G$ make it simpler to use than the entropy when a system must do work against a constant pressure. We could show this explicitly with a few equations; instead, we will try to make it plausible below.

Consider a chemical reaction to form the simple diatomic radical CN. It is convenient to think of the two atoms, as well as the molecule, as being in the gas phase (g).


\begin{displaymath}\rm C(g) + N(g) \longrightarrow CN(g).\end{displaymath}

In order to see if the reaction will proceed at a fixed temperature and pressure, we calculate the Gibbs energies for the substances on both sides of the arrow. We would do this for more complicated chemical equations too. If the Gibbs energy of the products (on the right) is lower than the Gibbs energy of the reactants (on the left), then the second law tells us the reaction will go from left to right as long as the temperature and pressure remain fixed.
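
For our CN example, the criterion can be written out explicitly. The reaction proceeds to the right, at fixed temperature and pressure, when

\begin{displaymath}
\Delta G = G({\rm CN}) - G({\rm C}) - G({\rm N}) < 0 .
\end{displaymath}

If $\Delta G$ is positive, the reaction runs the other way, and the molecule dissociates; if $\Delta G$ is zero, the two directions balance, the condition of chemical equilibrium we take up below.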

On an atomic level, it is useful to look at the formation of this molecule with the help of a potential curve, similar to the ones we have used in looking at the behavior of nucleons (figure 9.1).

Figure 9.1: Schematic Potential Curve for the Diatomic Molecule CN. The vertical axis gives the potential energy as a function of the relative separation of the two atoms. The minimum of the curve shows the most probable separation of the atoms in a bound molecule, at low temperatures.

If we think of the potential curve of figure 9.1 as representing a classical system, then a marble placed on the curve would roll down the hill toward the minimum. We might think of the minimum as the most stable position, as we did for the water drop in the bowl. But the CN molecule is a very simple system, and as yet, we have no analogue of friction. So the marble would roll through the minimum, on toward even smaller separation, come to a halt at the same vertical position it started with, and roll back the way it came.

For the CN molecule to form, something must remove the relative energy with which the two atoms approached one another. This energy could be removed by the emission of a photon, or by a collision with a third atom, which could take up the excess energy. Given that these possibilities exist, we ask whether for a given temperature and pressure the CN would form. The answer depends on the values of the temperature and pressure.

Intuition tells us that if the temperature is high, the molecule is more likely to dissociate than form. The pressure is also relevant. In elementary chemistry we learned a useful rule known as Le Chatelier's principle, closely related to the law of mass action. That rule says that if you stress a system in equilibrium, the system will shift so as to remove the stress. For example, if you increased the pressure of either C or N, the system would try to form the CN molecule to decrease the pressure. Conversely, if the pressure of C and/or N decreased, it would be more likely that the CN would dissociate.

There is another way to look at the effect of pressure. Pressure depends on both the temperature and the number density, or number of particles per unit volume. So if we fix the temperature, the pressure will increase or decrease directly with the number density. Clearly at high number densities, the atoms of C and N will collide more frequently with one another, and have the opportunity to form CN. This is what we also concluded from Le Chatelier's principle.
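
This is where the law of mass action proper enters. For the reaction $\rm C + N \Longleftrightarrow CN$ in equilibrium, the partial pressures must satisfy

\begin{displaymath}
\frac{p({\rm CN})}{p({\rm C})\, p({\rm N})} = K(T) ,
\end{displaymath}

where $K(T)$ is the equilibrium constant, a function of temperature alone. If we raise the pressures of C and N at a fixed temperature, the denominator grows, and more CN must form to restore the ratio, just as Le Chatelier's principle predicts.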

If CN dissociates, it forms two atoms, so there are two particles where previously there was only one. At a fixed temperature, which we assume, the two atoms supply exactly twice the pressure of the molecule. Since we also assume a constant pressure, the increase due to the extra particle must be removed in some way, and that is done by an expansion of the gas. This is equivalent to doing work against pressure. When CN dissociates, we need to consider this extra work in addition to the energy it takes to roll the marble up the hill in figure 9.1. It is this extra work that is properly figured into the Gibbs energy for constant temperature and pressure processes. Such conditions usually turn out to be of primary interest, both in laboratory chemistry and in astronomical applications.


Chemical Equilibrium and Condensation

Suppose the difference in the Gibbs energies of the two sides of a chemical equation is zero. The reaction can then proceed in either direction with equal probability. The influence of pressure that we discussed in the previous section now plays a key role. With the pressure fixed, it turns out to be possible to calculate the relative amounts of the reactants and products in a chemical equation.

Let us consider an especially simple reaction, where iron in the gas condenses, that is, becomes solid iron. We may write


\begin{displaymath}\rm Fe(g) \longrightarrow Fe(s),\end{displaymath}

where the `g' and `s' stand for gas and solid. The `reaction' is really only a phase change, but the laws of thermodynamics still apply. In this case, the pressure of gaseous iron is a function of the temperature only. Note that this is what the chemists call the partial pressure of iron, that is, the pressure that would hold if iron were the only gaseous species. The total gas pressure is the sum of all of the pressures of atoms and molecules in the gas phase.

There will always be some iron in the gaseous phase, no matter how low the temperature drops, but there is a relatively narrow temperature range where the pressure in the gas phase drops very rapidly, and for temperatures below this range, it is a good approximation to assume that all of the iron has passed into the solid phase. Workers in this field often take the temperature at which the partial pressure has dropped to half of its original value, and call it a 50% condensation temperature.

Figure 9.2: Condensation of Iron from the Gaseous Phase. The vertical axis gives the partial pressure of gaseous iron. Temperature is plotted on the horizontal axis, increasing to the right. A straight line divides regions of the plot where gaseous and solid iron are the dominant phases.

We can make a plot of the vapor pressure of iron as a function of temperature. For each value of the vapor pressure, there will be a temperature where half of the original vapor has condensed. The plot of figure 9.2 is made so that a straight line on the plot divides the region where the iron is primarily in the gas phase from the one where it is primarily solid. The figure shows that the condensation temperature depends on the partial pressure of gaseous iron. Le Chatelier's principle is a useful mnemonic here. If we consider a fixed temperature, then we would expect a high gas pressure to drive the ``reaction,'' $\rm Fe(g) \longrightarrow Fe(s),$ to the right. So the region where the solid phase dominates is above the dividing line of figure 9.2.
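
The near-straightness of the dividing line is no accident. The vapor pressure of a solid in equilibrium with its gas follows, to a good approximation, a relation of the form

\begin{displaymath}
\log p \approx A - \frac{B}{T} ,
\end{displaymath}

where $A$ is roughly constant and $B$ is fixed by the heat of sublimation of the solid. Plotted with a logarithmic pressure axis against the reciprocal of the temperature, such a relation is a straight line; we assume coordinates of this kind are what make the boundary in figure 9.2 straight.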

Calculations of condensation temperatures have mostly been carried out for cooling gases with the composition of the SAD. They show that the first appreciable solids to form are oxides of aluminum, calcium, and titanium. Two of these oxides are the minerals corundum ($\rm Al_2O_3$) and perovskite ($\rm CaTiO_3$). Materials that form solids at the highest temperatures are called refractory, while those that do not enter the solid phase until the temperature is low are said to be volatile. Corundum and perovskite are thus said to be refractory oxides.

The temperature at which a chemical element comes out of the gaseous phase depends critically on the compounds that it forms. Few elements condense as pure species, as we have indicated for iron. Detailed calculations show this is not a bad approximation for iron itself, but for elements like aluminum or calcium, the assumption would be badly off. Both of these elements come out of the gas phase at high temperatures because they form refractory oxides.

It has nevertheless been the custom of workers in this field to assign condensation temperatures to elements with the understanding that these values depend on the overall composition assumed for the cooling gas.


History and Planetary Densities

Condensation schemes were calculated for the cooling solar nebula beginning with the remarkable work of Nobel Laureate Harold Clayton Urey (1893-1981). He simply applied the laws of thermodynamics that we have discussed in the previous sections to a cooling gas with the SAD composition.

We can simplify the outcome of his work as well as modern improvements of it with the help of three of our four ``elements'' from Chapter 2. Thermodynamics tells us the materials that condense from a gas with the SAD composition as the temperature drops. The first solids are metallic, with a small admixture of refractory oxides. At lower temperatures, rocky materials solidify, followed by the ices.

Urey thought the condensation sequences provided a theoretical basis for understanding the densities of planets at various distances from the sun as shown in table 2-1. In his original theory, there would be a different density for each distance from the sun. This density could be derived from the pressures and temperatures in the solar nebula. Urey pointed out that the decompressed densities should be used in this comparison, since the solids in the solar nebula formed first under low pressures.

The basic condensation model dominated thinking about the early chemistry of the planets--especially the terrestrial ones. It has long been granted that the Jovian planets may have formed largely as a result of their own gravitation. For Mercury through Mars, and perhaps even for some asteroids, condensation seemed the ideal tool to understand the bulk densities. There was a problem understanding what happened to the matter that did not condense, but it was assumed a vigorous wind from the young sun could remove the uncondensed material.

Even as the most detailed condensation calculations were being carried out, evidence began to accumulate that seemed to contradict the notion. Space probes were able to sample the atmospheres of Venus and Mars. Certain meteorites were identified with reasonable probability as being fragments of the planet Mars or the asteroid Vesta. Many of these observations showed that there was no firm relation between the volatility of material and its location in the solar system.

A pillar of the condensation theory has always been the relatively high bulk density of Mercury. If this is not due to condensation at a high temperature, how might it be explained? A popular idea that has persisted since the 1980's is that Mercury once had a structure and composition similar to those of the earth and Venus. These twin planets have rocky mantles and metallic cores that start about halfway down toward their centers. In the case of Mercury, it is possible that much of the rocky mantle was blasted away by meteoroid impact.

We have known since the Mariner missions in the 1960's that meteoroid impacts have scarred the faces of Mars and other solid planetary surfaces. Geochemists realized that the earth must have grown by the accumulation of relatively small solid bodies because of the absence of heavy noble gases in its atmosphere. These gases, argon, krypton, and xenon, are much too heavy to have simply boiled off into interplanetary space; since they are missing, the earth can never have captured its full share of nebular gas in the first place.

Throughout much of the twentieth century astronomical textbooks used this mechanism--boiling off--to explain why the earth and terrestrial planets did not have their SAD complements of hydrogen and helium. It is rather easy to show that most of these two light gases would leave the present earth's atmosphere. But it was never very clear how such demonstrations would apply to a hypothetical body that once had all of the hydrogen and helium that would complement the earth's metal and rock. Recall that about 2% by mass of the SAD is in elements other than hydrogen and helium. Consequently, a protoearth might have been 50 times its present mass, not as massive as Jupiter or Saturn, but more massive than Uranus and Neptune. It is at least problematical whether such a body might have lost its hydrogen and helium.
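
The factor of 50 follows directly from the SAD. If metal and rock make up about 2% of the total mass, then a protoearth carrying its full complement of hydrogen and helium would have had a mass of roughly

\begin{displaymath}
M \approx \frac{M_{\rm Earth}}{0.02} = 50\, M_{\rm Earth} ,
\end{displaymath}

which may be compared with about 318 earth masses for Jupiter, 95 for Saturn, and 15 to 17 for Uranus and Neptune.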

The geochemists had it right all along. The earth and terrestrial planets never did have their full complement of hydrogen, helium, and other volatiles from the SAD because they formed from small solid bodies mostly of metal and rock. In the heyday of condensation, it was thought the terrestrial planets had the composition of solids that could condense at various distances from the sun. Now, it seems any given inner planet might have been formed from meteoroids and planetesimals from a wide range of radii, perhaps stretching to the ``snow line'' some 3 to 4 AU from the sun.

It is now possible to follow by computer the motions of a large number of planetesimals. The American planetary scientist George Wetherill has made steadily improving calculations of this kind. He follows the paths of these bodies, and has plausible formulae to decide when a collision will result in a larger object or smaller fragments. He has made models of planetary systems that could form from the coagulation and fragmentation of such bodies, and some resemble the present solar system.

In the late 1970's two hypotheses involving the impact of a meteoroid or planetesimal on the earth became well known. Possibly the better known of these was for a relatively recent impact, some 65 million years ago. The geologist Walter Alvarez and his father, the physicist Luis Alvarez, suggested that the extinction of the dinosaurs might be related to the aftermath of the impact of a large meteoroid. This hypothesis is now widely accepted, but with some reservations, because extinctions could be shown to have taken place both before and after the impact. A great deal has been written on dinosaur extinction, so we need not pursue the matter here.

The astrophysicist A. G. W. Cameron has made many contributions to modern astronomy, in addition to his work on nucleosynthesis. In 1976 he and a collaborator wrote a paper claiming that the Moon formed from debris when a relatively large body, perhaps 0.1 earth masses, collided with the earth. We shall have more to say about this theory of the Moon in a later chapter. Today it is probably cited as the most likely of the many theories for the origin of the Moon.

Today, meteoroid and planetesimal impacts are being explored as relevant to the chemistry of planets in many ways that were not mentioned several decades ago. Perhaps a majority of planetary astronomers now prefer the ``blasting of the mantle'' hypothesis over condensation as the explanation of Mercury's high density.

Does this mean that Urey's idea was wrong? Most workers have been unwilling to abandon the notion completely. Surely processes in the early solar system were more complex than in the simplest equilibrium condensation model explored by Urey. But the planetesimals that formed the terrestrial planets were basically of metal and rocky compositions. And it is very tempting to conclude the lack of icy (volatile) material was due to condensation at a relatively high temperature.


Condensation Temperatures and Volatility

The University of Chicago has a long history of research on the cosmochemistry of the solar system. A prime goal of much of this work was to determine the primitive composition of the solar nebula, what we have called the SAD. Gordon Goles, a graduate of Chicago, facetiously compared the attempt to find this primitive composition to the quest for the Holy Grail. The Chicago geochemist Edward Anders may have been the principal knight in this noble quest. Unlike some of the legendary Grail hunters, Anders has had good success. Most scientists accept his judgement that the SAD is best represented by a special type of meteorite.

There is a class of meteorites known as carbonaceous chondrites. While they are similar in composition, they are not identical. All are unusual in having some 10 to 100 times more carbon than is found in other meteorites. Recent work has revealed microscopic grains of diamond and silicon carbide that are ``presolar'' in their isotopic compositions. At the time Anders and his collaborators focused on a specific subclass of the carbonaceous chondrites as the best SAD emulators, the classification designations were slightly different from the ones used today.

Anders and his coworkers considered the content of obvious volatile substances in the three subclasses, called CI, CII, and CIII. Water is the most important volatile. A well-publicized plot from this group showed ratios of various elements in CII's and CIII's relative to the CI's. If the abundance of an element was the same in all three classes, the ratios would be unity. If there were less of an element than in the CI's, the ratio would be less than unity. Figure 9.3 is a plot similar to the ones made by Anders and his coworkers, but with newer material.

Figure 9.3: Carbonaceous Chondrite Compositions Indicate Volatilities. In this diagram, CM meteorites are substituted for the CII's used by Anders, while CV's play the role of the CIII's. Elements with the same meteorite compositions as the volatile-rich CI carbonaceous chondrites plot at unity on the y-axis. Thus fluorine, sulfur, chlorine, zinc, selenium, and bromine are among the more volatile elements. Scandium, titanium, iron, and lanthanum are involatile.

Let us grant that all of the volatiles should be most abundant in the CI's, because that is where we find the most moisture (water). Then we can classify the relative volatility of the various chemical elements by how far they deviate from the CI composition. In the plot, the most volatile elements are displaced downward by the largest amounts. We might refer to a classification of the volatility of elements made in this way as an empirical classification.

We can also compare our judgements of the volatility from figure 9.3 with theoretical calculations of the condensation temperatures for the elements. This is done in the top part of the figure. These temperatures were taken from a book on meteorites and the early solar system by the American cosmochemist John T. Wasson. Certain of the most volatile elements are not shown. Hydrogen itself, carbon, oxygen, and nitrogen are strongly depleted in all of these meteorites relative to the solar composition. We will call these elements supervolatiles.

Elements with the lowest condensation temperatures are the most volatile, by definition. Volatility as defined by calculation is a theoretical classification.

What we see from figure 9.3 is that there is a good, though not perfect, correspondence between the empirical and theoretical measures of volatility. Perfect correspondence is not to be expected. It is certain that the theoretical model does not and cannot take into account the complexities by which volatiles are depleted in the three classes of carbonaceous chondrites.

All meteorites have spent time as a part of larger parent bodies, most probably in the asteroid belt. We know that many underwent extensive heating and melting, but this was not true for the CI meteorites. Geologists can tell this by examining the texture of the materials under a special microscope. The carbonaceous chondrites were subjected to minimal heating, and the CI's least of all. We know this because of the difference in volatile contents between the CI's and the other classes.

All of the carbonaceous chondrites have lost supervolatiles. This is obvious, because they don't have their SAD complements of hydrogen and helium, not to mention carbon, oxygen, and nitrogen. It seems reasonable to attribute the volatile loss within the class of carbonaceous chondrites to heating that took place on the parent bodies themselves, or possibly within the solar nebula. The carbonaceous chondrites other than the CI's contain small, often rounded inclusions called chondrules. These chondrules do appear to be igneous in nature, and their volatile content is low. Generally speaking, among the carbonaceous meteorites, low overall volatile content is associated with a high proportion of these chondrules.

Figure 9.4: Logarithmic Abundances in the Sun and Type I Carbonaceous Chondrites. Meteoritic abundances from J. T. Wasson. Solar abundances from a tabulation by N. Grevesse.

The strongest support for Anders's selection of the Type I carbonaceous chondrites ``as the Holy Grail'' comes from a comparison of their abundances with determinations from the solar atmosphere. Figure 9.4 shows a plot of the logarithm of abundances determined in the sun and in the CI carbonaceous chondrites. The most volatile elements, carbon, nitrogen, oxygen, and the noble gases, are not plotted. For the remainder, the correspondence is remarkably good. Deviations from a straight line seem significant for only a few elements, and these may be due to errors.

The importance of finding the meteoritic Holy Grail is that the composition of these meteorites may be used to estimate SAD compositions for elements whose abundances are not well known in the sun. The CI's arguably provide the best values for SAD abundances of elements that are not supervolatile. It is common among earth scientists to assume that the nonvolatile composition of the mantles of the terrestrial planets closely resembles that of CI meteorites. Similarly, the cores of these planets are assumed to be chemically similar to meteoritic nickel-iron alloys.


Summary

Thermodynamics provides a basis for understanding many of the processes that take place in the early solar nebula. With the help of this discipline, we have been able to predict sequences in which solids would form from a gas with the SAD composition as the temperature drops.

It is probable that the temperature in the solar nebula decreased with distance from the sun. The first materials to solidify from a nebula with the SAD composition would be primarily metallic, followed by rocky ones. At greater distances from the sun, ices would condense. The terrestrial planets undoubtedly owe their metallic and rocky natures to the early condensation of refractory or involatile materials. It is an open question whether the density decline from Mercury through Mars can be explained primarily in terms of condensation, or whether we have been fooled into thinking so by Mercury's high density. Many now think Mercury has a high density because its rocky mantle was blasted off by meteoroid impact.

A certain class of meteorites known as CI carbonaceous chondrites provides us with the best estimates of the nonvolatile chemical elements in the sun. Most SAD abundances are based on analyses of these meteorites. Planetary abundances are often assumed to be similar to those in various meteorites.
