CHAPTER FIVE

“Reg, transporting really is the safest way to travel.” Geordi LaForge to Lieutenant Reginald Barclay,

in “Realm of Fear”

Life imitates art. Lately, I keep hearing the same question: “Atoms or bits: where does the future lie?” Thirty years ago, Gene Roddenberry dealt with this same speculation, driven by another imperative. He had a beautiful design for a starship, with one small problem: like a penguin in the water, the Enterprise could glide smoothly through the depths of space, but like a penguin on the ground it clearly would have trouble with its footing if it ever tried to land. More important perhaps, the meager budget for a weekly television show precluded landing a huge starship every week.

How then to solve this problem? Simple: make sure the ship would never need to land. Find some other way to get the crew members from the ship to a planet's surface. No sooner could you say, “Beam me up” than the transporter was born.

Perhaps no other piece of technology, save for the warp drive, so colors every mission of every starship of the Federation. And even those who have never watched a Star Trek episode recognize the magic phrase on the preceding page. It has permeated our popular culture. I recently heard about a young man who, while inebriated, drove through a red light and ran into a police cruiser that happened to be lawfully proceeding through the intersection. At his hearing, he was asked if he had anything to say. In well-founded desperation, he replied, “Yes, your honor,” stood up, took out his wallet, flipped it open, and muttered into it, “Beam me up, Scotty!”

The story is probably apocryphal, but it is testimony to the impact that this hypothetical technology has had on our culture, an impact all the more remarkable given that probably no single piece of science fiction technology aboard the Enterprise is so utterly implausible. More problems of practicality and principle would have to be overcome to create such a device than you might imagine. The challenges involve the whole spectrum of physics and mathematics, including information theory, quantum mechanics, Einstein's relation between mass and energy, elementary particle physics, and more.

Which brings me to the atoms versus bits debate. The key question the transporter forces us to address is the following: Faced with the task of moving, from the ship to a planet's surface, roughly 10^28 (1 followed by 28 zeroes) atoms of matter combined in a complex pattern to make up an individual human being, what is the fastest and most efficient way to do it? This is a very timely question, because we are facing exactly the same quandary as we consider how best to disseminate the complex pattern of roughly 10^26 atoms in an average paperback book. A potentially revolutionary concept, at least so claimed by various digital-media gurus, is that the atoms themselves are often secondary. What matters more are the bits.

Consider, for example, a library book. A library buys one copy (or, for some lucky authors, several copies) of a book, which it stores and lends out for use by one individual at a time. However, in a digital library the same information can be stored as bits. A bit is a 1 or a 0; bits are combined in groups of eight, called bytes, to represent characters or numbers. This information is stored in the magnetic memory cores of computers, in which each bit is represented as either a magnetized (1) or unmagnetized (0) region. Now an arbitrarily large number of users can access the same memory location on a computer at essentially the same time, so in a digital library every single person on Earth who might otherwise have to buy a book can read it from a single source. Clearly, in this case, having on hand the actual atoms that make up the book is less significant, and certainly less efficient, than storing the bits (although it will play havoc with authors' royalties).
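
A minimal sketch (mine, not the book's) of what storing text as bits looks like in practice; each character of ordinary text becomes one 8-bit byte:

```python
# Purely illustrative: a short string of text represented as bits,
# one 8-bit byte per character.
text = "Beam me up"
bits = "".join(format(byte, "08b") for byte in text.encode("ascii"))
print(len(text), "characters ->", len(bits), "bits")  # 10 characters -> 80 bits
print(bits[:16])  # the first two bytes: 'B' = 01000010, 'e' = 01100101
```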

So, what about people? If you are going to move people around, do you have to move their atoms or just their information? At first you might think that moving the information is a lot easier; for one thing, information can travel at the speed of light. However, in the case of people, you have two problems you don't have with books: first, you have to extract the information, which is not so easy, and then you have to recombine it with matter. After all, people, unlike books, require the atoms.

The Star Trek writers never seem to have made it exactly clear what they want the transporter to do. Does the transporter send the atoms and the bits, or just the bits? You might wonder why I make this point, since the Next Generation Technical Manual describes the process in detail: First the transporter locks on target. Then it scans the image to be transported, “dematerializes” it, holds it in a “pattern buffer” for a while, and then transmits the “matter stream,” in an “annular confinement beam,” to its destination. The transporter thus apparently sends out the matter along with the information.

The only problem with this picture is that it is inconsistent with what the transporter sometimes does. On at least two well-known occasions, the transporter has started with one person and beamed up two. In the famous classic episode “The Enemy Within,” a transporter malfunction splits Kirk into two different versions of himself, one good and one evil. In a more interesting, and permanent, twist, in the Next Generation episode “Second Chances,” we find out that Lieutenant Riker was earlier split into two copies during transportation from the planet Nervala IV to the Potemkin. One version returned safely to the Potemkin and one was reflected back to the planet, where he lived alone for eight years.

If the transporter carries both the matter stream and the information signal, this splitting phenomenon is impossible. The number of atoms you end up with has to be the same as the number you began with. There is no possible way to replicate people in this manner. On the other hand, if only the information were beamed up, one could imagine combining it with atoms that might be stored aboard a star-ship and making as many copies as you wanted of an individual.

A similar problem concerning the matter stream faces us when we consider the fate of objects beamed out into space as “pure energy.” For example, in the Next Generation episode “Lonely Among Us,” Picard chooses at one point to beam out as pure energy, free from the constraints of matter. After this proves a dismal and dangerous experience, he manages to be retrieved, and his corporeal form is restored from the pattern buffer. But if the matter stream had been sent out into space, there would have been nothing to restore at the end.

So, the Star Trek manual notwithstanding, I want to take an agnostic viewpoint here and instead explore the myriad problems and challenges associated with each possibility: transporting the atoms or the bits.

WHEN A BODY HAS NO BODY: Perhaps the most fascinating question about beaming, one that is usually not even addressed, is: What comprises a human being? Are we merely the sum of all our atoms? More precisely, if I were to re-create each atom in your body, in precisely the same chemical state of excitation as your atoms are in at this moment, would I produce a functionally identical person who has exactly all your memories, hopes, dreams, spirit? There is every reason to expect that this would be the case, but it is worth noting that it flies in the face of a great deal of spiritual belief about the existence of a “soul” that is somehow distinct from one's body. What happens when you die, after all? Don't many religions hold that the “soul” can exist after death? What then happens to the soul during the transport process? In this sense, the transporter would be a wonderful experiment in spirituality. If a person were beamed aboard the Enterprise and remained intact and observably unchanged, it would provide dramatic evidence that a human being is no more than the sum of his or her parts, and the demonstration would directly confront a wealth of spiritual beliefs.

For obvious reasons, this issue is studiously avoided in Star Trek. However, in spite of the purely physical nature of the dematerialization and transport process, the notion that some nebulous “life force” exists beyond the confines of the body is a constant theme in the series. The entire premise of the second and third Star Trek movies, The Wrath of Khan and The Search for Spock, is that Spock, at least, has a “katra,” a living spirit that can exist apart from the body. More recently, in the Voyager series episode “Cathexis,” the “neural energy” (akin to a life force) of Chakotay is removed and wanders around the ship from person to person in an effort to get back “home.”

I don't think you can have it both ways. Either the “soul,” the “katra,” the “life force,” or whatever you want to call it is part of the body, and we are no more than our material being, or it isn't. In an effort not to offend religious sensibilities, even a Vulcan's, I will remain neutral in this debate. Nevertheless, I thought it worth pointing out before we forge ahead that even the basic premise of the transporter, that the atoms and the bits are all there is, should not be taken lightly.

THE PROBLEM WITH BITS: Many of the problems I will soon discuss could be avoided if one were to give up the requirement of transporting the atoms along with the information. After all, anyone with access to the Internet knows how easy it is to transport a data stream containing, say, the detailed plans for a new car, along with photographs. Moving the actual car around, however, is nowhere near as easy. Nevertheless, two rather formidable problems arise even in transporting the bits. The first is a familiar quandary, faced, for example, by the last people to see Jimmy Hoffa alive: How are we to dispose of the body? If just the information is to be transported, then the atoms at the point of origin must be dispensed with and a new set collected at the reception point. This problem is quite severe. If you want to zap 10^28 atoms, you have quite a challenge on your hands. Say, for example, that you simply want to turn all this material into pure energy. How much energy would result? Well, Einstein's formula E = mc^2 tells us. If one suddenly transformed 50 kilograms (a light adult) of material into energy, one would release the energy equivalent of somewhere in excess of a thousand 1-megaton hydrogen bombs. It is hard to imagine how to do this in an environmentally friendly fashion.
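
As a rough check of that figure (my own back-of-the-envelope arithmetic, not the author's), one can plug the numbers into E = mc^2:

```python
# The annihilation energy of 50 kg of matter, measured in 1-megaton bombs.
c = 3.0e8            # speed of light, m/s
m = 50.0             # mass of a light adult, kg
E = m * c**2         # E = mc^2, joules
megaton = 4.2e15     # energy released by a 1-megaton explosion, joules
print(f"E = {E:.1e} J ~ {E / megaton:.0f} one-megaton hydrogen bombs")
# -> E = 4.5e+18 J ~ 1071 one-megaton hydrogen bombs
```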

There is, of course, another problem with this procedure. If it is possible, then replicating people would be trivial. Indeed, it would be much easier than transporting them, since the destruction of the original subject would then not be necessary. Replication of inanimate objects in this manner is something one can live with, and indeed the crew members aboard starships do seem to live with this. However, replicating living human beings would certainly be cause for trouble (à la Riker in “Second Chances”). Indeed, if recombinant DNA research today has raised a host of ethical issues, the mind boggles at those which would be raised if complete individuals, including memory and personality, could be replicated at will. People would be like computer programs, or drafts of a book kept on disk. If one of them gets damaged or has a bug, you could simply call up a backup version.

OK, KEEP THE ATOMS: The preceding arguments suggest that on both practical and ethical grounds it might be better to imagine a transporter that carries a matter stream along with the signal, just as we are told the Star Trek transporters do. The problem then becomes, How do you move the atoms? Again, the challenge turns out to be energetics, although in a somewhat more subtle way.

What would be required to “dematerialize” something in the transporter? To answer this, we have to consider a little more carefully a simpler question: What is matter? All normal matter is made up of atoms, which are in turn made up of very dense central nuclei surrounded by a cloud of electrons. As you may recall from high school chemistry or physics, most of the volume of an atom is empty space. The region occupied by the outer electrons is about ten thousand times larger than the region occupied by the nucleus.

Why, if atoms are mostly empty space, doesn't matter pass through other matter? The answer is that what makes a wall solid is not the existence of the particles themselves but the electric fields between them. My hand is stopped from going through my desk when I slam it down primarily because of the electric repulsion felt by the electrons in the atoms of my hand due to the presence of the electrons in the atoms of the desk, and not because of any lack of available space for the electrons to move through.

These electric fields not only make matter corporeal, in the sense of stopping objects from passing through one another, but they also hold the matter together. To alter this normal situation, one must therefore overcome the electric forces between atoms. Overcoming these forces will require work, which takes energy. Indeed, this is how all chemical reactions work. The configuration of individual sets of atoms and their binding to one another are altered through the exchange of energy. For example, if one injects some energy into a mixture of ammonium nitrate and fuel oil, the molecules of the two materials can rearrange, and in the process the “binding energy” holding the original materials together can be released. This release, if fast enough, will cause a large explosion.

The binding energy between atoms is, however, minuscule compared to the binding energy of the particles (protons and neutrons) that make up the incredibly dense nuclei of atoms. The forces holding these particles together in a nucleus result in binding energies that are millions of times stronger than the atomic binding energies. Nuclear reactions therefore release significantly more energy than chemical reactions, which is why nuclear weapons are so powerful.

Finally, the binding energy that holds together the elementary particles, called quarks, which make up the protons and neutrons themselves is yet larger than that holding together the protons and neutrons in nuclei. In fact, it is currently believed, based on all calculations we can perform with the theory describing the interactions of quarks, that it would take an infinite amount of energy to completely separate the quarks making up each proton or neutron.

Based on this argument, you might expect that breaking matter completely apart into quarks, its fundamental constituents, would be impossible, and it is, at least at room temperature. However, the same theory that describes the interactions of quarks inside protons and neutrons tells us that if we were to heat up the nuclei to about 1000 billion degrees (about a million times hotter than the temperature at the core of the Sun), then not only would the quarks inside lose their binding energies but at around this temperature matter would suddenly lose almost all of its mass. Matter would turn into radiation, or, in the language of our transporter, matter would dematerialize.

So, all you have to do to overcome the binding energy of matter at its most fundamental level (indeed, at the level referred to in the Star Trek technical manual) is to heat it up to 1000 billion degrees. In energy units, this implies providing about 10 percent of the rest mass of protons and neutrons in the form of heat. To heat up a sample the size of a human being to this level would therefore require about 10 percent of the energy needed to annihilate the material, or the energy equivalent of a hundred 1-megaton hydrogen bombs.
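
Again as a rough, hedged check of the arithmetic (the 10 percent figure is the estimate quoted above):

```python
# Only 10 percent of the rest-mass energy is supplied as heat this time.
c = 3.0e8                   # speed of light, m/s
m = 50.0                    # kg, the same light adult as before
E_heat = 0.10 * m * c**2    # ~10% of the rest-mass energy, joules
megaton = 4.2e15            # joules per 1-megaton bomb
print(f"E_heat = {E_heat:.1e} J ~ {E_heat / megaton:.0f} one-megaton hydrogen bombs")
# -> E_heat = 4.5e+17 J ~ 107 one-megaton hydrogen bombs
```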

One might suggest, given this daunting requirement, that the scenario I have just described is overkill. Perhaps we don't have to break down matter to the quark level. Perhaps a dematerialization at the proton and neutron level, or maybe even the atomic level, is sufficient for the purposes of the transporter. Certainly the energy requirements in this case would be vastly less, even if formidable. Unfortunately, sweeping this problem under the rug exposes one that is more severe. For once you have the matter stream, made now of individual protons and neutrons and electrons, or perhaps whole atoms, you have to transport it, presumably at a significant fraction of the speed of light.

Now, in order to get particles like protons and neutrons to move near the speed of light, one must give them an energy comparable to their rest-mass energy. This turns out to be about ten times larger than the amount of energy required to heat up and “dissolve” the protons into quarks. Nevertheless, even though it takes more energy per particle to accelerate the protons to near light speed, this is still easier to accomplish than to deposit and store enough energy inside the protons for long enough to heat them up and dissolve them into quarks. This is why today we can build, albeit at great cost, enormous particle accelerators, like Fermilab's Tevatron in Batavia, Illinois, which can accelerate individual protons up to more than 99.9 percent of the speed of light, but we have not yet managed to build an accelerator that can bombard protons with enough energy to “melt” them into their constituent quarks. In fact, it is one of the goals of physicists designing the next generation of large accelerators, including one device being built at Brookhaven National Laboratory on Long Island, to actually achieve this “melting” of matter.
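
To see why kinetic energies comparable to the rest-mass energy are unavoidable near light speed, here is a short illustrative calculation (the particular speeds are my choices, not the book's):

```python
# Relativistic kinetic energy per proton at a few speeds.
import math

m_p_c2 = 0.938  # proton rest-mass energy, GeV
for beta in (0.87, 0.99, 0.999):
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    ke = (gamma - 1.0) * m_p_c2          # kinetic energy, GeV
    print(f"v = {beta}c -> kinetic energy ~ {ke:.2f} GeV")
# Already at ~0.87c the kinetic energy (~0.9 GeV) is roughly ten times the
# ~0.1 GeV per nucleon of heat needed to "melt" protons into quarks.
```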

Yet again I am impressed with the apt choice of terminology by the Star Trek writers. The melting of protons into quarks is what we call in physics a phase transition. And lo and behold, if one scours the Next Generation Technical Manual for the name of the transporter instruments that dematerialize matter, one finds that they are called “phase transition coils.”

So the future designers of transporters will have a choice. Either they must find an energy source that will temporarily produce a power that exceeds the total power consumed on the entire Earth today by a factor of about 10,000, in which case they could make an atomic “matter stream” capable of moving along with the information at near the speed of light, or they could reduce the total energy requirements by a factor of 10 and discover a way to heat up a human being instantaneously to roughly a million times the temperature at the center of the Sun.
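
The power comparison can be sketched roughly as follows; note that the one-minute beam duration is my assumption, since the text does not specify a transport time:

```python
# A rough, hedged version of the power comparison.
c = 3.0e8
m = 50.0
E_beam = m * c**2          # energy of a near-light-speed matter stream ~ rest-mass energy, J
t_beam = 60.0              # assumed duration of the transport, seconds
P_beam = E_beam / t_beam   # required power, watts
P_earth = 1.2e13           # rough total power consumption of present-day Earth, watts
print(f"P_beam ~ {P_beam:.1e} W, about {P_beam / P_earth:.0f} times Earth's total power")
# -> P_beam ~ 7.5e+16 W, about 6250 times Earth's total power
```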

IF THIS IS THE INFORMATION SUPERHIGHWAY, WE'D BETTER GET IN THE FAST LANE: As I write this on my Power PC-based home computer, I marvel at the speed with which this technology has developed since I bought my first Macintosh a little over a decade ago. I remember that the internal memory in that machine was 128 kilobytes, as opposed to the 16 megabytes in my current machine and the 128 megabytes in the fast workstation I have in my office in Case Western Reserve's Physics Department. Thus, in a decade my computer internal-memory capabilities have increased by a factor of 1000! This increase has been matched by an increase in the capacity of my hard-drive memory. My first machine had no hard drive at all and thus had to work from floppy disks, which held 400 kilobytes of information. My present home machine has a 500-megabyte hard drive, again an increase of more than a factor of 1000 in my storage capabilities. The speed of my home system has also greatly increased in the last decade. For doing actual detailed numerical calculations, I estimate that my present machine is almost a hundred times faster than my first Macintosh. My office workstation is perhaps ten times faster still, performing close to half a billion instructions per second!

Even at the cutting edge, the improvement has been impressive. The fastest computers used for general-purpose computing have increased in speed and memory capability by a factor of about 100 in the past decade. And I am not including here computers built for special-purpose work: these little marvels can have effective speeds exceeding tens of billions of instructions per second. In fact, it has been shown that in principle certain special-purpose devices could be built using biological, DNA-based systems, which could be orders of magnitude faster.

One might wonder where all this is heading, and whether we can extrapolate the past rapid growth to the future. Another valid question is whether we need to keep up this pace. I find already that the rate-determining step on the information superhighway is the end user. We can assimilate only so much information. Try surfing the Internet for a few hours, if you want a graphic example of this. I often wonder why, with the incredible power at my disposal, my own productivity has not increased nearly as dramatically as my computer's. I think the answer is clear. I am not limited by my computer's capabilities but by my own capabilities. It has been argued that for this reason computing machines could be the next phase of human evolution. It is certainly true that Data, even without emotions, is far superior to his human crewmates in most respects. And, as determined in “The Measure of a Man,” he is a genuine life-form.

But I digress. The point of noting the growth of computer capability in the last decade is to consider how it compares with what we would need to handle the information storage and retrieval associated with the transporter. And of course, it doesn't come anywhere close.

Let's make a simple estimate of how much information is encoded in a human body. Start with our standard estimate of 10^28 atoms. For each atom, we first must encode its location, which requires three coordinates (the x, y, and z positions). Next, we would have to record the internal state of each atom, which would include things like which energy levels are occupied by its electrons, whether it is bound to a nearby atom to make up a molecule, whether the molecule is vibrating or rotating, and so forth. Let's be conservative and assume that we can encode all the relevant information in a kilobyte of data. (This is roughly the amount of information on a double-spaced typewritten page.) That means we would need roughly 10^28 kilobytes to store a human pattern in the pattern buffer. I remind you that this is a 1 followed by 28 zeros.
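
The arithmetic of that estimate is simple enough to sketch (one kilobyte per atom is the assumption made above):

```python
# The storage estimate in one line of arithmetic.
atoms = 1e28               # atoms in a human body
kb_per_atom = 1.0          # kilobytes needed per atom (position plus internal state)
total_kb = atoms * kb_per_atom
print(f"Pattern size ~ {total_kb:.0e} kilobytes ({total_kb * 1e3:.0e} bytes)")
# -> Pattern size ~ 1e+28 kilobytes (1e+31 bytes)
```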

Compare this with, say, the total information stored in all the books ever written. The largest libraries contain several million volumes, so let's be very generous and say that there are a billion different books in existence (one written for every five people now alive on the planet). Say each book contains the equivalent of a thousand typewritten pages of information (again on the generous side), or about a megabyte. Then all the information in all the books ever written would require about 10^12, or about a million million, kilobytes of storage. This is about sixteen orders of magnitude, or about one ten-millionth of a billionth, smaller than the storage capacity needed to record a single human pattern! When numbers get this large, it is difficult to comprehend the enormity of the task. Perhaps a comparison is in order. The ratio of the storage needed for a human pattern to the information in all the books ever written is about ten thousand times larger than the ratio of the information in all the books ever written to the information on this page.
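
A quick check of the library comparison, using the same deliberately generous assumptions:

```python
# All the books ever written versus a single human pattern.
books = 1e9                         # roughly one book for every five people alive
kb_per_book = 1e3                   # ~a thousand typed pages, i.e. about a megabyte
library_kb = books * kb_per_book    # all the books ever written
human_kb = 1e28                     # the human-pattern estimate from above
print(f"All books ~ {library_kb:.0e} kB; human pattern is {human_kb / library_kb:.0e} times larger")
# -> All books ~ 1e+12 kB; human pattern is 1e+16 times larger (sixteen orders of magnitude)
```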

Storing this much information is, in an understatement physicists love to use, nontrivial. At present, the largest commercially available single hard disks store about 10 gigabytes, or 10,000 megabytes, of information. If each disk is about 10 cm thick, then a stack of all the disks currently needed to store a human pattern would reach a third of the way to the center of the galaxy, about 10,000 light-years, or about 5 years' travel in the Enterprise at warp 9!
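
And the stack-of-disks picture, with the capacity and thickness quoted above:

```python
# How tall a stack of 10-gigabyte disks would be needed for one human pattern.
human_kb = 1e28                 # human pattern, kilobytes
disk_kb = 10 * 1e6              # one 10-gigabyte disk holds 10^7 kilobytes
disk_thickness_m = 0.10         # 10 cm per disk
n_disks = human_kb / disk_kb
height_m = n_disks * disk_thickness_m
light_year_m = 9.46e15
print(f"{n_disks:.0e} disks, a stack ~ {height_m / light_year_m:.0f} light-years tall")
# -> 1e+21 disks, a stack ~ 10571 light-years tall
```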

Retrieving this information in real time is no less of a challenge. The fastest digital information transfer mechanisms at present can move somewhat less than about 100 megabytes per second. At this rate, it would take about 2000 times the present age of the universe (assuming an approximate age of 10 billion years) to write the data describing a human pattern to tape! Imagine then the dramatic tension: Kirk and McCoy have escaped to the surface of the penal colony at Rura Penthe. You don't have even the age of the universe to beam them back, but rather just seconds to transfer a million billion billion megabytes of information in the time it takes the jailor to aim his weapon before firing.

I think the point is clear. This task dwarfs the ongoing Human Genome Project, whose purpose is to scan and record the complete human genetic code contained in microscopic strands of human DNA. This is a multibillion-dollar endeavor, being carried out over at least a decade and requiring dedicated resources in many laboratories around the world. So you might imagine that I am mentioning it simply to add to the transporter-implausibility checklist. However, while the challenge is daunting, I think this is one area that could possibly be up to snuff in the twenty-third century. My optimism stems merely from extrapolating the present growth rate of computer technology. Using my previous yardstick of improvement in storage and speed by a factor of 100 each decade, dividing it by 10 to be conservative, and given that we are about 21 powers of 10 short of the mark now, one might expect that 210 years from now, at the dawn of the twenty-third century, we will have the computer technology on hand to meet the information-transfer challenge of the transporter.
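
The extrapolation itself is one line of arithmetic (the conservative factor of 10 per decade is the growth rate assumed above):

```python
# How long a factor-of-10-per-decade growth takes to close a 21-order-of-magnitude gap.
shortfall_in_powers_of_ten = 21    # how far today's machines fall short
powers_of_ten_per_decade = 1       # conservative growth: a factor of 10 each decade
years = 10 * shortfall_in_powers_of_ten / powers_of_ten_per_decade
print(f"~{years:.0f} years to close the gap")   # -> ~210 years
```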

I say this, of course, without any idea of how. It is clear that in order to be able to store in excess of 10^28 kilobytes of information in any human-scale device, each and every atom of the device will have to be exploited as a memory site. The emerging notions of biological computers, in which molecular dynamics mimics digital logical processes and the 10^25 or so particles in a macroscopic sample all act simultaneously, seem to me to be the most promising in this regard.

I should also issue one warning. I am not a computer scientist. My cautious optimism may therefore merely be a reflection of my ignorance. However, I take some comfort in the example of the human brain, which is light-years ahead of any existing computational system in complexity and comprehensiveness. If natural selection can develop such a fine information storage and retrieval device, I believe that there is still a long way we can go.

THAT QUANTUM STUFF: For some additional cold water of reality, two words: quantum mechanics. At the microscopic level necessary to scan and re-create matter in the transporter, the laws of physics are governed by the strange and exotic laws of quantum mechanics, whereby particles can behave like waves and waves can behave like particles. I am not going to give a course in quantum mechanics here. However, the bottom line is as follows: on microscopic scales, that which is being observed and that which is doing the observation cannot be separated. To make a measurement is to alter a system, usually forever. This simple law can be expressed in many different ways, but is probably most famous in the form of the Heisenberg uncertainty principle. This fundamental law (which appears to do away with the classical notion of determinism in physics, although in fact at a fundamental level it doesn't) divides the physical world into two sets of observable quantities: the yin and the yang, if you like. It tells us that no matter what technology is invented in the future, it is impossible to measure certain combinations of observables with arbitrarily high accuracy. On microscopic scales, one might measure the position of a particle arbitrarily well. However, Heisenberg tells us that we then cannot know its velocity (and hence precisely where it will be in the next instant) very well at all. Or, we might ascertain the energy state of an atom with arbitrary precision. Yet in this case we cannot determine exactly how long it will remain in this state. The list goes on.
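
As a concrete, if hypothetical, illustration of the trade-off, consider localizing a single hydrogen atom to roughly atomic dimensions (the numbers are mine, not the book's):

```python
# Pin down one hydrogen atom's position and see how uncertain its velocity
# must be, from the uncertainty relation dx * dp >= hbar / 2.
hbar = 1.055e-34       # reduced Planck constant, J*s
m_H = 1.67e-27         # mass of a hydrogen atom, kg
dx = 1e-10             # position known to ~1 angstrom, about an atom's width, m
dp = hbar / (2 * dx)   # minimum momentum uncertainty, kg*m/s
dv = dp / m_H          # corresponding velocity uncertainty, m/s
print(f"dv >= {dv:.0f} m/s")   # -> dv >= 316 m/s
```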

These relations are at the heart of quantum mechanics, and they will never go away. As long as we work on scales where the laws of quantum mechanics apply, which, as far as all evidence indicates, is at least larger than the scale at which quantum gravitational effects become significant, or at about 10^-33 cm, we are stuck with them.

There is a slightly flawed yet very satisfying physical argument that gives some heuristic understanding of the uncertainty principle. Quantum mechanics endows all particles with a wavelike behavior, and waves have one striking property: they are disturbed only when they encounter objects larger than their wavelength (the distance between successive crests). You have only to observe water waves in the ocean to see this behavior explicitly. A pebble protruding from the surface of the water will have no effect on the pattern of the surf pounding the shore. However, a large boulder will leave a region of calm water in its wake.

So, if we want to “illuminate” an atom, that is, bounce light off it so that we can see where it is, we have to shine light of a wavelength small enough so that it will be disturbed by the atom. However, the laws of quantum mechanics tell us that waves of light come in small packets, or quanta, which we call photons (as in starship “photon torpedoes,” which in fact are not made of photons). The individual photons of each wavelength have an energy inversely related to their wavelength. The greater the resolution we want, the smaller the wavelength of light we must use. But the smaller the wavelength, the larger the energy of the packets. If we bombard an atom with a high-energy photon in order to observe it, we may ascertain exactly where the atom was when the photon hit it, but the observation process itself, that is, hitting the atom with the photon, will clearly transfer significant energy to the atom, thus changing its speed and direction of motion by some amount.
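
The wavelength-energy trade-off can be made concrete with E = hc/λ; the sample wavelengths below are my own choices:

```python
# Photon energy versus wavelength: E = h * c / wavelength.
h = 6.626e-34    # Planck constant, J*s
c = 3.0e8        # speed of light, m/s
eV = 1.602e-19   # joules per electron-volt
for wavelength in (5e-7, 1e-10, 1e-11):   # visible light, atom-sized, sub-atomic
    E = h * c / wavelength
    print(f"wavelength {wavelength:.0e} m -> photon energy ~ {E / eV:.3g} eV")
# Visible light carries a few eV per photon; atomic-scale resolution demands
# keV-class X rays or harder, each photon energetic enough to kick the atom it observes.
```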

It is therefore impossible to resolve atoms and their energy configurations with the accuracy necessary to re-create exactly a human pattern. Residual uncertainty in some of the observables is inevitable. What this would mean for the accuracy of the final product after transport is a detailed biological question I can only speculate upon.

This problem was not lost on the Star Trek writers, who were aware of the inevitable constraints of quantum mechanics on the transporter. Possessing something physicists can't usually call upon, namely artistic license, they introduced “Heisenberg compensators,” which allow “quantum resolution” of objects. When an interviewer asked the Star Trek technical consultant Michael Okuda how Heisenberg compensators worked, he merely replied, “Very well, thank you!”

Heisenberg compensators perform another useful plot function. One may wonder, as I have, why the transporter is not also a replicator of life-forms. After all, a replicator exists aboard starships that allows glasses of water or wine to magically appear in each crew member's quarters on voice command. Well, it seems that replicator technology can operate only at “molecular-level resolution” and not “quantum resolution.” This is supposed to explain why replication of living beings is not possible. It may also explain why the crew continually complains that the replicator food is never quite the same as the real thing, and why Riker, among others, prefers to cook omelets and other delicacies the old-fashioned way.

SEEING IS BELIEVING: One last challenge to transporting, as if one more were needed. Beaming down is hard enough. But beaming up may be even more difficult. In order to transport a crew member back to the ship, the sensors aboard the Enterprise have to be able to spot the crew member on the planet below. More than that, they need to scan the individual prior to dematerialization and matter-stream transport. So the Enterprise must have a telescope powerful enough to resolve objects on and often under a planet's surface at atomic resolution. In fact, we are told that the normal operating range for the transporter is approximately 40,000 kilometers, or about three times the Earth's diameter. This is the number we shall use for the following estimate.

Everyone has seen photographs of the domes of the world's great telescopes, like the Keck telescope in Hawaii (the world's largest) or the Mt. Palomar telescope in California. Have you ever wondered why bigger and bigger telescopes are designed? (It is not just an obsession with bigness, as some people, including many members of Congress, like to accuse science of.) Just as larger accelerators are needed if we wish to probe the structure of matter on ever smaller scales, larger telescopes are needed if we want to resolve celestial objects that are fainter and farther away. The reasoning is simple: Because of the wave nature of light, anytime it passes through an opening it tends to diffract, or spread out a little bit. When the light from a distant point source goes through the telescopic lens, the image will be spread out somewhat, so that instead of seeing a point source, you will see a small, blurred disk of light. Now, if two point sources are closer together across the line of sight than the size of their respective disks, it will be impossible to resolve them as separate objects, since their disks will overlap in the observed image. Astronomers call such disks “seeing disks.” The bigger the lens, the smaller the seeing disk. Thus, to resolve smaller and smaller objects, telescopes must have bigger and bigger lenses.

There is another criterion for resolving small objects with a telescope. The wavelength of light, or whatever radiation you use as a probe, must be smaller than the size of the object you are trying to scan, according to the argument I gave earlier. Thus, if you want to resolve matter on an atomic scale, which is about several billionths of a centimeter, you must use radiation that has a wavelength of less than about one-billionth of a centimeter. If you select electromagnetic radiation, this will require the use of either X rays or gamma rays. Here a problem arises right away, because such radiation is harmful to life, and therefore the atmosphere of any Class M planet will filter it out, as our own atmosphere does. The transporter will therefore have to use nonelectromagnetic probes, like neutrinos or gravitons. These have their own problems, but enough is enough....

In any case, one can perform a calculation, given that the Enterprise is using radiation with a wavelength of less than a billionth of a centimeter and scanning an object 40,000 kilometers away with atomic-scale resolution. I find that in order to do this, the ship would need a telescope with a lens greater than approximately 50,000 kilometers in diameter! Were it any smaller, there would be no possible way even in principle to resolve single atoms. I think it is fair to say that while the Enterprise-D is one large mother, it is not that large.
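
For the record, here is one way to reconstruct that estimate; the diffraction factor of 1.22 and the exact feature size are my assumptions, since the text quotes only rough scales:

```python
# Diffraction-limited lens diameter needed to resolve atomic-scale features
# at transporter range.
wavelength = 1e-11      # probe wavelength, m (a billionth of a centimeter)
feature = 1e-11         # feature size to resolve, m (the text's atomic scale)
distance = 4.0e7        # transporter range, m (40,000 kilometers)
theta = feature / distance           # required angular resolution, radians
D = 1.22 * wavelength / theta        # diffraction-limited lens diameter, m
print(f"Lens diameter ~ {D / 1e3:.0f} km")   # -> Lens diameter ~ 48800 km
```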

As promised, thinking about transporters has led us into quantum mechanics, particle physics, computer science, Einstein's mass-energy relation, and even the existence of the human soul. We should therefore not be too disheartened by the apparent impossibility of building a device to perform the necessary functions. Or, to put it less negatively, building a transporter would require us to heat up matter to a temperature a million times the temperature at the center of the Sun, expend more energy in a single machine than all of humanity presently uses, build telescopes larger than the size of the Earth, improve present computers by a factor of 1000 billion billion, and avoid the laws of quantum mechanics. It's no wonder that Lieutenant Barclay was terrified of beaming! I think even Gene Roddenberry, if faced with this challenge in real life, would probably choose instead to budget for a landable starship.