When Science Inadvertently Aids an Enemy

September 25, 2001 

By GINA KOLATA

In 1975, a 30-year-old electrical engineering researcher at
Stanford had an idea for a new kind of cryptography. He
thought his method could give the public truly unbreakable
codes for their communications and other data, codes even
more powerful than those produced by the National Security
Agency, the nation's premier code-making and code-breaking
agency. 

"I remember thinking, `I've got a tiger by the tail,' "
said the scientist, Dr. Martin Hellman, now an emeritus
professor of electrical engineering at Stanford. Sure
enough, the N.S.A. soon stepped in and tried to control the
dissemination of the work. The new technology could be so
valuable to an enemy, the agency worried, that it should be
classified at birth. 

Dr. Hellman and others resisted. They said that the
agency's threats to restrict them were violations of
academic freedom, an unwarranted limitation on their right
to publish and discuss new ideas. And they worried about
the harm that could be done to national security if the
technology were kept from the private sector, leaving its
corporate data and private communications vulnerable to
eavesdroppers, terrorists and hostile governments. 

In the end, the academic scientists prevailed. They freely
published their research on how to make codes. And
companies soon sprang up to sell them. 

But now, in the aftermath of the terrorist attacks on New
York and the Pentagon, Dr. Hellman and others whose work
spawned the commercialization of high-level cryptography
are wondering if they did the right thing. They are haunted
by the idea that law enforcement agencies might have
figured out what the terrorists were planning if powerful
encryption techniques had been kept secret. 

And even if these particular terrorists relied on
hand-delivered notes and other forms of communication not
vulnerable to eavesdropping, what about other terrorists
who might be lurking? 

"Everything's changed," Dr. Hellman said. 

When the
technology was new, though, the issues looked clear-cut. Of
course cryptography should be easily available to everyone,
Dr. Hellman and other academic researchers argued.
Twenty-five years ago, he said, "I saw myself as Luke
Skywalker and the N.S.A. as Darth Vader." Several years
later, he said, "in a period of deep personal
introspection, I saw how human but how ridiculous that view
was." 

But now, he said, with the country searching for terror
suspects, the old questions have resurfaced with new force.
"I could say, I made exactly the right decision," he said,
"or I could say, If I could have envisioned this, I would
never have published those papers." 

It probably is too late to take back cryptography even if
people wanted to, experts say. But the "what if?" games of
history can leave an indelible mark on today's debates over
how, and whether, to control new technologies that can
transform balances of power and be used for good and evil. 

Already, arguments have begun over a very different
science, nanotechnology — the use of molecular machines to
build structures atom by atom. At least one nanotechnology
researcher, Dr. Ralph C. Merkle, started out in
cryptography (he worked with Dr. Hellman), and so the new
debate almost gives him a sense of déjà vu. 

Nanotechnology could produce computers so powerful that
today's machines would seem like clunky toys. And it could
produce weapons with the power of a supercomputer embedded
on the head of a bullet. 

It could provide tiny robots to go into blood vessels and
clean out plaque — or microscopic robots that could kill
instead of heal, and in ways far more predictable and
precise than anything envisioned in germ warfare. One
nanotechnology expert, Glenn H. Reynolds, a law professor
at the University of Tennessee, said that someday it might
even be used to make tiny robots that would lodge in
people's brains and make them truly love Big Brother. 

It is a technology whose consequences could be so
terrifying that one scientist, Dr. K. Eric Drexler, at
first thought that he should never tell anyone what he was
imagining, for fear that those dreadful abuses might come
to pass. 

Scientists struggling with the promise and peril of
nanotechnology say they look at the issues that arose with
cryptography and see chilling parallels. Once again, they
say, advances in technology are creating thorny moral
issues. And once again, there are no easy answers on how to
proceed. 

The cryptography story unfolded in the years just after
Watergate, when many academic scientists distrusted the
federal government, and at a time when banks and other
corporations were becoming concerned that their electronic
data were insecure. 

The academic scientists came up with a startling idea.
Encryption is a way of scrambling data; for quite some
time, this has meant using mathematical formulas to
transform data (words or numbers, say) into unreadable
strings. Dr. Hellman and Dr. Merkle, who now works for
Zyvex, a nanotechnology company in Richardson, Tex., and
Dr. Whitfield Diffie, a student of Dr. Hellman who now
works for Sun Microsystems, decided to exploit long-studied
mathematical problems believed to be computationally
intractable. If a code were designed so that anyone who
wanted to break it would have to solve one of these
problems, that code would be effectively unbreakable. 
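
The most famous outgrowth of that insight, the
Diffie-Hellman key exchange, lets two parties agree on a
shared secret over a channel anyone can listen to; its
security rests on the presumed difficulty of the discrete
logarithm problem. A minimal sketch in Python, with
toy-sized parameters chosen for illustration only:

    import secrets

    # Public parameters. This prime (2**127 - 1) is far too small
    # for real security; modern systems use primes of 2,048+ bits.
    p = 2 ** 127 - 1
    g = 3

    a = secrets.randbelow(p - 2) + 1   # Alice's private exponent
    b = secrets.randbelow(p - 2) + 1   # Bob's private exponent

    A = pow(g, a, p)   # Alice sends A in the clear
    B = pow(g, b, p)   # Bob sends B in the clear

    # Each side raises the other's public value to its own secret
    # power; both arrive at g**(a*b) mod p. An eavesdropper who sees
    # only g, p, A and B must solve a discrete logarithm to recover it.
    assert pow(B, a, p) == pow(A, b, p)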

It was the start of what became a large-scale movement of
cryptography out of the secretive offices of the National
Security Agency and into the halls of academe and, later,
to makers of commercial cryptographic systems. 

And despite its efforts to limit the spread of the new
encryption methods, the National Security Agency was
ultimately unsuccessful. Hundreds of codes are now
available, and those who want to keep data secret can do so. 

"The debate was always about drawing the line between the
ability to gather foreign intelligence and to protect U.S.
intelligence and the ability to protect the computation and
communications infrastructure and to have privacy," said
Dr. Leonard Adleman, a professor of computer science at the
University of Southern California. He is the "A" in the
R.S.A. code, perhaps the most successful of these codes,
which he invented with Dr. Ronald Rivest of the
Massachusetts Institute of Technology and Dr. Adi Shamir,
who is now at the Weizmann Institute of Science in Israel. 
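
The R.S.A. scheme rests on the difficulty of factoring the
product of two large primes. A toy sketch in Python, with
numbers absurdly small compared with the hundreds of digits
that real keys use:

    # Textbook-sized toy R.S.A., for illustration only.
    p, q = 61, 53
    n = p * q                  # public modulus
    phi = (p - 1) * (q - 1)    # Euler's totient of n
    e = 17                     # public exponent, coprime to phi
    d = pow(e, -1, phi)        # private exponent (Python 3.8+)

    message = 42
    ciphertext = pow(message, e, n)    # anyone can encrypt with (n, e)
    assert pow(ciphertext, d, n) == message   # only d decrypts

    # Recovering d from (n, e) alone would require factoring n into
    # p and q, which is believed infeasible when the primes are large.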

The decision to make powerful codes available to the
general public allowed people to protect their privacy and
businesses to protect their records and communications from
prying eyes. But, Dr. Adleman noted: "We do give up some of
our ability to gather foreign intelligence. Everyone
regrets that that is a byproduct." 

The lesson from cryptography, however, is that troubling
questions about the possible uses of new technology will
not stop the technologies from being publicly studied, Dr.
Merkle said. Nor should those worries stop scientists from
making their findings public, he added. 

It is an issue that also came up in another context, in
biology in the 1970's, when scientists, worried less about
terrorists than about natural disasters, asked whether they
should pursue a new technology. 

Molecular biologists had discovered how to remove genes
from one cell and put them in another. They realized that
they could do great good, turning bacteria into
drug-producing factories, for example. But they also
worried that they might accidentally create new and deadly
bacteria that could spread cancer as easily as a common
cold or create infections that no antibiotics could cure. 

Worried about the implications of their own work, a group
of these researchers met at the Asilomar Conference Center in
Pacific Grove, Calif., in 1975 to discuss how to proceed.
They decided to hold off on their experiments until they
could prove the work was safe. In just a few years, their
fears were assuaged and the work continued. 

With the Asilomar discussions as a model, a group of
scientists and others who worried about nanotechnology
formed a nonprofit institute, the Foresight Institute, based
in Los Altos, Calif. Its goal is to prepare society for the
transforming powers of new technologies, and, in
particular, of nanotechnology. Dr. Merkle is on its board
of directors. 

The institute's chairman, Dr. Drexler, originally thought
that the best thing to do would be never to disclose
nanotechnology's darker possibilities for fear it might
give terrorists ideas. But he soon realized that if he
could think of these abuses, others could too. So he
decided to try to help society prepare for the good uses of
the technology and to protect itself against the evil ones. 

Dr. Drexler, Dr. Merkle and others at the Foresight
Institute argue that openness is critical to developing
nanotechnology safely. 

"There's an argument that perhaps we could simply close our
eyes to new technology," Dr. Merkle said. "Occasionally,
people argue that if new technologies pose new risks we
should tell people they should not develop them." But then,
he said, society would be worse off. "Not only do we lose
the benefits of the new technology, but we also — and more
importantly — fail to understand what the new technology
means," Dr. Merkle said. "Then how can we defend ourselves
if someone else develops them?" 

Professor Reynolds, a member of the board of directors of
the Foresight Institute, agreed. "Barring some new
scientific law that makes nanotechnology infeasible, you're
going to have it sooner or later," he said. "There is a lot
of potential for abuse," he added, but if a ban on the
research were instituted, society would be "at the mercy of
whoever breaks the ban." Those who broke the ban would have
the weapons; the rest of the world would have no antidotes.

And yet, he added, the events of Sept. 11 show that
opponents may not need high-tech weapons to do grave harm. 

"We spend a lot of time worrying about extremely
sophisticated threats," he said. "But less sophisticated
threats can slip under the radar. People who want to hurt
you can find a way to do it." 

Nonetheless, said Dr. Adleman, the troubling questions that
he faced two decades ago about controlling encryption remain.
And the cryptography debates offer lessons for the
development of other technologies. 

"Now is an appropriate time to see if we made the right
choice," Dr. Adleman said. "The issue remains the same:
Where do you draw that line?" 

It is not easy. "Who's smart enough to make these
decisions?" Dr. Adleman asked. "You need the wisdom of
Solomon."

