10 EMERGING TECHNOLOGIES THAT WILL CHANGE YOUR WORLD
Gregory T Huang, Lauren Gravitz, Ivan Amato, Wade Roush, et al. Technology Review. Cambridge: Feb 2004. Vol. 107, Iss. 1; pg. 32
Copyright Technology Review, Inc. Feb 2004
Universal Translation

Yuqing Gao is bilingual-and so is her computer. At IBM's Watson Research Center in Yorktown Heights, NY, the computer scientist, role-playing a doctor, speaks Mandarin Chinese into a personal digital assistant. In a few seconds, a pleasant female voice emanating from the device asks, in English, "What are your symptoms?" Gao's system, designed to help doctors communicate with patients, can be extended to other languages and situations. The ultimate goal, she says, is to develop "universal translation" software that gleans meaning from phrases in one language and conveys it in any other language, enabling people from different cultures to communicate.
Gao's work is at the forefront of escalating efforts to use mathematical models and natural-language-processing techniques to make computerized translation more accurate and efficient, and more adaptable to new languages. Distinct from speech recognition and synthesis, the technology behind universal translation has matured in recent years, driven in part by global business and security needs. "Advances in automatic learning, computing power, and available data for translation are greater than we've seen in the history of computer science," says Alex Waibel, associate director of Carnegie Mellon University's Language Technologies Institute, which supports several parallel efforts in the field.
Unlike commercial systems that translate Web documents word by word or work only in specific contexts like travel planning, Gao's software does what's called semantic analysis: it extracts the most likely meaning of text or speech, stores it in terms of concepts like actions and needs, and expresses the same idea in another language. For instance, the software translates the statement "I'm not feeling well" by first deciding that the speaker is probably sick, not suffering from faulty nerve endings; it then produces a sentence about the speaker's health in the target language. If enough semantic concepts are stored in the computer, it becomes easier to hook up a new language to the network: instead of having to program separate Chinese-Arabic and English-Arabic translators, for instance, you need only map Arabic to the existing conceptual representations.
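The interlingua idea can be sketched in a few lines of Python. Everything below-the phrases, the concept labels, the mappings-is invented for illustration and bears no relation to IBM's actual software; the point is only that each language plugs into a shared conceptual layer, so adding a language means writing one new mapping rather than one per language pair.

```python
# Toy interlingua translation: analysis maps a phrase to a language-neutral
# concept; generation renders that concept in the target language.

ANALYZE = {  # source phrase -> semantic concept (illustrative)
    "en": {"I'm not feeling well": ("HEALTH_STATE", "unwell")},
    "zh": {"我不舒服": ("HEALTH_STATE", "unwell")},
}
GENERATE = {  # semantic concept -> target phrase (illustrative)
    "en": {("HEALTH_STATE", "unwell"): "I'm not feeling well"},
    "es": {("HEALTH_STATE", "unwell"): "No me siento bien"},
}

def translate(phrase, src, tgt):
    concept = ANALYZE[src][phrase]   # semantic analysis (a lookup here)
    return GENERATE[tgt][concept]    # generation in the target language

print(translate("我不舒服", "zh", "es"))  # -> No me siento bien
```

A real system replaces the lookup tables with statistical analysis and generation models, but the pivot through a conceptual representation is the same.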
But it's easier said than done. Spoken-word translation requires converting speech to text, making sense of that text, and then using speech synthesis technology to output the translation. "Building a system for understanding text is more complex than building an atomic bomb," says Sergei Nirenburg, a computer scientist at the University of Maryland, Baltimore County, who pioneered efforts in machine translation in the 1980s. In addition, a practical system must adapt to speech recognition errors, unusual word combinations, and new situations-all automatically.
To address those challenges, Gao's team at IBM combined semantic analysis with statistical algorithms that enable computers to learn translation patterns by comparing streams of text with translations done by humans. As part of an initiative by the U.S. Defense Advanced Research Projects Agency, Gao's team developed Chinese-English translation software for a laptop computer and more recently adapted it to run on a PDA. "You can talk about a lot of different things. The system handles daily conversational needs," says Gao. It has a vocabulary of a few thousand words and worked with 90 percent accuracy in test conversations about medical care and logistics.
"The IBM system is impressive. I see them as setting the bar for the whole program," says Kristin Precoda, director of the Speech Technology and Research Laboratory at SRI International in Menlo Park, CA. Within the same DARPA initiative, Precoda's group has created a more specialized translation device: a one-way talking phrase book developed in collaboration with Middletown, RI-based Marine Acoustics that has been used by U.S. soldiers in Afghanistan, Iraq, and other countries to ask residents specific questions about medical care and many other topics.
While these prototypes look promising, making them practical will require more testing and programming. By late 2004, says Gao, the technology will be "robust and ready" for deployment; IBM is already in discussions with potential partners and customers. Eventually, universal translation could make business meetings, document research, and surveillance easier, while opening doors to international commerce and tourism. "In 10 years, everyone may have this on their handheld or cell phone," says Gao. At which point communicating in a new language could be as easy as plug and play. GREGORY T. HUANG
Synthetic Biology

Perched on the gently sloping hills of Princeton University's brick and ivy campus, Ron Weiss's biology laboratory is stocked with the usual array of microscopes, pipettes, and petri dishes. Less typical is its location: crammed into the Engineering Quadrangle, it stands out among the electrical and mechanical engineering labs. Yet it's an appropriate spot for Weiss. A computer engineer by training, he discovered the allure of biology during graduate school-when he began programming cells instead of computers. In fact, he began to program cells as if they were computers.
Weiss is one of just a handful of researchers delving into the inchoate field of synthetic biology, assiduously assembling genes into networks designed to direct cells to perform almost any task their programmers conceive. Combined with simple bacteria, these networks could advance biosensing, allowing inspectors to pinpoint land mines or biological weapons; add human cells, and researchers might build entire organs for transplantation. "We want to create a set of biological components, DNA cassettes that are as easy to snap together, and as likely to function, as a set of Legos," says Tom Knight, an MIT computer-engineer-cum-biologist, and the graduate advisor who turned Weiss on to the idea.
Researchers trying to control cells' behavior have moved beyond proof of concept, creating different genetic "circuits"-specially devised sets of interacting genes. James J. Collins, a biomedical engineer at Boston University, created a "toggle switch" that allows chosen functions within cells to be turned off and on at will. Michael Elowitz, a professor of biology and physics at Caltech, and Stanislas Leibler of Rockefeller University have created another circuit that causes a cell to switch between glowing and non-glowing phases as levels of a particular protein change-acting as a sort of organic oscillator and opening the door to using biological molecules for computing. Together with Caltech chemical engineer Frances Arnold, Weiss himself has used "directed evolution" to fine-tune the circuits he creates, inserting a gene network into a cell, selectively promoting the growth of the cells that best perform a selected task, and repeating the process until he gets exactly what he wants. "Ron is utilizing the power of evolution to design networks in ways so that they perform exactly the way you want them to," says Collins.
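The oscillator circuit's behavior can be captured in a minimal simulation. The sketch below integrates a simplified, protein-only version of the three-gene repressor ring behind Elowitz and Leibler's design: each protein represses the next gene in the cycle, and the odd-length loop never settles, so protein levels oscillate. The parameters (production rate, Hill coefficient) are illustrative, not measured values.

```python
# Simplified repressilator: three genes in a ring, each protein repressing
# the next gene's production (Hill-type repression), integrated with Euler
# steps. Parameters are illustrative; real models also track mRNA.

def repressilator(steps=60000, dt=0.001, alpha=200.0, n=3.0):
    p = [10.0, 1.0, 1.0]            # protein levels, asymmetric start
    trace = []
    for _ in range(steps):
        nxt = []
        for i in range(3):
            repressor = p[(i - 1) % 3]                   # upstream protein
            production = alpha / (1.0 + repressor ** n)  # repressed synthesis
            nxt.append(p[i] + dt * (production - p[i]))  # unit decay rate
        p = nxt
        trace.append(p[0])
    return trace

levels = repressilator()
# The trace of the first protein rises and falls repeatedly rather than
# settling to a steady value - the "organic oscillator" in miniature.
```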
Weiss has also designed sophisticated cellular systems without directed evolution. In one project, sponsored by the U.S. Defense Advanced Research Projects Agency, he has inserted a genetic circuit into normally nonsocial bacteria that enables them to communicate with each other by recognizing selected environmental cues and emitting a signal in response. He's working on another group of genes he calls an "algorithm," which allows the bacteria to figure out how far away a stimulus is and vary their reactions accordingly-in essence, creating a living sensor for almost anything. Spread bacteria engineered to respond to, say, dynamite, across a minefield, and if they're particularly close to a mine, they fluoresce green. If they're a little farther away, they fluoresce red, creating a bull's-eye that pinpoints the mine's location.
The most ambitious project Weiss has planned-though the furthest from realization-is to program adult stem cells. In the presence of the correct triggers, these unspecialized cells, found in many tissues in the body, will develop into specific types of mature cells. The idea, says Weiss, is that by prompting some cells to differentiate into bone, others into muscle, cartilage, and so on, researchers could direct cells to, say, patch up a damaged heart, or create a synthetic knee that functions better than any artificial replacement. But because mammalian cells are so complex, this is a much more daunting task than programming bacteria. So far, Weiss and his collaborators have managed to program adult stem cells from mice to fluoresce in different colors, depending on what molecule is added to their petri dish. Though these baby steps emphasize how much is left to do, they represent impressive strides in the manipulation of biology. "Because of the power and flexibility that it offers, synthetic biology will provide many benefits to existing fields," Weiss says. "But more importantly, it will also enable an array of applications in the future that we cannot even imagine today." As the synergy between engineers and biologists grows, so do fantastic possibilities for personalized medicine, sensing and control, defense-almost any field conceivable. LAUREN GRAVITZ
Nanowires

Few emerging technologies have offered as much promise as nanotechnology, touted as the means of keeping the decades-long electronics shrinkfest in full sprint and transfiguring disciplines from power production to medical diagnostics. Companies from Samsung Electronics to Wilson Sporting Goods have invested in nanotech, and nearly every major university boasts a nanotechnology initiative. Red hot, even within this R&D frenzy, are the researchers learning to make the nanoscale wires that could be key elements in many working nanodevices.
"This effort is critical for the success of the whole [enterprise of] nanoscale science and technology," says nanowire pioneer Peidong Yang of the University of California, Berkeley. Yang has made exceptional progress in fine-tuning the properties of nanowires. Compared to other nanostructures, "nanowires will be much more versatile, because we can achieve so many different properties just by varying the composition," says Charles Lieber, a Harvard University chemist who has also been propelling nanowire development.
As their name implies, nanowires are long, thin, and tiny-perhaps one-ten-thousandth the width of a human hair. Researchers can now manipulate the wires' diameters (from five to several hundred nanometers) and lengths (up to hundreds of micrometers). Wires have been made out of such materials as the ubiquitous semiconductor silicon, chemically sensitive tin oxide, and light-emitting semiconductors like gallium nitride.
This structural and compositional control means "we essentially can make anything we want to," says Lieber, who cofounded Palo Alto, CA-based Nanosys (to which Yang also consults) to develop nanowire-based devices. The wires can be fashioned into lasers, transistors, memory arrays, perhaps even chemical-sensing structures akin to a bloodhound's famously sensitive sniffer, notes James Ellenbogen, head of the McLean, VA-based nanosystems group at federally funded Mitre. Many of these applications require organizing nanowires into larger structures, a technical challenge that Ellenbogen credits Yang with pushing forward more than anyone.
To make the wires, Yang and his colleagues use a special chamber, inside which they melt a film of gold or another metal, forming nanometer-scale droplets. A chemical vapor, such as silicon-bearing silane, is emitted over the droplets, and its molecules decompose. In short order, those molecules supersaturate the molten nanodroplets and form a nanocrystal. As more vapor decomposes onto the metal droplet, the crystal grows upward like a tree.
Doing this simultaneously on millions of metallic drops-perhaps arranged in specific patterns-allows scientists to organize massive numbers of nanowires. Yang has already grown forests of gallium nitride and zinc oxide nanowires that emit ultraviolet light, a trait that could prove useful for "lab on a chip" devices that quickly and cheaply analyze medical, environmental, and other samples.
By introducing different vapors during the growth process, Yang has also been able to vary the wires' composition, creating complex nanowires "striped" with alternating segments of silicon and the semiconductor silicon germanium. The wires conduct heat poorly but electrons well-a combination suited for thermoelectric devices that convert heat gradients into electrical currents. "An early application might be cooling computer chips," Yang predicts. Such devices might eventually be developed into highly efficient power sources that generate electricity from cars' waste heat or the sun's heat.
Difficult tasks remain, such as making electrical connections between the minuscule wires and the other components of any system. Still, Yang estimates there are now at least 100 research groups worldwide devoting significant time to overcoming such obstacles, and commercial development efforts have already begun. Last year, Intel, which is working with Lieber, revealed that nanowires are part of its long-term chip planning. Smaller firms such as Nanosys and QuMat Technologies, a startup now renting space at Lund University in Sweden, are betting that nanowires will be essential components of the products they hope to sell one day, from sensors for drug discovery and medical diagnosis to flat-panel displays and superefficient lighting. When this catalogue of nanowired gizmos finally hits the market, Yang and his colleagues will have made no small contribution. IVAN AMATO
Bayesian Machine Learning
When a computer scientist publishes genetics papers, you might think it would raise colleagues' eyebrows. But Daphne Koller's research using a once obscure branch of probability theory called Bayesian statistics is generating more excitement than skepticism. The Stanford University associate professor is creating programs that, while tackling questions such as how genes function, are also illuminating deeper truths about the long-standing computer science conundrum of uncertainty-learning patterns, finding causal relationships, and making predictions based on inevitably incomplete knowledge of the real world. Such methods promise to advance the fields of foreign-language translation, microchip manufacturing, and drug discovery, among others, sparking a surge of interest from Intel, Microsoft, Google, and other leading companies and universities.
How does an idea conceived by an 18th-century minister (Thomas Bayes) help modern computer science? Unlike older approaches to machine reasoning, in which each causal connection ("rain makes grass wet") had to be explicitly taught, programs based on probabilistic approaches like Bayesian math can take a large body of data ("it's raining," "the grass is wet") and deduce likely relationships, or "dependencies," on their own. That's crucial because many decisions programmers would like to automate-say, personalizing search engine results according to a user's past queries-can't be planned in advance; they require machines to weigh unforeseen combinations of evidence and make their best guesses. Says Intel research director David Tennenhouse, "These techniques are going to impact everything we do with computers-from user interfaces to sensor data processing to data mining."
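Bayes' rule itself is simple enough to show in a few lines. The sketch below uses the article's rain-and-grass example with a tiny invented data set: conditional probabilities are estimated directly from observations, and the rule then inverts them to reason from effect (wet grass) back to cause (rain).

```python
# Estimate conditional probabilities from data, then apply Bayes' rule:
# P(rain | wet) = P(wet | rain) * P(rain) / P(wet).

observations = [  # (rain, grass_wet) pairs; data invented for illustration
    (True, True), (True, True), (True, False),
    (False, False), (False, False), (False, True), (False, False),
]

def prob(pred, given=lambda e: True):
    """Fraction of observations satisfying pred, among those matching given."""
    pool = [e for e in observations if given(e)]
    return sum(1 for e in pool if pred(e)) / len(pool)

p_rain = prob(lambda e: e[0])
p_wet = prob(lambda e: e[1])
p_wet_given_rain = prob(lambda e: e[1], given=lambda e: e[0])

# The data alone reveal the dependency: grass is far likelier to be wet
# when it rains than when it doesn't.
p_rain_given_wet = p_wet_given_rain * p_rain / p_wet
print(round(p_rain_given_wet, 3))  # -> 0.667
```

Full Bayesian networks chain many such dependencies together in a graph, but each edge is learned and queried with exactly this kind of arithmetic.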
Koller unleashed her own Bayesian algorithms on the problem of gene regulation-a good fit, since the rate at which each gene in a cell is translated into its corresponding protein depends on signals from a myriad of proteins encoded by other genes. New biomedical technologies are providing so much data that researchers are, paradoxically, having trouble untangling all these interactions, which is slowing the search for new drugs to fight diseases from cancer to diabetes. Koller's program combs through data on thousands of genes, testing the probability that changes in the activity of certain genes can be explained by changes in the activity of others. The program not only independently detected well-known interactions identified through years of research but also uncovered the functions of several previously mysterious regulators. "People are limited in their ability to integrate many different pieces of evidence," says Koller. "Computers have no such limitation."
Of course, Koller isn't alone in the struggle to cope with uncertainty. But according to David Heckerman, manager of the Machine Learning and Applied Statistics Group at Microsoft Research, she has uniquely extended the visual models used by Bayesian programmers-typically, graphs showing objects, their properties, and the relationships among them-so that they can represent more complex webs of dependencies. Predicting an AIDS patient's response to a medication, for example, depends on knowing how prior patients responded-but also on the particular strains of the virus the patients carried, which strains are drug resistant, and a multitude of other factors. Older Bayesian programs couldn't handle such multilayered relationships, but Koller found ways to "represent the added structure and reason with it and learn from it," says Heckerman.
Researchers are adapting such methods for an armada of practical applications. Among them: robots that can autonomously map hazardous, abandoned mines and programs under development at Intel that interpret test data on the quality of semiconductor wafers. In addition, several graduates of Koller's lab have joined Google, where they are using Bayesian methods to find and exploit patterns in the vast amount of interconnected data on the Web.
Programs that employ Bayesian techniques are already hitting the market: Microsoft Outlook 2003, for instance, includes Bayesian office assistants. English firm Agena has created Bayesian software that recommends TV shows to satellite and cable subscribers based on their viewing habits; Agena hopes to deploy the technology internationally. "These things sound far out," says Microsoft researcher Eric Horvitz, who, with Heckerman, is a leading proponent of probabilistic methods. "But we are creating usable tools now that you'll see in the next wave of software." WADE ROUSH
T-Rays

With the human eye responsive to only a narrow slice of the electromagnetic spectrum, people have long sought ways to see beyond the limits of visible light. X-rays illuminate the ghostly shadows of bones, ultraviolet light makes certain chemicals shine, and near-infrared radiation provides night vision. Now researchers are working to open a new part of the spectrum: terahertz radiation, or t-rays. Able to easily penetrate many common materials without the medical risks of x-rays, t-rays promise to transform fields like airport security and medical imaging, revealing not only the shape but also the composition of hidden objects, from explosives to cancers.
In the late 1990s, Don Arnone and his group at Toshiba's research labs in Cambridge, England, were eyeing t-rays as an alternative to dental x-rays. The idea was that t-rays, operating in the deep-infrared region just before wavelengths stretch into microwaves, would be able to spot decay without harmful ionizing radiation. In tests, the researchers fired powerful but extremely short pulses of laser light at a semiconductor chip, producing terahertz radiation (so called because it has frequencies of trillions of waves per second). Passing through gaps or different thicknesses of material changes the rays' flight time, so by measuring how long each t-ray took to pass through an extracted tooth and reach a detector, the researchers were able to assemble a 3-D picture of the tooth.
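The arithmetic behind time-of-flight imaging is straightforward: a pulse crossing a material of refractive index n arrives later than one crossing the same distance through air, and that delay encodes thickness. The sketch below works through the numbers with an assumed index chosen for illustration, not a measured value for tooth enamel.

```python
# Back-of-envelope t-ray time-of-flight: the extra delay of a pulse that
# crosses a material, relative to the same distance through air, scales
# with thickness and refractive index. Index value is illustrative.

C = 3.0e8        # speed of light in vacuum, m/s
N_SAMPLE = 2.6   # assumed terahertz refractive index of the material

def delay_ps(thickness_mm, n):
    """Extra transit time versus air, in picoseconds."""
    d = thickness_mm * 1e-3               # mm -> m
    return (n - 1) * d / C * 1e12         # s -> ps

# A 2 mm layer at this index delays the pulse by roughly 10.7 ps -
# easily resolved, since the probe pulses are far shorter than that.
print(round(delay_ps(2.0, N_SAMPLE), 1))
```

Scanning the beam across the object and recording the delay at each point yields the depth map that becomes a 3-D image.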
Toshiba soon decided that the technique, while promising, didn't really fit its business. So in 2001 the company spun off a new venture, TeraView, with Arnone as CEO. Last August, TeraView started selling evaluation versions of a t-ray scanner, with major production planned to begin in a year or two. The machine looks-and works-much like a photocopier. An object sits on the imaging window, the t-ray beam passes across it, a detector measures the transmitted rays, and a screen displays the image. A separate probe arm scans objects that won't fit on the window.
Xi-Cheng Zhang, director of the Center for Terahertz Research at Rensselaer Polytechnic Institute, warns that the technology is far from mature. However, he notes, "we cannot afford not to investigate it." Indeed, several firms are already testing the TeraView scanner. Consumer electronics companies could use t-rays to check devices for manufacturing flaws. Food processors could probe the water content of sealed packages to ensure freshness. In fact, any sealed container can be probed for quality-assurance purposes. "Every factory in the world that uses a plastic or cardboard box could use one of these things, in principle," says Daniel Mittleman, a terahertz researcher at Rice University. But that's just the beginning.
Security seems another natural application. Because different chemical structures absorb them differently, t-rays could be used to identify hidden materials. TeraView is in talks with both the U.K. and U.S. governments to develop a scanner that could be used alongside metal detectors. "You can do things like look at razor blades in coat pockets or plastic explosives in shirt pockets," Arnone says. The company is building a library of spectral fingerprints of different materials.
T-ray systems might also be useful for identifying skin cancers or, with further development, breast cancers. They could show the shape of tumors and help doctors excise diseased tissue more accurately. "Because tumors tend to retain more water, they show up very brightly in terahertz images," Arnone says. "[T-rays] may fill important gaps between x-ray, MRI, and the naked eye of the physician."
Other companies are getting into the act. Japanese camera maker Nikon has developed its own t-ray scanner. Ann Arbor, MI, startup Picometrix recently sold NASA a scanner to search for gaps in space shuttles' foam insulation. And laser manufacturer Coherent in Santa Clara, CA, is one of several groups trying to develop cheaper, more compact laser sources that will make t-ray systems easier to build. In the part of the spectrum between the domains of cell phones and lasers, t-rays could shed light on mysteries hidden from even today's most technologically enhanced eyes. NEIL SAVAGE
Distributed Storage

Whether it's organizing documents, spreadsheets, music, photos, and videos or maintaining regular backup files in case of theft or a crash, taking care of data is one of the biggest hassles facing any computer user. Wouldn't it be better to store data in the nooks and crannies of the Internet, a few keystrokes away from any computer, anywhere? A budding technology known as distributed storage could do just that, transforming data storage for individuals and companies by making digital files easier to maintain and access while eliminating the threat of catastrophes that obliterate information, from blackouts to hard-drive failures.
Hari Balakrishnan is pursuing this dream, working to free important data from dependency on specific computers or systems. Music-sharing services such as KaZaA, which let people download and trade songs from Internet-connected PCs, are basic distributed-storage systems. But Balakrishnan, an MIT computer scientist, is part of a coalition of programmers who want to extend the concept to all types of data. The beauty of such a system, he says, is that it would provide all-purpose protection and convenience without being complicated to use. "You can now move [files] across machines," he says. "You can replicate them, remove them, and the way in which [you] get them is unchanged." With inability to access data sometimes costing companies millions in revenue per hour of downtime, according to Stamford, CT-based Meta Group, a distributed-storage system could dramatically enhance productivity.
Balakrishnan's work centers on "distributed hash tables," an update on a venerable computer-science concept. Around since the 1950s, hash tables provide a quick way to organize data: a simple mathematical operation assigns each file its own row in a table; the row stores the file's location. Such tables are now ubiquitous, forming an essential part of most software.
In the distributed-storage scheme pursued by Balakrishnan and his colleagues, files are scattered around the Internet, as are the hash tables listing their locations. Each table points to other tables, so while the first hash table searched may not list the file you want, it will point to other tables that will eventually-but still within milliseconds-reveal the file's location. The trick is to devise efficient ways to route data through the network-and to keep the tables up to date. Get it right and distributed hash tables could turn the Internet into a series of automatically organized, easily searchable filing cabinets. Balakrishnan says, "I view distributed hash tables as the coming future" of networked storage.
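A stripped-down sketch shows the hop-until-owner idea, loosely in the spirit of ring-based designs such as MIT's Chord. All names, identifiers, and sizes below are invented for illustration; real systems use much larger identifier spaces and keep routing shortcuts so that lookups take far fewer hops than this naive successor-chasing.

```python
# Toy distributed hash table: nodes sit on a hash ring, each owning the
# keys between its predecessor's id and its own. A lookup follows
# successor pointers (each one a network hop, in reality) to the owner.

import hashlib

RING = 2 ** 16  # identifier space (illustratively small)

def h(key):
    """Hash a key onto the ring."""
    return int(hashlib.sha1(key.encode()).hexdigest(), 16) % RING

class Node:
    def __init__(self, ident):
        self.id = ident
        self.successor = self.predecessor = None
        self.store = {}

    def owns(self, key_id):
        lo, hi = self.predecessor.id, self.id
        return lo < key_id <= hi if lo < hi else key_id > lo or key_id <= hi

def build_ring(ids):
    nodes = [Node(i) for i in sorted(ids)]
    for a, b in zip(nodes, nodes[1:] + nodes[:1]):
        a.successor, b.predecessor = b, a
    return nodes

def find(start, key):
    """Follow successor pointers until reaching the key's owner."""
    node = start
    while not node.owns(h(key)):
        node = node.successor
    return node

ring = build_ring([100, 20000, 40000, 60000])
find(ring[0], "song.mp3").store["song.mp3"] = "file bytes"
print(find(ring[3], "song.mp3").store["song.mp3"])  # -> file bytes
```

Note that a lookup started from any node reaches the same owner; that location-independence is what lets files outlive any particular machine.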
Balakrishnan's work is part of IRIS, the Infrastructure for Resilient Internet Systems project, a collaboration among researchers at MIT, the University of California, Berkeley, the International Computer Science Institute in Berkeley, CA, New York University, and Rice University. The effort, funded by the National Science Foundation, has no director (Balakrishnan always uses "we" and "us" when describing the work). Its research includes several distributed-storage projects, including OceanStore, which seeks to prove the basic concepts of distributed-storage networks (see "The Internet Reborn," TR October 2003). Another MIT researcher, Frans Kaashoek, is developing a prototype that automatically backs up data by routinely taking file system "snapshots" and distributing them around the Internet.
It will be at least five years before the impact of IRIS becomes clear. Balakrishnan says the group still has to figure out how to track file updates across multiple storage sites and whether distributed hash tables should be built into the Internet foundation or incorporated into individual applications-as well as the answers to basic security questions.
But it's the fundamental power of the technology that excites many computer scientists. "What's striking about it is its huge variety of applications," says Sylvia Ratnasamy, a researcher at Intel's laboratory at Berkeley who is exploring ways that distributed storage might change the basic operation of the Internet. "Not very many technologies have that broad potential."
Stay tuned. Turning the Internet into a filing cabinet may be just step one. MICHAEL FITZGERALD
RNA Interference

From heart disease to hepatitis, cancer to AIDS, a host of modern ailments are triggered by our own errant genes-or by those of invading organisms. So if a simple technique could be found for turning off specific genes at will, these diseases could-in theory-be arrested or cured. Biochemist Thomas Tuschl may have found just such an off switch in humans: RNA interference (RNAi). While working at Germany's Max Planck Institute for Biophysical Chemistry, Tuschl discovered that tiny double-stranded molecules of RNA designed to target a certain gene can, when introduced into human cells, specifically block that gene's effects.
Tuschl, now at Rockefeller University in New York City, first presented his findings at a meeting in Tokyo in May 2001. His audience was filled with doubters who remembered other much hyped RNA techniques that ultimately didn't work very well. "They were very skeptical and very critical," recalls Tuschl. What the skeptics didn't realize was that RNAi is much more potent and reliable than earlier methods. "It worked the first time we did the experiment," Tuschl recalls. Within a year, the doubts had vanished, and now the technique has universal acceptance-spawning research at every major drug company and university and likely putting Tuschl on the short list for a Nobel Prize.
The implications of RNAi are breathtaking, because living organisms are largely defined by the exquisitely orchestrated turning on and off of genes. For example, a cut on a finger activates blood-clotting genes, and clot formation in turn shuts them down. "Just about anything is possible with this," says John Rossi, a molecular geneticist at the City of Hope National Medical Center in Duarte, CA, who advises Australian RNAi startup Benitec. "If you knock out gene expression, you could have big impacts on any disease, any infectious problem." Pharmaceutical companies are already using RNAi to discover drug targets, by simply blocking the activity of human genes, one by one, to see what happens. If, for instance, a cancer cell dies when a particular gene is shut down, researchers can hunt for drugs that target that gene and the proteins it encodes. Screening the whole human genome this way "is not complicated," Tuschl points out.
Now drug companies, along with biotech startups and academic researchers, are seeking to use RNAi to treat disease directly. In fact, Tuschl cofounded one such startup, Alnylam Pharmaceuticals in Cambridge, MA (see "The RNA Cure?" TR November 2003), which hopes to create RNAi drugs to treat cancer, AIDS, and other diseases. For example, silencing a key gene in the HIV virus could stop it from causing AIDS; knocking out the mutated gene that causes Huntington's could halt the progression of the disease; and turning off cancer genes could shrink tumors. "It's going to be a very, very powerful approach," says Rossi.
The interference process works by preventing the gene from being translated into the protein it encodes. (Proteins do most of the real work of biology.) Normally, a gene is transcribed into an intermediate "messenger RNA" molecule, which is used as a template for assembling a protein. When a small interfering RNA molecule is introduced, it binds to the messenger, which cellular scissors then slice up and destroy.
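The targeting step amounts to complementary base pairing, which a toy sketch can illustrate. The sequences below are invented, and real interference involves protein machinery (the RISC complex) and tolerates no such simplification; the sketch only shows how a short guide strand singles out one site on a messenger RNA.

```python
# Toy RNAi target matching: the siRNA guide strand base-pairs with a
# complementary stretch of messenger RNA, marking that site for cleavage.
# Sequences are invented for illustration.

PAIR = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna):
    """The strand that base-pairs with rna, read 5' to 3'."""
    return "".join(PAIR[base] for base in reversed(rna))

def find_target(guide, mrna):
    """Position where the guide pairs with the mRNA, or -1 if none."""
    return mrna.find(reverse_complement(guide))

mrna = "AUGGCUACGGAUUCAGGCAUAA"
guide = reverse_complement("ACGGAUUCA")  # guide complementary to one site

print(find_target(guide, mrna))  # -> 6, the position of the matched site
```

The specificity of the method comes from this pairing: a guide designed against one gene's message leaves every other transcript untouched.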
The biggest hurdle to transforming RNAi from laboratory aide to medicine is delivering the RNA to a patient's cells, which are harder to access than the individual cells used in lab experiments. "That's the major limitation right now," says Rossi, who nevertheless predicts that RNAi-based therapies could be on the market "within maybe three or four years." Tuschl is more cautious. He thinks the technique's first applications-say, local delivery to the eye to treat a viral infection-may indeed come that soon. But he says it could take a decade or longer to develop a system that effectively delivers RNAi drugs to larger organs or the whole body.
Tuschl's lab is one of many now teasing out the precise molecular mechanisms responsible for RNA interference's remarkable potency, hoping to help realize the payoffs of RNA drugs sooner rather than later. Presuming the tiny RNA molecules can fulfill the promise of their fast start, traditional molecular biology will be turned on its head. KEN GARBER
Power Grid Control
Power grids carry the seeds of their own destruction: massive flows of electricity that can race out of control in just seconds, threatening to melt the very lines that carry them. Built in the days before quick-reacting microprocessors and fiber optics, these networks were never designed to detect and squelch systemwide disturbances. Instead, each transmission line and power plant must fend for itself, shutting down when power flows spike or sag. The shortcomings of this system are all too familiar to the 50 million North Americans from Michigan to Ontario whose lights went out last August: as individual components sense trouble and shut down, the remaining power flows become even more disturbed, and neighboring lines and plants fall like multimillion-dollar dominoes. Often-needless shutdowns result, costing billions, and the problem is only expected to get worse as expanding economies push more power onto grids.
Christian Rehtanz thinks the time has come for modern control technology to take back the grid. Rehtanz, group assistant vice president for power systems technology with Zurich, Switzerland-based engineering giant ABB, is one of a growing number of researchers seeking to build new smarts into grid control rooms. These engineers are developing hardware and software to track electric flows across continent-wide grids several times a second, identify disturbances, and take immediate action. While such "wide area" control systems remain largely theoretical, Rehtanz and his ABB colleagues have fashioned one that is ready for installation today. If their design works as advertised, it will make power outages 100 times less likely, protecting grids against everything from consumption-inducing heat waves to terrorism. "We can push more power through the grid while, at the same time, making the system more predictable and more reliable," says Rehtanz.
Real-time control systems are a natural outgrowth of a detection system pioneered in the 1990s by the U.S.-government-operated Bonneville Power Administration, which controls grids in the Pacific Northwest. In this system, measurements from sensors hundreds to thousands of kilometers apart are coded with Global Positioning System time stamps, enabling a central computer to synchronize data and provide an accurate snapshot of the entire grid 30 times per second-fast enough to glimpse the tiny power spikes, sags, and oscillations that mark the first signs of instability. An earlier version of Bonneville's system helped explain the dynamics of the 1996 blackout that crippled 11 western U.S. states, Alberta, British Columbia, and Baja California; western utilities subsequently rejiggered their operations and have thus far avoided a repeat. "I know the people back east sure wish they had one right now," says Carson Taylor, Bonneville's principal engineer for transmission and an architect of its wide-area system.
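The synchronization trick is simple in outline: because every reading carries a GPS time stamp, the central computer can bin readings from far-flung sensors into grid-wide frames. A minimal Python sketch of the idea (sensor names and values are invented):

```python
from collections import defaultdict

def snapshots(readings):
    """Group GPS-time-stamped sensor readings into grid-wide frames.

    readings: (tick, sensor_id, value) tuples, where tick is the GPS
    time stamp aligned to the 1/30-second sampling grid. Returns
    {tick: {sensor_id: value}} -- one coherent snapshot per tick,
    no matter how far apart the sensors are.
    """
    frames = defaultdict(dict)
    for tick, sensor, value in readings:
        frames[tick][sensor] = value
    return dict(frames)

# Readings arrive out of order from sensors thousands of kilometers
# apart, but the shared time base lines them up.
demo = snapshots([
    (0, "seattle", 1.00),
    (1, "seattle", 1.01),
    (0, "portland", 0.98),
])
print(demo[0])  # → {'seattle': 1.0, 'portland': 0.98}
```

Stacking 30 such frames per second is what lets analysts see the small oscillations that precede instability.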
But Rehtanz is eager to take the next step, transforming these investigative tools into real-time controls that detect and squelch impending blackouts. The technical challenge: designing a system that can respond quickly enough. "You have half a minute, a minute, maybe two minutes to take action," says Rehtanz. That requires spartan calculations that can crunch the synchronized sensor data, generate a model of the system to detect impending disaster, and select an appropriate response, such as turning on an extra power plant. Control algorithms designed by Rehtanz and his colleagues employ a highly simplified model of how a grid works, but one that they believe is nevertheless capable of instantly identifying serious problems brewing-and on a standard desktop computer. ABB engineers are now studying how such algorithms could protect a critical power corridor linking Switzerland and Italy that failed last September, blacking out most of Italy.
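ABB has not published its algorithms, but the shape of such spartan calculations can be hinted at with a toy decision rule: take the latest synchronized snapshot, compare each line's loading against its limit, and escalate the response as the margin shrinks. Everything in this Python sketch (line names, thresholds, actions) is invented for illustration:

```python
def assess(frame: dict[str, float], limits: dict[str, float],
           margin: float = 0.9) -> str:
    """Pick an action from one synchronized snapshot of line flows.

    A crude stand-in for the real control logic: find the most
    heavily loaded line relative to its limit and escalate.
    """
    worst = max(frame[line] / limits[line] for line in frame)
    if worst >= 1.0:
        return "shed load"               # already past the limit
    if worst >= margin:
        return "dispatch reserve plant"  # trouble brewing, act now
    return "no action"

limits = {"corridor north": 100.0, "corridor south": 100.0}
print(assess({"corridor north": 96.0, "corridor south": 70.0}, limits))
# → dispatch reserve plant
```

A calculation this cheap runs in microseconds on a desktop machine; the hard part, as the article notes, is making the simplified grid model trustworthy enough to act on.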
Many utilities are already implementing elements of real-time grid control-for example, installing digital network controllers that can literally push power from one line to another or suppress local spikes and sags (see "Power Gridlock," TR July/August 2001). Tied into a wide-area control scheme, these network controllers could perform more intelligently. Still, it may be years before a utility takes the plunge and fully commits to Rehtanz's algorithms. It's not just that utilities are conservative about tinkering with untried technologies; cash for transmission upgrades is thin in today's deregulated markets, where it's unclear which market players-power producers, transmission operators, or government regulators-should pay for reliability. What is clear, however, is that the evolution toward real-time, wide-area sensing and control has begun.
Microfluidic Optical Fibers
The blazing-fast Internet access of the future-imagine downloading movies in seconds-might just depend on a little plumbing in the network. Tiny droplets of fluid inside fiber-optic channels could improve the flow of data-carrying photons, speeding transmission and improving reliability. Realizing this radical idea is the goal of University of Illinois physicist John Rogers, whose prototype devices, called microfluidic optical fibers, may be the key to superfast delivery of everything from e-mail to Web-based computer programs, once "bandwidth" again becomes the mantra.
Rogers began exploring fluid-filled fibers more than two years ago as a researcher at Lucent Technologies' Bell Labs. While the optical fibers that carry today's phone and data transmissions consist of glass tubing that is flexible but solid, Rogers employs fibers bored through with microscopic channels, ranging from one to 300 micrometers in diameter, depending on their use. While Rogers didn't invent the fibers, he and his team showed that pumping tiny amounts of various fluids into them-and then controlling the expansion, contraction, and movement of these liquid "plugs"-causes the optical properties of the fibers to change. Structures such as tiny heating coils printed directly on the fiber precisely control the size, shape, and position of the plugs. Modifying the plugs' properties enables them to perform critical functions, such as correcting error-causing distortions and directing data flows more efficiently, thus boosting bandwidth far more cheaply than is possible today.
Today, these tune-up jobs are partly done by gadgets that convert light signals into electrons and then back into photons. This "removal of light" invariably causes distortions and losses. Rogers's idea is to do these jobs more directly by replacing today's gadgets with sections of fluid-filled optical fibers strategically placed in the existing network. Making sections of the fiber itself tunable could eliminate some of these "light-removing" components, Rogers says. "Anytime you can avoid the need to remove light, there is a big cost advantage, reliability advantage, and increase in capacity."
Other approaches to making fibers that actively tune light-as opposed to serving as passive pipes-are also under development. But with the telecom sector still in crash mode, leaving thousands of kilometers of underground fiber-optic cables unused, nobody expects a rapid embrace of new optical communications technologies. "These kinds of things are needed when you get to the next-generation optical networks," notes Dan Nolan, a physicist at Corning, a leading maker of optical fiber. "Right now you don't really need them, because the next generation has been put off."
Few, though, question that a push to a much faster Internet will eventually return. And when it does, Nolan says, devices like Rogers's could come into play. "I consider it very important research," Nolan adds. Though the timing for commercialization is uncertain, the fibers have already moved beyond lab demonstrations; prototype devices are being tested at both Lucent and its spinoff company OFS, a Norcross, GA-based optical-fiber manufacturer.
Still, the idea of adding a plumbing system to optical networks is jarring to some researchers. "Success will ultimately depend on how well you can put in the solution without disrupting the ends of the fiber," says Axel Scherer, a physicist at Caltech. "The question is, how do you do that in an easy and inexpensive way." MIT physicist John Joannopoulos holds similar reservations. But if the fluidics system works, Joannopoulos says, "it gives you extra control. Once you have that, then you can make devices out of these fibers, not just use them to transport something."
The marriage of optics and tiny flows of fluid also holds promise for other applications. One possibility Rogers is investigating: a tool that could use light to detect substances like disease-indicating proteins in blood, useful for medical diagnosis or drug discovery. Even if it doesn't speed your downloads, Rogers's plumbing might still improve doctors' checkups. DAVID TALBOT
Personal Genomics
Three billion. That's the approximate number of DNA "letters" in each person's genome. The Human Genome Project managed a complete, letter-by-letter sequence of a model human-a boon for research. But examining the specific genetic material of each patient in a doctor's office by wading through those three billion letters just isn't practical. So to achieve the dream of personalized medicine-a future in which a simple blood test will determine the best course of treatment based on a patient's genes-many scientists are taking a shortcut: focusing on only the differences between people's genomes.
David Cox, chief scientific officer of Perlegen Sciences in Mountain View, CA, is turning that strategy into a practical tool that will enable doctors and drug researchers to quickly determine whether a patient's genetic makeup results in greater vulnerability to a particular disease, or makes him or her a suitable candidate for a specific drug. Such tests could eventually revolutionize the treatment of cancer, Alzheimer's, asthma-almost any disease imaginable. And Cox, working with some of the world's leading pharmaceutical companies, has gotten an aggressive head start in making it happen.
Genetic tests can already tell who carries genes for certain rare diseases like Huntington's, and who will experience the toxic side effects of a few particular drugs, but each of these tests examines only one or two genes. Most common diseases and drug reactions, however, involve several widely scattered genes, so researchers want to find ways to analyze an individual's whole genome. Since most genetic differences between individuals are attributable to single-letter variations called single-nucleotide polymorphisms, or SNPs, Cox believes that identifying genomewide patterns of these variants that correspond to particular diagnoses or drug responses is the quickest, most cost-effective way to make patients' genetic information useful. "I would like to know whether genetics is going to be practical while I'm still alive," says Cox.
To help answer that question, in 2000 Cox left his position as codirector of the Stanford University Genome Center to cofound Perlegen, which has moved vigorously to bring SNP analysis to the clinic. The company has developed special DNA wafers-small pieces of glass to which billions of very short DNA chains are attached-that can be used to quickly and cheaply profile the millions of single-letter variants in a patient's genome. Perlegen researchers first created a detailed map of 1.7 million of the most common SNPs. Based on this map, they then designed a wafer that can detect which version of each one of these variants a specific patient has.
Now, in partnership with major pharmaceutical makers, the company is comparing genetic patterns found in hundreds of people with, for example, diabetes to those of people without it. With Pfizer, Perlegen is examining genetic contributions to heart disease; for Eli Lilly, Bristol-Myers Squibb, and GlaxoSmithKline, Perlegen researchers are hunting for SNP patterns that correlate to particularly adverse or favorable reactions to different drugs. The next step is to use this information to design a simple test that discerns telltale SNP patterns. With such a test, doctors could screen patients to identify the best drug regimen for each.
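At its core, this case/control comparison is a counting exercise. The Python sketch below compresses it to the point of caricature: each genotype is a string with one letter per SNP position, and positions are ranked by how strongly any letter's frequency differs between the two groups. Real studies involve millions of SNPs and proper statistics; all names and data here are invented.

```python
def rank_snps(cases: list[str], controls: list[str]) -> list[int]:
    """Rank SNP positions by case/control frequency difference.

    cases, controls: genotype strings, one letter per SNP position
    (a drastic simplification of a wafer readout). Positions whose
    letters best separate the two groups come first.
    """
    def freq(group, pos, allele):
        return sum(g[pos] == allele for g in group) / len(group)

    scores = []
    for pos in range(len(cases[0])):
        alleles = {g[pos] for g in cases + controls}
        best = max(abs(freq(cases, pos, a) - freq(controls, pos, a))
                   for a in alleles)
        scores.append((best, pos))
    return [pos for score, pos in sorted(scores, reverse=True)]

cases = ["AG", "AG", "AT"]     # e.g. patients with the disease
controls = ["CG", "CG", "CT"]  # matched healthy subjects
print(rank_snps(cases, controls))  # → [0, 1]
```

Position 0 ("A" in every case, never in controls) tops the ranking, while position 1 carries no signal: exactly the kind of telltale pattern a diagnostic test would be built around.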
Some biologists argue that a truly accurate picture of an individual's genetics requires decoding his or her entire genome, down to every last DNA letter; but for now that is a daunting technical challenge that remains prohibitively expensive. Cox counters that SNP analysis is the quickest way to practically bring genetics and medicine together, and many geneticists share his vision of ultimately analyzing SNPs right in a doctor's office. "I think this will become a routine thing in the future," says George Weinstock, codirector of the Human Genome Sequencing Center at the Baylor College of Medicine in Houston, TX. And, adds Weinstock, "Perlegen is one of the leaders in the field."
Within a few years, genetic screening to predict a patient's drug response may become commonplace. To make that happen, it will take tools like the ones Cox and his coworkers at Perlegen are already beginning to employ. CORIE LOK