New Scientist vol 159 issue 2144 - 25 July 1998, page 56
Talking Nets edited by James Anderson and Edward Rosenfeld, MIT Press, £31.95/$39.95, ISBN 0262011670
TYPE "neural network" into an Internet search engine, say AltaVista, and it returns with more than 70 000 items. Then pick a site. Take, for instance, Los Alamos Nonlinear Adaptive Computation at http://www-xdiv.lanl.gov/XCM/neural/. Head for the FAQs at ftp://ftp.sas.com/pub/neural/FAQ.html and you'll discover what a neural network is, or, more properly, what an artificial neural network is because all this work relates to mathematical models rather than biological neural networks.
Neural networks are a hot topic, and have been since the late 1980s, although their history can be traced to the 1950s. This programming technique, inspired by biologists' understanding of how neurons work, tries to simulate some of the brain's functions in computer code and silicon, and has proved successful in pattern recognition and classification. Neural networks complement conventional computing methods in medicine, engineering and commerce.
In Talking Nets, James Anderson and Edward Rosenfeld interview 17 researchers who have played a significant role in the development of neural networks to trace the history of this now popular style of computing from the late 1940s to 1997. But this is no sterile list of dates and discoveries. It illuminates how and why people become fascinated with trying to simulate brain processes. It also tells the story of "how science is actually done, including the false starts, and the Darwinian struggle for jobs, resources and reputation".
Anderson is a scientist working with neural networks, so appears as both interviewee and interviewer. Rosenfeld says he's "a journalist who has chronicled neural net development for more than a decade". Both have the confidence of their subjects, who are happy to provide professional and personal details.
The 17 are a distinguished bunch. Among them are Michael Arbib, David Rumelhart, Terrence Sejnowski and Paul Werbos. All have extensive experience teaching and lecturing, and, say the editors, "Most of them shaped their own narrative, with only modest prodding from us."
One of these "prods" is their inquiry into why their subjects became interested in the first place. They asked all the interviewees about their childhood, and any activities that might have indicated a future interest in neural networks. For example, Leon Cooper, director of the Institute for Brain and Neural Systems at Brown University, Rhode Island, described how his interest in science developed at a very early age. He had his own laboratory for chemistry and electrical experiments by the age of 10. Cooper's interest in science continued during his school years at the Bronx High School of Science and through his first degree at Columbia University, where he decided to major in physics. Cooper says: "I'd always had an interest in deep philosophical ideas. Mind-body problems, that sort of thing." During his career as a researcher in nuclear physics, when an opportunity arose to broaden his field and consider "the nature of the thinking process", Cooper jumped at the chance.
Many other interviewees also highlight their early forays into science and their quests for a greater understanding of human consciousness. Some cite amateur radio as the trigger for their interest in how things worked. For example, Teuvo Kohonen, now at the Helsinki University of Technology in Finland, built radio sets as a youth. He also reveals that gestalt psychology was one of his childhood hobbies, because he was interested in "what was going on in the head". Kohonen says that by the age of 16 he had begun to think about how simple neural networks might work.
Other interviewees toyed with the arts first. At 78, Jerome Lettvin, emeritus professor of electrical and biomedical engineering at MIT, is the oldest interviewee. He wanted to become a poet as a young child, but his mother pushed him into medicine. This led him to neurology and ultimately to neural networks.
Bart Kosko (the youngest interviewee at 38), professor of engineering at the University of Southern California, Los Angeles, first went to college on a music scholarship. His ideas about music, however, differed from those of his professors. Kosko eventually managed to gain two degrees in what had, until then, been his hobbies: philosophy and economics. His philosophy degree led him to work on fuzzy logic, and the economics led to research in neural networks.
Once the interviewees reach their postgraduate studies and careers, the focus switches to technological achievements, mentors and detractors. The people responsible for founding cybernetics and the principles of modern brain theory—Norbert Wiener, Warren McCulloch, Walter Pitts and Frank Rosenblatt—appear in nearly all the interviews. Some, such as Lettvin, were their contemporaries, while others merely recall the anecdotes, notably the famous falling out between Wiener and McCulloch. Rosenblatt's development of the simple perceptron—a single-layer neuron model—also crops up many times, hardly surprising since it caused such controversy. Perceptrons, written in the 1960s by Marvin Minsky and Seymour Papert, argued that if you limit your network to one layer in depth you cannot do very much with it unless you use very complicated "neurons".
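For readers who have never met one, the perceptron's mechanics fit in a few lines. The sketch below, in Python, is this reviewer's illustration rather than anything from the book; the training tasks, learning rate and epoch count are assumptions chosen to make the point.

    # A single-layer perceptron trained with Rosenblatt's learning rule.
    # (Illustrative sketch; parameters are assumptions, not from Talking Nets.)
    def train_perceptron(samples, epochs=20, lr=0.1):
        w, b = [0.0, 0.0], 0.0
        for _ in range(epochs):
            for x, target in samples:
                out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
                err = target - out          # compare actual and desired output
                w[0] += lr * err * x[0]     # nudge the weights towards the target
                w[1] += lr * err * x[1]
                b += lr * err
        return w, b

    # AND is linearly separable, so the rule converges on a correct classifier...
    AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    # ...but XOR is not: no single layer of such neurons can compute it.
    XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
    print(train_perceptron(AND))

Trained on AND, the weights settle; trained on XOR, they shuffle forever. That inability to learn anything that is not linearly separable is precisely the limitation Minsky and Papert formalised.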
The publication of Perceptrons is viewed by many in Talking Nets as one of the reasons why research money switched during the 1970s from systems that "learnt" to explicitly programmed, knowledge-based approaches: the Dark Ages for neural networks, when interest in the field's techniques and ideas was low. This remained the case until 1985, when David Rumelhart, Geoffrey Hinton and others published a paper describing back-propagation, a method of supervised training that teaches the network using examples by comparing its actual and expected outputs. Several interviewees point out that other researchers, such as Werbos, had proposed the approach earlier, but failed to win recognition for it.
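Back-propagation's trick is to add a hidden layer and push the output error backwards through it. The following sketch, again this reviewer's own Python illustration (the network size, learning rate and number of passes are assumptions), trains a two-layer network on the XOR task that defeats the simple perceptron. Most random initialisations converge; a few get stuck, which is part of the method's folklore.

    import math
    import random

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    random.seed(1)
    # Two inputs feed two hidden neurons, which feed one output neuron.
    w_h = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
    b_h = [0.0, 0.0]
    w_o = [random.uniform(-1, 1) for _ in range(2)]
    b_o = 0.0

    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
    lr = 0.5
    for _ in range(10000):
        for x, t in data:
            # Forward pass: compute the actual output.
            h = [sigmoid(w_h[j][0] * x[0] + w_h[j][1] * x[1] + b_h[j])
                 for j in range(2)]
            y = sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + b_o)
            # Backward pass: compare actual and expected outputs, then
            # propagate the error back to apportion blame to each weight.
            d_y = (t - y) * y * (1 - y)
            d_h = [d_y * w_o[j] * h[j] * (1 - h[j]) for j in range(2)]
            for j in range(2):
                w_o[j] += lr * d_y * h[j]
                w_h[j][0] += lr * d_h[j] * x[0]
                w_h[j][1] += lr * d_h[j] * x[1]
                b_h[j] += lr * d_h[j]
            b_o += lr * d_y

    # After training, the network's outputs should sit near 0 or 1 as required.
    for x, t in data:
        h = [sigmoid(w_h[j][0] * x[0] + w_h[j][1] * x[1] + b_h[j])
             for j in range(2)]
        y = sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + b_o)
        print(x, "expected", t, "got", round(y, 2))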
Controversies, false starts, funding fashions and coincidental meetings combine in this book to tell the story of real science. Talking Nets is aimed as much at those who are interested in scientific discovery and the drive for progress as at those who want to build a faster computer. And you'll find Talking Nets an enjoyable and informative read whether or not you can describe the workings of back-propagation or the failures of the perceptron.
Claire Neesham
Claire Neesham is a freelance journalist