Sense | Flux |
---|---|
Sight | Electromagnetic waves in the visible spectrum |
Hearing | Pressure waves between about 30 Hz and 20,000 Hz |
Smell | Concentration of various airborne molecules |
Taste | Concentration of various ions etc. in saliva |
Touch | Pattern of pressure at points on the animal's surface |
Proprioception | Pattern of nerve impulses from muscles |
One can consider everyday computers to have a limited set of senses: they can interpret key depressions and mouse movements. More recent machines such as the iPhone, the iPad, and the multitude of related touch-screen-based devices have senses that allow them to detect gestures made on the screen. One aim of the work implied here is to increase this flexibility, for example by providing hearing and sight for computer systems. This would certainly increase the range of inputs available to the machines, and could conceivably make them easier to use. Clearly, more autonomous machines have more need of senses, if only to permit their (fragile) systems to survive in a hostile environment. On the other hand, Keating [1] makes the point that one needs to match the sensor sophistication to the machine's internal capacity and function.
At first sight, one might imagine that adding senses to a desktop computer would not be useful: however, if one compares the sophistication of the display with that of the input devices (keyboard and mouse), one rapidly realises that the input devices lag far behind. In the last ten years, screens have improved enormously in quality, yet the last real improvement in input devices (for desktop computers) was the mouse. (As noted above, palmtop machines now have touch-sensitive displays, and these can respond to gestures, including multiple-touch gestures: yet these have not really made their way in quantity to the realm of desktops, which is perhaps a little strange.) Keyboards have not altered materially in many years. Input based on sensing includes sound input (which in turn includes speech), visual input, and even intelligent usage of keyboard and mouse input: are key-depressions frequent or infrequent, often incorrect or always right, is mouse usage smooth or jerky, etc. (a toy sketch of this idea appears below). Indeed, one can imagine the desktop machine merging with the mobile robot to produce a synthesis in which the static computer becomes a thing of the past. The limitations on these machines are imposed primarily by our imaginations!
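To make the idea of intelligent use of keyboard and mouse input a little more concrete, here is a minimal sketch in Python. Everything in it (the event data, the function names, the particular measures) is invented for illustration; it simply shows the kind of low-level statistics such a system might compute from the raw event stream.

```python
from statistics import mean, stdev

def typing_stats(key_times):
    """Summarise keystroke timing: mean inter-key interval and its spread.

    `key_times` is a list of keystroke timestamps in seconds (hypothetical
    data; a real system would log these from the keyboard driver).
    """
    intervals = [b - a for a, b in zip(key_times, key_times[1:])]
    return mean(intervals), stdev(intervals)

def mouse_jerkiness(samples):
    """Rough smoothness measure for a mouse trajectory.

    `samples` is a list of (t, x, y) tuples.  A jerky trajectory shows a
    large spread of speeds relative to its mean speed, so we return the
    coefficient of variation of the sample-to-sample speed.
    """
    speeds = []
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        dt = t1 - t0
        if dt > 0:
            speeds.append(((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / dt)
    return stdev(speeds) / mean(speeds)

# Example: a hesitant typist and a rather jerky mouse trace.
print(typing_stats([0.0, 0.2, 0.45, 1.4, 1.6]))
print(mouse_jerkiness([(0.0, 0, 0), (0.1, 5, 0), (0.2, 50, 3), (0.3, 52, 4)]))
```

The point is only that the event stream already carries usable information: one could imagine an interface adapting itself (for example, to a hesitant typist or an unsteady mouse hand) on the basis of statistics like these.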
I recently wrote a review of neuromorphic systems for the proceedings of Brain Inspired Cognitive Systems 2008. (For older material, see the book Neuromorphic Systems: Engineering Silicon from Neurobiology, World Scientific, 1998: this book developed from the 1st European Workshop on Neuromorphic Systems (Stirling, 29-31 August 1997).)
The biggest group in Europe is the Institute of Neuroinformatics, jointly run by the University of Zurich and ETH Zurich. There is a page describing biologically based work at the Department of Artificial Intelligence at the University of Edinburgh.
My own primary work is on the auditory system. We are currently working with Edinburgh on the development of a novel multi-sensor microphone which can detect sounds using multiple independently gain-adjustable sensors based on MEMS technology (a sketch of the general idea appears below). In the past I also worked on the statistics of sound signals: this work aimed to discover what regularities there are in sounds, and (eventually) to use those regularities to guide processing.
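The design details of the microphone are not given here, but the general idea of combining several sensors with different gains can be sketched. The following Python fragment is purely illustrative: the gains, the clip level, and the fusion rule are my own assumptions, not the actual Stirling/Edinburgh design. For each sample it trusts the most sensitive sensor that has not clipped, which extends the usable dynamic range beyond that of any single sensor.

```python
import numpy as np

GAINS = [1.0, 10.0, 100.0]   # hypothetical per-sensor gains
CLIP = 1.0                   # hypothetical full-scale output of one sensor

def fuse(channels):
    """Fuse gain-ranged channels, where channels[i] ~ GAINS[i] * true_signal,
    hard-clipped at +/- CLIP.  Per sample, pick the highest-gain channel
    that is still in range, and divide out its gain."""
    out = np.empty(channels[0].shape)
    for n in range(out.size):
        for g, ch in sorted(zip(GAINS, channels), reverse=True):
            if abs(ch[n]) < CLIP:           # not clipped: trust this channel
                out[n] = ch[n] / g
                break
        else:
            out[n] = channels[0][n] / GAINS[0]   # all clipped: best effort
    return out

# Quiet-then-loud test tone: the quiet half benefits from the high-gain
# sensor, the loud half falls back to the low-gain one.
t = np.linspace(0, 1, 1000)
true = np.where(t < 0.5, 0.005, 0.5) * np.sin(2 * np.pi * 50 * t)
channels = [np.clip(g * true, -CLIP, CLIP) for g in GAINS]
print(np.max(np.abs(fuse(channels) - true)))   # small reconstruction error
```

In a real device the gains might be adjusted adaptively rather than fixed, but the sketch shows why independently gain-adjustable sensors are attractive: no single fixed-gain sensor could capture both halves of the test signal faithfully.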
A page of references, WWW pages, etc. is under construction.
If you have any difficulties accessing this page, or have any queries or suggestions arising from it, please email: