T.P. Zahn, R. Izak, K. Trott, P. Paschke, Dept of Neuroinformatics, P.O. Box 100665, T. U. Ilmenau, D-98684 Ilmenau, Germany.
The presented system strives to model the human ability to separate unknown sounds under natural conditions. Following biological principles, information is coded into spike trains at an early receptive level and processed throughout the network. A single type of pulse-propagating integrate-and-fire cell forms the basis of the entire design. Acoustical attention is guided by the novelty of a sound. Locally distributed Hebbian learning and fully parallel real-time processing are achieved by an analog VLSI implementation. To optimize for different network sizes and connectivities, a module-based layout generator has been developed. The system is designed to interact with visual and sensorimotor information on the autonomous robot platform MILVA.
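To make the two central ingredients concrete, the following is a minimal sketch of a leaky integrate-and-fire cell driven by input spike trains, combined with a purely local Hebbian weight update triggered at each output spike. It is an illustrative software model only, not the analog VLSI circuit described in the paper; all names and parameters (tau_m, v_thresh, eta, the coincidence-based update rule, the normalization) are assumptions chosen for demonstration.

```python
import numpy as np

# Illustrative sketch: one leaky integrate-and-fire (LIF) cell with a local
# Hebbian update. Parameters and the exact update rule are assumptions for
# demonstration, not taken from the described hardware system.

def simulate_lif(spike_inputs, weights, dt=1e-3, tau_m=20e-3,
                 v_thresh=1.0, v_reset=0.0, eta=0.01):
    """Run one LIF cell over binary input spike trains.

    spike_inputs: array of shape [steps, n_inputs] with 0/1 entries.
    Returns the output spike train and the Hebbian-adapted weights.
    """
    steps, n_inputs = spike_inputs.shape
    v = 0.0
    out_spikes = np.zeros(steps)
    w = weights.copy()

    for t in range(steps):
        # Leaky integration of the weighted input spikes.
        i_syn = np.dot(w, spike_inputs[t])
        v += dt / tau_m * (-v) + i_syn

        if v >= v_thresh:
            out_spikes[t] = 1.0
            v = v_reset
            # Local Hebbian update: strengthen weights of inputs active at
            # the moment the cell fires (pre/post coincidence), then
            # renormalize to bound weight growth.
            w += eta * spike_inputs[t]
            w /= np.linalg.norm(w) + 1e-12

    return out_spikes, w


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    inputs = (rng.random((1000, 8)) < 0.05).astype(float)  # Poisson-like spikes
    w0 = rng.random(8) * 0.5
    spikes, w = simulate_lif(inputs, w0)
    print(f"output spikes: {int(spikes.sum())}, adapted weights: {np.round(w, 3)}")
```

Because both the neuron dynamics and the learning rule use only locally available quantities (membrane potential, incoming spikes, own output spike), a network of such cells can in principle be laid out as repeated identical modules, which is the property the module-based layout generator exploits.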