Lectures
The talks are listed in order of presentation.
Towards Neurally Integrated High Degrees of Freedom Prosthetic Limbs
Ralph Etienne-Cummings, Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218
Abstract:

Recently, neurally integrated prosthetic limbs have received a great deal of attention. Realizing these devices requires recording and decoding neural information in the peripheral and central nervous systems. Driving the prosthesis further requires an understanding of the motor control and output afforded by the spinal cord. Hence, this talk covers the neuroscience and robotic control of both lower and upper limb prostheses. We argue that there are significant commonalities in the neural control of locomotion (lower limb) and dexterous manipulation (upper limb), although the two initially appear very different. We show that similar cross-coupled, oscillator-based models can be used to control both types of prosthetic systems.
For locomotion, we develop neuromorphic integrated circuit (IC) models of spinal circuits and show the first example of a silicon model of spinal circuits being used to restore walking in an in vivo paralyzed cat preparation. For dexterous manipulation, we demonstrate ~90% accuracy in decoding surface EMG signals for individual finger movements, and show that similar accuracy can be obtained for a transradial amputee. A virtual reality environment containing a biophysically realistic model of an arm is used to display the decoding efficacy. Lastly, we show that decoding of cortical (motor cortex, M1) neural activity can also be used to actuate individual fingers of a prosthetic (robotic) hand. Neural recordings from a macaque monkey are decoded and used to “play” a piano. This demonstrates what could be done if real-time recordings from chronically implanted electrode arrays were available.
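The cross-coupled oscillator models mentioned above can be illustrated with a minimal sketch: two phase oscillators, one per limb, each pulled toward antiphase with its partner. The coupling rule, gain, and frequency below are illustrative assumptions for exposition, not the fabricated spinal-circuit IC.

```python
import math

def simulate_cpg(steps=2000, dt=0.001, freq=1.0, k=5.0):
    """Two coupled phase oscillators, a minimal stand-in for the
    cross-coupled spinal oscillator models: each oscillator advances at
    its intrinsic stepping frequency and is pulled toward antiphase
    (half a cycle apart) with its partner."""
    omega = 2 * math.pi * freq           # intrinsic stepping rate (rad/s)
    left, right = 0.0, 1.0               # start well away from antiphase
    for _ in range(steps):
        d_left = omega + k * math.sin(right - left - math.pi)
        d_right = omega + k * math.sin(left - right - math.pi)
        left += d_left * dt
        right += d_right * dt
    return (right - left) % (2 * math.pi)

phase_lag = simulate_cpg()   # settles near pi: the limbs alternate
```

Starting near synchrony, the pair converges to a stable half-cycle lag — the alternating left/right pattern of stepping — and the same scheme generalizes to larger oscillator networks for coordinating fingers.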
source:lectures08/Etienne-Cummings.pdf


Neurorobotics
M. Anthony Lewis, University of Arizona

Neurorobots are a new simulation tool for understanding the behavior of humans and animals. We draw a distinction between neurorobots and biologically inspired robots, which seek to apply animal solutions to robot problems. We review classic robotics, including kinematics, inverse kinematics, statics, dynamics, and trajectory planning, and note that the geometric approach has been very successful. However, the brain does not use trigonometric functions to generate behavior. Neuronal networks provide a source of biological theories, and even simple ANNs have the power to subsume much of the geometric method. We review the basics of computational models of neurons and demonstrate practical models that allow the construction of massive neuronal networks using biologically realistic neurons. We review classic works in neurorobotics, including those of Beer, Taga, and Kimura. Finally, we give preliminary results from a new robot model of the human leg, use it to confirm a hypothesis about the function of biarticulate muscles, and generate new questions about biarticulate muscles for investigation.
source:lectures08/Tony_Lewis.pdf

Auditory Filters – recent work connecting psychophysics, physiology, mechanics, and neuromorphic implementations
Richard F. Lyon, Google Research, Mountain View CA

Abstract:

A long history of auditory filter models based on parameterized shapes in the frequency domain to predict psychophysical masked thresholds is here extended by a new shape based on a neuromorphic filter cascade. This pole–zero filter cascade (PZFC) is also an improvement on the Mead–Lyon all-pole filter cascade (APFC) that has been the mainstay of the neuromorphic hearing community, and maintains the good properties of those filters, as well as those of the all-pole and one-zero gammatone filters (APGF and OZGF), while providing a more accurate link to underlying traveling-wave physics in the cochlea. The linear-system parameterizations of these filters are furthermore easy to control dynamically to model the nonlinear adaptation of the cochlea to sound stimuli. Thus, psychophysical and physiological experiments and knowledge connect well to the kinds of filters that neuromorphs can build easily. These filters are also easy to use in the domain of digital computers, and are being applied to our machine-hearing research at Google.
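A toy version of one pole–zero cascade stage can be written down directly from its transfer function. The pole/zero placements and Q values below are illustrative assumptions, not the PZFC's fitted parameters.

```python
import cmath
import math

def stage_coeffs(f_pole, f_zero, fs, q_pole=2.0, q_zero=0.5):
    """One cascade stage: a resonant pole pair at f_pole and a damped
    zero pair at f_zero (both in Hz, sample rate fs)."""
    def conjugate_pair(f, q):
        r = math.exp(-math.pi * f / (q * fs))   # radius set by damping
        theta = 2 * math.pi * f / fs            # angle set by frequency
        return (1.0, -2.0 * r * math.cos(theta), r * r)
    # numerator (zeros), denominator (poles)
    return conjugate_pair(f_zero, q_zero), conjugate_pair(f_pole, q_pole)

def gain(b, a, f, fs):
    """Magnitude response |B/A| of one stage at frequency f."""
    z = cmath.exp(-2j * math.pi * f / fs)
    num = b[0] + b[1] * z + b[2] * z * z
    den = a[0] + a[1] * z + a[2] * z * z
    return abs(num / den)

b, a = stage_coeffs(f_pole=1000.0, f_zero=1400.0, fs=16000.0)
peak = gain(b, a, 1000.0, 16000.0)    # near the pole: amplified
skirt = gain(b, a, 4000.0, 16000.0)   # two octaves up: attenuated
```

Cascading many such stages with gradually decreasing pole frequencies multiplies their responses, producing the sharp high-side cutoff characteristic of cochlear traveling waves; moving the pole radii dynamically is one way to model the cochlea's nonlinear gain adaptation.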
source:lectures08/Telluride_2008_Lyon.pdf

Analogue VLSI implementations of two dimensional, nonlinear, active cochlea models
Tara Julia Hamilton, The University of Sydney, Australia

Abstract:

For over 20 years the cochlea has been modelled in silicon. In this presentation I briefly discuss the physiology of the human cochlea with an emphasis on its nonlinear, active properties. I then give a historical overview of the silicon cochleae that have been fabricated. These are divided into four categories: one-dimensional (1D) silicon cochleae, two-dimensional (2D) silicon cochleae, alternative silicon cochlea designs, and silicon cochleae that incorporate the nonlinear and active properties of the mammalian cochlea.
In the second part of the talk I discuss my work on building nonlinear, active silicon cochleae. I introduce my active silicon cochlea model and the importance of the Hopf bifurcation and parametric amplification in this model. I then briefly discuss the three integrated circuits that were fabricated with versions of this active cochlea model. This presentation concludes with results from the three fabricated silicon cochleae and comparisons of these results with biological data.
source:lectures08/Tara_Hamilton.pdf

Visual Processing
David J. Heeger, Department of Psychology and Center for Neural Science, New York University, New York, NY 10003

Abstract:

I will review the basics of vision science and visual neuroscience, with a focus on two guiding principles (functional specialization and computational theory) driving research in the field.
Selected references:

Heeger DJ, Simoncelli EP, Movshon JA, Computational Models of Cortical Visual Processing, Proc. Nat'l Acad. Sci. USA, 93:623-627, 1996.

Carandini M, Heeger DJ, Movshon JA, Linearity and normalization of simple cells of the macaque primary visual cortex, Journal of Neuroscience, 17:8621-8644, 1997.

Simoncelli EP & Heeger DJ, A model of neuronal responses in visual area MT, Vision Research, 38:743-761, 1998.

source:lectures08/Heeger-visual-processing.pdf

Neuromorphogenic Adaptation
Jimmy Abbas, Arizona State University

Neural systems often adapt in response to the patterns of activity across the network of neurons. This type of adaptation, or activity-dependent plasticity, is likely to be the primary process involved as a child learns how to ride a bike or as a spinal cord injured person re-learns how to walk. Several rehabilitation technologies are designed to promote adaptation in neural systems and recovery of function by tapping into these processes of activity-dependent plasticity. This talk will describe the use of neuromorphic technologies to promote adaptation in a rehabilitation setting. We have designed and developed a system to control movements using electrical stimulation of paralyzed muscles that is based on a model of the spinal cord circuitry responsible for controlling locomotion. The rationale for this approach is that neuromorphic technology that operates like a nervous system may be readily integrated with the biological system and may be highly effective in promoting adaptation. Results will be presented from evaluations of this technology in computer simulation studies, in a rat model of spinal cord injury, and in studies on people with spinal cord injury.
source:[to be uploaded to SVN]

Neurocinematics
David J. Heeger, Department of Psychology and Center for Neural Science, New York University, New York, NY 10003

Abstract:

Real-world events unfold at different time scales, and therefore cognitive and neuronal processes must likewise occur at different time scales. We present a novel procedure that identifies brain regions responsive to sensory information accumulated over different time scales. We measured fMRI activity while observers viewed silent films presented forward (the original intact films), backward, or piecewise-scrambled in time. We then compared the reliability of the responses in each brain area to the intact films with that obtained when the temporal structure was disrupted. Early visual areas (e.g., V1 and MT+) exhibited high response reliability regardless of temporal structure. In contrast, the reliability of responses in several higher brain areas, including the superior temporal sulcus (STS), precuneus, posterior lateral sulcus (LS), temporal parietal junction (TPJ), and frontal eye field (FEF), was affected by information accumulated over longer time scales. These regions showed highly reproducible responses for repeated forward, but not for backward or piecewise-scrambled, presentations. Responses in LS, TPJ, and FEF depended on information accumulated over long durations (~36 s), STS and precuneus over intermediate durations (~12 s), and early visual areas over short durations (<4 s). The dependence of the fMRI responses on temporal order could not be attributed to differences in eye movements: the measured eye positions were independent of temporal order, and were equally reliable for forward and backward presentations. That is, observers fixated on similar image locations for similar durations, but in the opposite order, when the films were presented backward. Moreover, the reproducibility of the eye movements suggests a comparable level of engagement while observers viewed the forward and backward films, removing the potential concern that the unreliable responses to the backward films arose because observers paid less attention to them.
We found a clear dissociation between the reliability of the responses and response amplitudes. For example, in the LS and TPJ we observed large response amplitudes for all films, but the responses to the scrambled and time-reversed films were much less reliable than the responses to the intact forward films. In a separate behavioral study we confirmed that playing the films backward had a great impact on their intelligibility. We interpret the strong response amplitudes as reflecting incessant processing, aimed to extract meaningful information from the stimuli. At the same time, the low reliability of the responses to temporally disrupted movies indicates a failure to attain a consistent/stereotypical sequence of neural (and cognitive) states. We conclude that, similar to the known cortical hierarchy of spatial receptive fields, there is a hierarchy of progressively longer temporal receptive windows in the human brain. Response reliability provides information about neural processing that is complementary to that derived from more traditional measurements of response amplitude, and can uncover phenomena that response amplitudes alone do not reveal, such as the long temporal receptive windows found in this study.

I will also describe a new method for assessing the effect of a given film on viewers’ brain activity. Brain activity was measured using functional magnetic resonance imaging (fMRI) during free viewing of films, and inter-subject correlation analysis (ISC) was used to assess similarities in the spatiotemporal responses across viewers’ brains during movie watching. Our results demonstrate that some films can exert considerable control over brain activity and eye movements. However, this was not the case for all types of motion picture sequences, and the level of control over viewers’ brain activity differed as a function of movie content, editing, and directing style. We propose that ISC may be useful to film studies by providing a quantitative neuroscientific assessment of the impact of different styles of filmmaking on viewers’ brains, and a valuable method for the film industry to better assess its products. Finally, we suggest that this method brings together two separate and largely unrelated disciplines, cognitive neuroscience and film studies, and may open the way for a new interdisciplinary field of “neurocinematic” studies.
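Inter-subject correlation can be sketched as a leave-one-out average: correlate each viewer's response time course with the mean time course of all the other viewers. The toy implementation below assumes a single regional time course per subject.

```python
def pearson(x, y):
    """Plain Pearson correlation between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    dx = [a - mx for a in x]
    dy = [b - my for b in y]
    num = sum(a * b for a, b in zip(dx, dy))
    den = (sum(a * a for a in dx) * sum(b * b for b in dy)) ** 0.5
    return num / den

def isc(timecourses):
    """Leave-one-out inter-subject correlation for one brain region:
    each viewer's time course is correlated with the average of the
    remaining viewers, and the correlations are averaged."""
    rs = []
    for i, tc in enumerate(timecourses):
        others = [t for j, t in enumerate(timecourses) if j != i]
        mean_other = [sum(vals) / len(vals) for vals in zip(*others)]
        rs.append(pearson(tc, mean_other))
    return sum(rs) / len(rs)
```

High ISC means the film drives all viewers' brains through the same trajectory; temporally scrambled or reversed films would show the drop in reliability described above even when response amplitudes stay large.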

References:

Hasson U, Yang E, Vallines I, Heeger DJ, Rubin N, A hierarchy of temporal receptive windows in human cortex, Journal of Neuroscience, 28:2539-2550, 2008.

Hasson U, Landesman O, Knappmeyer B, Vallines I, Rubin N, Heeger DJ, Neurocinematics: The neuroscience of films, Projections: The Journal for Movies and Mind, 2:1-26, 2008.

source:lectures08/Heeger-neurocinematics.pdf

Distributed learning and memory in a social animal: Interactions among neural and social networks within a honey bee colony
Brian Smith, Arizona State University

Neural networks in the insect antennal lobe (AL), and in its functional mammalian analog, the olfactory bulb (OB), transform afferent sensory information about odors. This transformation takes place by way of excitatory and at least two different types of inhibitory connectivity in the networks of the AL. Recently it has been demonstrated that the transformation is dynamic, in that the activity evolves through a sequence of patterns over time. This may push initially similar activity patterns farther apart to improve odor discriminability. Our bioimaging and behavioral studies of the honey bee support this hypothesis. However, another type of modulation also occurs in AL and OB networks. It involves feedback related to the presence or absence of reinforcement associated with odors. In the honey bee, a well-studied pathway involves a modulatory neuron – called VUMmx1 – that releases the biogenic amine octopamine in several association centers of the brain. VUMmx1 is stimulated by sucrose presentation to the bee’s mouthparts, and disruption of an important octopamine receptor in the downstream pathway disrupts associative olfactory learning. It is unclear how reinforcement-related activity of VUMmx1 changes the representation of odors in the AL. We therefore performed imaging studies in animals conditioned to discriminate binary mixtures that differed in the ratios of the two components. When compared to naïve animals, we show that the temporal patterns differ in the two groups such that the CS+ and CS- mixtures are pushed farther apart in the conditioned animals. Use of an antibody generated against the octopamine receptor targeted by RNAi shows that the inhibitory networks are a major target for octopamine-driven plasticity. Furthermore, using a different conditioning protocol that involves repeated stimulation with an odor without reinforcement, the pattern for a mixture containing that odor is biased away from that odor.
This result implies the existence of another as yet unidentified modulatory pathway for plasticity in the AL. In conclusion we suggest that one function of plasticity in early olfactory processing is to tune the AL to filter out less relevant odors, that is, those not associated with reinforcement. The networks become biased to pass through with increased reliability odors that are important at any given point in time. We propose that this plasticity is important for processes of memory consolidation in downstream pathways such as the mushroom bodies.

source:[to be uploaded to SVN]

The mind of a worm
Ernst Niebur, Johns Hopkins University

This is a short introduction to the structure and function of the nematode nervous system, with particular emphasis on Caenorhabditis elegans. We will discuss the biomechanics of undulatory locomotion and present a computational model of somatic motor control. Central pattern generators are unlikely to underlie the control of undulatory locomotion in nematodes. Instead, we propose that the body shape is instantaneously read out and used to generate the appropriate spatio-temporal muscle activation patterns needed for locomotion in either direction.
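The proprioceptive read-out scheme can be caricatured in a few lines: each segment's muscle command is copied from the body's current curvature a little toward the head, so an existing bend is reproduced one segment further back on every update and the wave travels down the body without any central pattern generator. The single-segment shift, instantaneous relaxation, and zeroed head drive below are illustrative assumptions.

```python
def propagate_bend(curvature, shift=1):
    """One proprioceptive update: segment i takes the curvature currently
    found `shift` segments toward the head. Head segments (i < shift)
    would be set by a hypothetical head circuit; here they are zeroed."""
    n = len(curvature)
    return [curvature[i - shift] if i >= shift else 0.0 for i in range(n)]

body = [0.0, 0.0, 1.0, 0.0, 0.0, 0.0]   # a single bend near the head
body = propagate_bend(body)              # the bend moves one segment tailward
```

Reading out from the tail side instead of the head side reverses the traveling wave, which is how an instantaneous-read-out model supports locomotion in either direction.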

source:lectures08/nematodes-overview-lecture.pdf

Commercializing Auditory Neuroscience
Lloyd Watts, Audience

Neuroscience knowledge and computing technology have advanced to the point that it is now possible to build realistic, high-resolution, real-time models of significant portions of the brain. My work has focused on extracting the principles of operation of key processing modules in the auditory pathway, in active collaboration with neuroscientists, so as to construct a plausible working model of this important sensory system. I will demonstrate real-time, high-resolution models of the Cochlea, Cochlear Nucleus, spatial localization systems based on the Superior Olivary Complex and Inferior Colliculus, and polyphonic pitch detection based on the ICC and Auditory Cortex, developed by the team at Audience.

I will also demonstrate Audience's two-microphone noise reduction system developed for the cell-phone market, based on the principles developed above.

I will report on algorithmic challenges and accomplishments (data representations, signal processing strategies, and how to validate that our models really are doing what the brain is doing), commercial and fundraising challenges (how to turn a long-term science project into a viable business with an exciting and predictable return on investment based on commercially successful products), and the long- term significance of building a realistic model of brain function beginning with the auditory pathway.

source:lectures08/Lloyd_Watts.pdf

Configurable Analog Signal Processing
Paul Hasler, GA Tech

We present the potential of configurable analog signal processing techniques for impacting low-power portable neuromorphic applications. The range of analog signal processing functions available creates many opportunities to combine these analog systems with digital signal processing systems for improved overall system performance. These approaches are enabled by dense, programmable analog techniques based on programmable transistors. We show experimental evidence of a factor of 1000 to 10,000 improvement in power efficiency for programmable analog signal processing compared to custom digital implementations. We describe past and recent results on Large-Scale Field-Programmable Analog Arrays (FPAAs), particularly devices that will be used during the workshop as part of other workgroups.

source:[to be uploaded to SVN]

Reverse-Engineering The Fly (An Engineer's Approach to the Fly Visual System)
Charles M Higgins, ECE / Neurobiology, University of Arizona

In this talk, I motivate why insects are excellent organisms for the study of the neuronal basis of behavior, and describe computational neuroethology experiments in my lab focusing on visual navigation. I provide background on the fly eye and the neurons of its visual system. My talk focuses on two specific projects, one in which behavioral experiments on honeybees have led to a neuronal model of visual navigation, and a second in which the neurons and muscles of a hawkmoth are used to control the motion of an autonomous robot.

source:lectures08/Higgins_Telluride08_talk.pdf

Every spike is sacred
John Harris, University of Florida

In this talk we point out the power and bandwidth savings due to temporal codes vs. rate codes for both neuromorphic and biological systems. We review three separate projects ongoing at the University of Florida to study the role of these spike codes. First, the time-to-first-spike imager is discussed. In this chip, extremely wide dynamic range is obtained at video rate by only transmitting one spike per pixel per frame. Other rate coding systems require many orders of magnitude more spikes to transmit the same information and cannot achieve the same dynamic range. Second we discuss a neural implant project where a special temporal code is used to reduce the power and bandwidth of the frontend. The integrate and fire representation allows one channel to be viewed at high resolution while the other channels use a specially designed pulse-based feature extraction technique. Finally we review our work in biologically-inspired speech recognition where the degree of phase locking in the auditory nerve is used as a feature. Experiments with spoken vowels show improved recognition rates in high noise situations. Through all three projects we see that the number of spikes can be dramatically reduced to achieve ultra low power and bandwidth. Also, the detailed timing of each spike can convey valuable information, unlike in rate codes.
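The time-to-first-spike idea is easy to sketch: each pixel integrates light and fires a single spike when it crosses threshold, so spike latency is inversely proportional to intensity and the frame is fully described by one spike per pixel. The constant below stands in for a threshold-over-sensitivity term and is an illustrative assumption, not the chip's actual parameter.

```python
def encode_ttfs(intensities, c=1.0):
    """Latency = c / intensity: brighter pixels fire earlier.
    Exactly one spike per pixel per frame, whatever the dynamic range."""
    return [c / i for i in intensities]

def decode_ttfs(latencies, c=1.0):
    """Recover each pixel's intensity from its first-spike time."""
    return [c / t for t in latencies]

# six decades of intensity, still one spike per pixel
pixels = [1e-3, 1e-1, 1.0, 1e2, 1e3]
spikes = encode_ttfs(pixels)
recovered = decode_ttfs(spikes)
```

A rate code covering the same six decades at comparable precision would need many orders of magnitude more spikes per frame, which is the power-and-bandwidth argument the talk makes.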

source:lectures08/every_spike_is_sacred.pdf

Motion in Flight
Shih-Chii Liu, INI, UZH/ETHZ

The on-board requirements for small, light, low power sensors and electronics on autonomous micro aerial vehicles limit the computational power and speed available for processing sensory signals. One useful visual signal for such platforms is optical flow information which can also be used for distance estimation. Custom Very Large Scale Integrated (VLSI) sensor chips which perform focal-plane motion estimation are beneficial for such platforms because of properties including compactness, continuous-time operation, and low-power dissipation. I will give an overview of analog VLSI motion detection chips that have been designed over the last 20 years and contrast the pros and cons of the different algorithms implemented on these chips. I will also describe experiments on robot navigation using these chips.

source:lectures08/shihtelluridemotion08.pdf

Why do android sheep sing 'Electric Dreams'? (or The Market Evolution of Biomorphic Design)
Mark Tilden, Wowwee

Since 2001, over 20 million biomorphic entertainment robots have been sold, constituting one of the most successful “toy experiments” in history. Though primarily for kids, these robots were also designed with education and inexpensive science hacking in mind. The inventor will showcase his history in China, the basics of robotic mass-manufacture, and the secrets built into all his robots that can be exploited for further neuromorphic experiments.

source:[to be uploaded to SVN]

Beverly: A Robot That Discovered what Caregivers Look Like
Javier R. Movellan, Temporal Dynamics of Learning Center, University of California, San Diego

There is strong experimental evidence that newborn infants orient towards human faces, even cartoon-like versions of faces. Most researchers agree that these preferences are probably innate and not learned. This is based on the belief that it is implausible that a few minutes of interaction with the world would provide enough information to learn preferences for abstract, cartoon-like versions of faces. Here we explore the computational plausibility of the "Rapid Learning Hypothesis".

To this end we built a robot (BEVERLY) with the appearance of a human baby and encouraged members of our laboratory to sporadically interact with it. BEVERLY was endowed with a machine learning algorithm for discovering visual concepts using images and with a system that could detect contingencies between auditory and visual signals. The contingency detection system provided the training labels to the visual object discovery system.

In less than 6 minutes of interaction with the world, the robot learned to locate people in novel images. In addition, it developed a preference for drawings of human faces over drawings of non-faces, even though it had never been exposed to such abstract face drawings before. During learning, the baby robot was never told whether or not people were present in the images, or whether people were of any particular relevance at all. It simply discovered that, in order to make sense of the contingencies observed between images and sounds, it was a good idea to develop feature detectors that discriminate well between the presence and absence of people.

The results illustrate that visual preferences of the type typically found in human neonates can be acquired very quickly, in a matter of minutes. Previous studies that were thought to provide evidence for innate cognitive modules may actually be evidence for rapid learning mechanisms in the neonate brain.

source:lectures08/MovellenNeuromorph08.pdf

Decoding Neural Activity at Multiple Spatial and Temporal Scales: The Science and Engineering of "Mind Reading"
Paul Sajda, Columbia University

From both a scientific and an engineering perspective, an open and exciting question is how best to infer the stimulus (sensory systems) or intent (cognitive systems) of an organism from its neural activity--i.e., "mind reading". There are many ways to measure such activity, all of which, given current acquisition methods, have their own spatial and temporal scales: spikes, field potentials, scalp potentials, magnetic fields, and hemodynamic responses. In this talk I will describe a general framework of spatio-temporal linear filtering which can be applied to all these classes of neural activity for decoding the stimulus and/or intent of the organism. I will focus on decoding of neural activity measurable non-invasively via scalp electrodes--i.e., the electroencephalogram (EEG). I will first describe the basic approach of learning spatio-temporal linear projections of the neural data and then describe some ways we are using these "linear recipes" to develop new types of brain-computer interfaces as well as to study basic questions in perceptual decision making.
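The spatio-temporal linear projection can be sketched with a toy discriminant: learn one weight per (channel, time) sample and reduce each trial to a single scalar y = Σ_c Σ_t w[c][t] x[c][t]. The mean-difference weighting below is a deliberately crude stand-in for the regularized logistic-regression projections used in practice.

```python
def fit_projection(trials_a, trials_b):
    """Weights = difference of class means at each (channel, time) sample.
    Each trial is a list of channels, each channel a list of time samples."""
    def class_mean(trials):
        n_ch, n_t = len(trials[0]), len(trials[0][0])
        return [[sum(tr[c][t] for tr in trials) / len(trials)
                 for t in range(n_t)] for c in range(n_ch)]
    mean_a, mean_b = class_mean(trials_a), class_mean(trials_b)
    return [[a - b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(mean_a, mean_b)]

def project(w, trial):
    """y = sum over channels and time lags of w[c][t] * x[c][t]."""
    return sum(wv * xv for w_row, x_row in zip(w, trial)
               for wv, xv in zip(w_row, x_row))

# toy data: 2 channels x 3 samples; class A raises channel 0, class B lowers it
trials_a = [[[1.0, 1.1, 0.9], [0.0, 0.1, -0.1]],
            [[0.9, 1.0, 1.1], [0.1, -0.1, 0.0]]]
trials_b = [[[-1.0, -0.9, -1.1], [0.0, -0.1, 0.1]],
            [[-1.1, -1.0, -0.9], [-0.1, 0.1, 0.0]]]
w = fit_projection(trials_a, trials_b)
scores_a = [project(w, tr) for tr in trials_a]
scores_b = [project(w, tr) for tr in trials_b]
```

Thresholding the scalar y gives a single-trial decision; the same projection template applies whether x holds spikes, field potentials, or EEG samples, only the scales change.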

source:lectures08/SajdaNeuromorph08.pdf

Scalable neuromorphic spike-based learning systems
Gert Cauwenberghs, University of California, San Diego

The quest to build machines that think and act like humans is impeded by the massive complexity of the human brain and by our limited knowledge of how the brain functions. Despite significant advances towards naturally intelligent computing using neuromorphic engineering approaches to computer architecture, the majority of electronic neural systems existing to date exhibit primitive function and serve as a proof of concept in modeling isolated parts of the brain and the nervous system. To scale up the functionality of these systems towards brain-like computing, several researchers have adopted an event-based spiking representation to interface multi-chip neuromorphic processors, sensors, and actuators. I will present our work on scalable neural architecture with reconfigurable connectivity and dynamic synaptic plasticity, which extends to large systems approaching the computational bandwidth and efficiency of mammalian cortex, implemented using custom designed silicon microchips. I will also address challenges in configuring and training the hardware and highlight some promising approaches that employ hierarchical organization and the ubiquitous availability of human-assisted training data over the internet.

source:lectures08/CawenberghsNeuromorph08.pdf

Modeling development of cortex in 3D: from precursors to circuits
Rodney Douglas, University of Zurich/ETHZ

abstract to be provided

source:[to be uploaded to SVN]

Investigations at the Interface of Morphology, Evolution and Cognition
Josh Bongard, University of Vermont

In the first part of the talk I will introduce a software tool our group has created, MorphEngine, which allows for the simulation of physically realistic robots operating in a 3D virtual environment. I will demonstrate how to interface MorphEngine with neuromorphic hardware to realize software robots with hardware brains, rather than the usual hardware robots with software brains. In the second part of the talk I will present some experiments in which MorphEngine has been used to evolve robot morphologies and how it enabled a physical robot to model itself. I will argue that such tools can be used to investigate the ultimate mechanisms of cognition: what evolutionary pressures led to the development of particular cognitive structures and processes, and how can we simulate those processes to automatically create intelligent machines?

source:lectures08/Josh_Bongard.pdf

A new view of competition in the central nervous system
Mike Stryker , University of California, San Francisco

abstract to be provided

source:[to be uploaded to SVN]

What happens as signals propagate through the object recognition pathway?
Nicole Rust, Massachusetts Institute of Technology

Current hypotheses suggest that the pathway underlying object recognition, the ventral visual stream, represents images at early stages in terms of their local structure. This local representation is then transformed at later stages into a representation that encodes objects based on specific global configurations. Neurons at the highest stages of this pathway, anterior inferotemporal cortex (IT), are reported to respond with high specificity to complex objects such as faces and maintain their stimulus preferences over identity preserving transformations (such as shifting or rescaling the image). However, this account of IT neurons is difficult to reconcile with other accounts that suggest IT cells typically respond to a large fraction of natural images and are better described as broadly tuned. To arrive at a coherent account of “what” is happening as signals propagate through the ventral visual pathway, we recorded the responses of neurons in mid-level visual area V4 and high-level visual area IT while monkeys performed an object detection task. In contrast to traditional single-neuron approaches, we probed the ability of populations of V4 and IT neurons to represent different classes of images with natural and manipulated statistics using non-parametric, linear classifier read-out techniques. Specifically, we probed the sensitivity of each population to local and global image structure, the tolerance of each population to identity-preserving stimulus transformations, and the breadth of tuning of each population to natural images. We found evidence to support the hypothesis that the representation of images in the ventral stream is indeed transformed from “local” early on to “global” at later stages and the representation becomes more tolerant to identity-preserving image transformations as signals travel through the pathway. 
Surprisingly, we also found that distributions of the fraction of natural images that activate neurons (often referred to as sparseness) were indistinguishable between V4 and IT and that most neurons in both areas were broadly tuned. As a reconciliation of these apparently contradictory findings, we found that equivalent sparseness values were correlated with more “global” preferences as well as higher tolerances in IT as compared to V4. Taken together, these results suggest that as signals propagate through the visual system, neurons increase their selectivity for global image features and, at the same time, neurons increase their tolerance for the position and scale of those features; the rates at which these two factors increase are set such that individual images activate a constant fraction of neurons at each level of visual processing. Consistent with the observation that the structure of cortex is roughly identical regardless of where it sits in the hierarchy, we speculate that conservation of a broadly distributed coding scheme is an optimal use of resources in equipotential cortex.

source:[to be uploaded to SVN]

Responding to Rare Events in Cognitive and Engineering Systems
Misha Pavel, Oregon Graduate Institute

The detection of “novel” or “rare” stimuli and the subsequent generation of the appropriate response, e.g., classification, is a fundamental property of any intelligent system. The generation of responses to such stimuli by natural and artificial systems has been the focus of extensive research in cognitive science (psychology), neuroscience, computer science and engineering. In this presentation we review several notions of “rare” inputs and then focus on an important interpretation in terms of “incongruent” stimuli. In order to provide a more rigorous definition of “incongruent” stimuli we first define the notion of a label hierarchy and show how a partial order on labels can be deduced from such hierarchies. For each stimulus, we compute its posterior probability in two or more different ways, based on adjacent levels (according to the partial order) in the label hierarchy. A rare or incongruent stimulus is one for which the posterior probability computed at a more specific level (in accordance with the partial order) is smaller than the probability computed at a more general level. In addition to the probability discrepancy, a rational response of an effective cognitive system is determined using an estimate of the importance or utility of the consequences of the response. We show how this definition captures a number of interesting examples of rare events, including the out-of-vocabulary problem in speech recognition, and the detection of images with incongruous collections of parts.
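The incongruence criterion has a direct computational reading: compare the posterior a general-level classifier assigns with the best posterior available at the next, more specific level of the label hierarchy. The margin threshold and toy labels below are illustrative assumptions.

```python
def is_incongruent(p_general, p_specific_by_label, margin=0.5):
    """Flag a stimulus as rare/incongruent when the best posterior at the
    more specific level of the label hierarchy falls well below the
    posterior computed at the more general level."""
    best_specific = max(p_specific_by_label.values())
    return (p_general - best_specific) > margin

# a confident "vehicle" that matches no known vehicle type: incongruent
novel = is_incongruent(0.95, {"car": 0.10, "truck": 0.15})
# a confident "vehicle" that is clearly a car: congruent
familiar = is_incongruent(0.95, {"car": 0.90, "truck": 0.05})
```

The out-of-vocabulary problem in speech recognition fits the same pattern: the phone-level model scores the acoustics highly while every word-level hypothesis scores poorly, signaling a word outside the lexicon.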

source:lectures08/Misha_Pavel.pdf

What happens after signals propagate through the object recognition pathway?
Barry Richmond, National Institutes of Health

abstract to be provided

source:[to be uploaded to SVN]

Unsupervised learning in visual cortex
Terry Sejnowski, Salk Institute

abstract to be provided

source:[to be uploaded to SVN]

Temporal and Top-Down Processing in Audition (and why every bit is not sacred :-)
Malcolm Slaney, Yahoo! Research and Stanford CCRMA

I'd like to discuss pitch, speech and auditory scene analysis in my talk. Clearly top-down processing is important in our understanding of our auditory environment. I'll discuss the role of temporal processing, showing examples. Most importantly, I will show how much progress we have made toward including top-down processing. And, at least in the Internet world, I'll talk about the paradigm shift in which every bit is NOT sacred.

source:lectures08/PitchTopDownNeuromorp2008.ppt.pdf

The Von Economo Neurons in Human Neuropsychiatric Illnesses: Recent evolutionary change carries the risk of increased vulnerability to disease.
John Allman, California Institute of Technology

The Von Economo Neurons (VENs) are large bipolar neurons located in layer 5 of anterior cingulate and fronto-insular cortex. Among primates, VENs are present only in the great apes and humans and thus constitute a recent phylogenetic specialization. Functional imaging and neuropathological data from patients with fronto-temporal dementia strongly implicate these brain structures, and the VENs in particular, in error recognition and the initiation of error-correcting responses, especially in social behavior. The VENs express the gene DISC1 (disrupted in schizophrenia), which controls dendritic morphology and axon guidance. DISC1 has undergone substantial positive selection since the divergence of the hominoid line from monkeys, and the part of the gene that has diverged most influences dendritic morphology and may be responsible for the distinctive shape of the VENs. The evolved DISC1 appears to be related to enhanced cognitive functioning in humans. Abnormalities in DISC1 are implicated not only in schizophrenia, but also in depression and autism.

source:[to be uploaded to SVN]

Dimensionality Reduction and Learning
Steve Zucker, Yale

abstract to be provided

source:[to be uploaded to SVN]

Auditory Cognition: Encoding task performance, rules, and objectives in auditory and prefrontal cortex
Shihab Shamma, University of Maryland

This talk will review the role of behavior and attention in inducing plasticity in auditory cortical STRFs that reflect task performance and objectives as an animal learns to perform auditory detection and discrimination tasks. I shall also describe the dependence of responses in cortical frontal areas on behavioral task rules and stimulus meanings.

source:[to be uploaded to SVN]

Attention, please!
Ernst Niebur, Johns Hopkins University

In order to function in a complex and changing environment, organisms need to collect sensory information from a multitude of sensors. At any given time, however, not all of this information is relevant for making behavioral decisions, and it would be wasteful (and thus harmful from an evolutionary point of view) to process the irrelevant information in detail. One of the most important parts of sensory perception is therefore the intelligent triage of relevant from irrelevant information, with the goal of routing the former towards dedicated higher-level processing stages and discarding the latter. This process is commonly called selective attention. The lecture will give an overview of computational approaches to understanding how two problems are solved. The first is deciding which information to keep and which to discard. The second is finding a representation that allows the nervous system to 'mark' the attended stimuli and thus to differentiate between attended (selected) and unattended stimuli.
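
The first problem, deciding which information to keep, is often modeled as winner-take-all selection on a saliency map with inhibition of return; a minimal sketch (the map values and radius are invented for illustration):

```python
import numpy as np

def attend_sequence(saliency, n_fixations=3, ior_radius=1):
    """Select successive attended locations from a saliency map by
    winner-take-all, suppressing each winner afterwards
    (inhibition of return) so attention can move on."""
    s = saliency.astype(float)
    fixations = []
    for _ in range(n_fixations):
        y, x = np.unravel_index(np.argmax(s), s.shape)
        fixations.append((int(y), int(x)))
        y0, x0 = max(0, y - ior_radius), max(0, x - ior_radius)
        s[y0:y + ior_radius + 1, x0:x + ior_radius + 1] = -np.inf
    return fixations

# Toy saliency map with two salient locations
sal = np.zeros((5, 5))
sal[1, 1] = 3.0                      # most salient stimulus
sal[3, 4] = 2.0                      # second most salient
fix = attend_sequence(sal, n_fixations=2)
```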

source:lectures08/AttentionTalkTelluride2008.pdf

Speech Information Extraction by Humans and by Machines
Hynek Hermansky, IDIAP Research Institute, Martigny

Human decoding of the message in speech is quite effortless; machines still have significant difficulty emulating this process. Part of the reason may be that machines aim at getting as many words right as possible, while humans (and, for that matter, organisms in general) aim at the information in the signal. The information-carrying ability of a received item is inversely proportional to its probability of occurrence. Successful biological organisms are able to detect and identify unexpected information sources rather well; machine learning approaches to speech recognition emphasize the importance of frequent lexical items at the cost of the information-rich unexpected ones. We recall results of some early experiments on human recognition of words in context and in isolation. These results suggest the need for an alternative speech recognition architecture involving comparisons of predictions from parallel combinations of classification streams with varying degrees of prior constraints. This approach in turn calls for substantial improvements in the bottom-up recognition techniques that have been rather neglected in current top-down-dominated recognition, which leads us to pay attention to the way acoustic signals are processed in the mammalian hearing system. A close relation between the way the message is coded in the temporal properties of the speech signal and the temporal properties of the human cognitive system is demonstrated with well-known masking experiments. We then describe an engineering system that employs frequency-localized, rather long segments of the spectro-temporal plane of human speech; these segments are used to derive information about the underlying speech sounds. Finally, we argue for the qualitative consistency of such a system with known properties of the mammalian auditory cortex.
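
The inverse relation between information and probability of occurrence is Shannon surprisal; a two-line illustration (the word probabilities are made up):

```python
import math

def surprisal_bits(p):
    """Shannon information of an item with occurrence probability p."""
    return -math.log2(p)

# A frequent word carries little information; a rare
# out-of-vocabulary item carries much more.
common = surprisal_bits(0.05)    # frequent lexical item
rare = surprisal_bits(1e-5)      # unexpected item
```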

source:lectures08/hynek_08.pdf

Neuromorphic Audio Processing
David Anderson, GA Tech

Mel-frequency Cepstral Coefficients (MFCCs) have been used for decades in automatic speech recognition. Although the history of MFCCs begins in mathematical homomorphic signal processing, they evolved into an approximation of biologically inspired processing. Based on work performed by Sourabh Ravindran, we show that the performance of recognition and classification systems that use MFCCs can be significantly improved with a new set of features that are more closely tied to human audition. These features are very similar to MFCCs but are shown to be much more robust in the presence of noise. We also show that spatio-temporal receptive field features can be used effectively in audio classification experiments when coupled with adaptive boosting classifiers.
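
For reference, a bare-bones single-frame MFCC computation of the conventional kind the abstract starts from (not Ravindran's noise-robust variant; the filter count and parameters are typical defaults, not those used in the talk):

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc_frame(frame, sr, n_filters=26, n_ceps=13):
    """MFCCs for one windowed frame: power spectrum -> triangular
    mel filterbank energies -> log -> DCT-II."""
    n_fft = len(frame)
    spec = np.abs(np.fft.rfft(frame * np.hamming(n_fft))) ** 2
    freqs = np.fft.rfftfreq(n_fft, 1.0 / sr)
    # Triangular filters spaced evenly on the mel scale
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_filters + 2)
    hz_pts = mel_to_hz(mel_pts)
    energies = np.empty(n_filters)
    for i in range(n_filters):
        lo, ctr, hi = hz_pts[i], hz_pts[i + 1], hz_pts[i + 2]
        rising = (freqs - lo) / (ctr - lo)
        falling = (hi - freqs) / (hi - ctr)
        tri = np.clip(np.minimum(rising, falling), 0.0, None)
        energies[i] = spec @ tri
    log_e = np.log(energies + 1e-10)
    # DCT-II decorrelates the log filterbank energies
    n = np.arange(n_filters)
    basis = np.cos(np.pi * np.outer(np.arange(n_ceps), n + 0.5) / n_filters)
    return basis @ log_e

sr = 16000
t = np.arange(512) / sr
ceps = mfcc_frame(np.sin(2 * np.pi * 440 * t), sr)
```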

source:lectures08/Anderson_2008_audio.pdf

Hybrid neural networks at the neuron (cell) level
Sylvie Renaud, University of Bordeaux

Neural networks can be investigated at many levels: from the conductance to the cortex, they can also interact with artificial systems in a configuration of "hybrid neural networks". This talk presents basic principles and examples of hybrid neural networks in which bio-electronic interactions take place at the level of single cells or small networks. The outline is as follows: 1) neuro-electronic interfaces; 2) the closed-loop approach; 3) single-cell-level interactions; 4) network-level interactions; 5) multi-modal systems. Sections 3 and 4 present examples of closed-loop systems and experimental results in which hardware and software processing modules interact with in vitro recorded cells (intra-cellular or extra-cellular recordings). Section 5 presents hybrid systems combining two modalities for the sensing and control of the in vitro or in vivo bioware. The talk also addresses the issue of the (interface + bioware) model: when it exists, this model helps specify the feedback control functions of the closed loop, which are otherwise extracted from experimental results. In conclusion, we claim that hybrid systems are powerful tools for exploring the dynamics of in vitro and in vivo neural (cellular) networks, and that these tools can be adapted to a wide range of configurations, including multi-modal systems.
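
One common single-cell closed-loop interaction is the dynamic clamp, in which an artificial conductance is computed from the recorded membrane potential and the corresponding current is injected back each cycle; a minimal sketch (the units and parameters are illustrative, and this is not necessarily the configuration used in the talk):

```python
def dynamic_clamp_current(v_m, g_syn, e_syn=0.0):
    """Current an artificial synapse of conductance g_syn would
    inject at the recorded membrane potential v_m."""
    return g_syn * (e_syn - v_m)

# Toy closed loop against a simulated leaky cell (illustrative
# units: ms, mV, nF, uS). Each cycle: read Vm, compute, inject.
dt, c_m, g_leak, e_leak = 0.1, 1.0, 0.1, -65.0
v = e_leak
for _ in range(1000):
    i_inj = dynamic_clamp_current(v, g_syn=0.05)
    v += dt / c_m * (g_leak * (e_leak - v) + i_inj)
# v settles at the conductance-weighted mean of the two reversal
# potentials, between e_leak and e_syn
```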

source:lectures08/SRenaud_Telluride08.pdf

System identification of plant and feedback in human postural control
John Jeka, University of Maryland

The advantage of multisensory information is often expressed in terms of signal enhancement or resolution. Multisensory signals are more easily detected than information from a single sensory source. However, when placed into the context of perception linked to the control of body movement, multisensory fusion has the additional advantage of resolving ambiguities between movements of different body components. Human self-orientation requires multisensory fusion not for signal enhancement, but for a collective characterization of multi-linked body dynamics. From a control theory perspective, the postural control system consists of two processes: the mapping from muscle motor commands to sway (the plant) and the mapping from sway to muscle motor commands (feedback), where we consider rectified EMG signals as a proxy for muscle motor commands. The plant depends on musculotendon dynamics and body dynamics. Feedback depends on sensory dynamics, sensory integration, and the control strategy. Using a linear approximation, each of these mappings can be characterized by an open-loop frequency response function (FRF). Closed-loop system identification can be used to identify both the plant and feedback open-loop FRFs. For weakly perturbed upright stance, the plant can be approximated as having one input and two outputs. The single input is a weighted sum of EMG signals that can be thought of as a control signal specifying the coordinated activation of muscles. The two outputs are the angular deviations from vertical of the leg and trunk body segments. Since the plant has one input and two outputs, feedback has two inputs, the body segment angles, and one output, the control signal. In this study we examined how the identified plant and feedback FRFs can be used to test existing models of the postural control system and develop new models.
For example, the fact that the plant has a single input is inconsistent with multi-joint optimal control models that have independently controlled actuators at each joint. Previously, we have used the identified plant FRF to develop a model of the plant as a double-link inverted pendulum with synchronously activated ankle and hip actuators. The EMG-to-torque mapping of each actuator is modeled by a second-order low-pass filter. In addition, each joint has intrinsic stiffness and damping. Optimal control theory makes a prediction about what type of feedback is optimal to control the plant given some performance index. We compared such predictions for various performance indices to the identified feedback FRF.
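
A Welch-style estimate of an open-loop FRF, H(f) = Sxy(f) / Sxx(f), can be sketched as below (a toy open-loop illustration with an invented plant; the closed-loop identification described above additionally requires external perturbation signals):

```python
import numpy as np

def frf(x, y, nseg=8):
    """Empirical frequency response H(f) = Sxy(f) / Sxx(f),
    averaged over nseg non-overlapping Hann-windowed segments.
    Returns H and the normalized frequencies (cycles/sample)."""
    n = len(x) // nseg
    w = np.hanning(n)
    sxx = np.zeros(n // 2 + 1)
    sxy = np.zeros(n // 2 + 1, dtype=complex)
    for k in range(nseg):
        X = np.fft.rfft(w * x[k * n:(k + 1) * n])
        Y = np.fft.rfft(w * y[k * n:(k + 1) * n])
        sxx += (X * np.conj(X)).real      # input power spectrum
        sxy += Y * np.conj(X)             # input-output cross spectrum
    return sxy / sxx, np.fft.rfftfreq(n)

# Toy "plant": output is the input scaled by 2, delayed one sample,
# plus measurement noise, so |H(f)| should be ~2 at all frequencies.
rng = np.random.default_rng(1)
x = rng.standard_normal(4096)
y = 2.0 * np.roll(x, 1) + 0.1 * rng.standard_normal(4096)
H, f = frf(x, y)
```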

source:[to be uploaded to SVN]

AER Circuits, Systems, and Tools
Bernabe Linares-Barranco

AER (Address-Event Representation) technology is showing high potential for building sophisticated and powerful sensory/processing/decision/actuation neuromorphic systems capable of operating in real time on real-world sensory data. To date, AER-based systems hosting on the order of 50k neurons and 5M synapses have been developed, capable of communicating 12 Geps (events, i.e. action potentials, per second). However, with present-day technology it is quite realistic to build artificial 'cortical tissue' systems with millions of neurons, billions of synapses, and on the order of 1e14 AEs (Address Events) interchanged per second. In this talk we will show some example AER building blocks, interfaces, and systems built to date. We will also discuss circuit aspects such as calibration and miniaturizing inter-chip AER links. We will present ongoing work on a behavioral simulator for emulating large AER systems assembled from user-defined AER modules. Real data obtained from present-day AER sensors can be used as stimuli, so new cortical architectures can be tested and new AER processing modules can be foreseen and tested before building them in actual hardware.
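
For intuition, an address-event stream can be modeled in software as a time-ordered list of (address, timestamp) words (a toy model only; real AER links transmit addresses asynchronously, with timing implicit in arrival time):

```python
from dataclasses import dataclass

@dataclass
class AddressEvent:
    """One AER word: which neuron fired, and when (microseconds)."""
    address: int
    timestamp_us: int

def encode(spikes):
    """Serialize (neuron_id, t_us) spikes into a time-ordered stream."""
    return [AddressEvent(a, t) for t, a in sorted((t, a) for a, t in spikes)]

def decode(stream, address):
    """Recover one neuron's spike times from the event stream."""
    return [ev.timestamp_us for ev in stream if ev.address == address]

# Three spikes from two neurons, merged onto one shared channel
spikes = [(7, 120), (3, 40), (7, 95)]
stream = encode(spikes)
times7 = decode(stream, 7)
```

Time-multiplexing many neurons onto one digital channel this way is what lets a single link carry an entire population's activity.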

source:lectures08/Bernabe-aer-CST.pdf

Moving embodied and situated cognition upwards using Dynamic Field Theory
Gregor Schoener, Ruhr-Universität Bochum

A seemingly simple motor act like playing soccer actually contains a lot of cognition: objects must be recognized, targets selected, working memory built and updated to guide visual exploration of the scene, and action decisions made that are goal-oriented (quite literally). Conversely, cognitive tasks like planning the repair of a toaster become much simpler when acted out in reality, and then involve a lot of sensory-motor tasks: visually exploring the toaster by moving it before one's eyes, keeping track of locations on the toaster as it changes pose, knowing how much force to use to open the screws, manipulating the compliant spring that may have become loose. The embodiment/situatedness stance postulates that understanding cognition requires understanding how cognitive processes are linked, or linkable, to the sensory and motor surfaces, how cognition takes place in real-time, structured environments, and how behavioral history provides the context in which cognition takes place. The embodiment stance also entails seeking accounts of cognition that are based on fundamental neuronal principles. The talk sketches such principles, arguing for spatio-temporally continuous neuronal dynamics (Dynamic Field Theory, or DFT) as an interface between the neurophysics of individual, spiking neurons and the level of cognition and behavior. The basic mathematics of DFT is presented by reviewing four classes of dynamical systems in which instabilities generate different classes of solutions and different modes of operation, which together support elementary forms of cognition such as detection and selection decisions, working memory, change detection, and category learning and classification. The principles of DFT are illustrated by reviewing psychophysical experiments in which signatures of these neuronal dynamic mechanisms are observed. I will also sketch how simple "cognitive robots" can be constructed using these concepts.
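
A minimal numerical sketch of such a dynamic field (an Amari-style activation field with local excitation and global inhibition; the kernel, sigmoid gain, and resting level are illustrative assumptions, not the talk's parameters):

```python
import numpy as np

def simulate_field(s, steps=600, dt=0.05, tau=1.0, h=-2.0):
    """Euler-integrate the field dynamics
    tau * du_i/dt = -u_i + h + s_i + sum_j w(i - j) * f(u_j),
    with a Gaussian excitatory kernel atop uniform inhibition."""
    n = len(s)
    pos = np.arange(n)
    d2 = (pos[:, None] - pos[None, :]) ** 2
    w = 2.0 * np.exp(-d2 / (2 * 3.0 ** 2)) - 0.9   # local exc., global inh.
    u = np.full(n, h)                              # start at resting level
    for _ in range(steps):
        f = 1.0 / (1.0 + np.exp(-4.0 * u))         # sigmoid rate function
        u = u + (dt / tau) * (-u + h + s + w @ f)
    return u

# Two localized inputs of different strength: the field forms
# self-stabilized peaks, the stronger site carries the highest
# activation, and unstimulated sites are suppressed below rest.
s = np.zeros(60)
s[15] = 3.2
s[45] = 2.8
u = simulate_field(s)
```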

source:lectures08/Schoener.pdf

Spiking Neurons and Noise
Jonathan Tapson, University of Cape Town

abstract to be provided

source:lectures08/Spiking_Neurons_and_Noise.pdf