Josh McDermott, PhD, Massachusetts Institute of Technology, will give a talk on "Understanding Audition Via Sound Analysis and Synthesis."
Humans infer many important things about the world from the sound pressure waveforms that enter the ears. In doing so, we solve a number of difficult and intriguing computational problems: we recognize sound sources despite large variability in the waveforms they produce, we extract behaviorally relevant attributes that are not explicit in the input to the ear, and we do so even when sound sources are embedded in dense mixtures with other sounds. This talk will describe my recent work investigating how we accomplish these feats. The work stems from two premises: first, that understanding perception requires understanding real-world sensory stimuli and their representation in the brain, and second, that a theory of the perception of some property should enable the synthesis of signals that appear to have that property. Sound synthesis can thus be used to probe phenomena inaccessible to conventional experimental methods. I will discuss two related strands of research along these lines. The first uses the perception of sound textures (such as those produced by rain, swarms of insects, or galloping horses) as a window into the auditory system: we synthesize textures from statistics of biological sound representations to test the perceptual relevance of different acoustic measurements. The second uses naturalistic synthetic sounds to reveal new aspects of sound segregation. Together, these strands indicate that simple statistical properties of auditory representations capture a surprising number of important perceptual phenomena.
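To make the synthesis-from-statistics idea concrete, the toy sketch below illustrates the general recipe in heavily simplified form: measure statistics of a target sound in a bank of frequency bands (here just the mean and standard deviation of each subband envelope), then iteratively shape a noise signal until its statistics match. This is an assumption-laden illustration, not the actual model from the talk, which uses cochlear and modulation filterbanks and a much richer statistic set (marginal moments, correlations); all function names and parameters here are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def bandpass(x, lo, hi, fs):
    # Crude FFT-domain brick-wall bandpass (stand-in for a cochlear filter).
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1 / fs)
    X[(f < lo) | (f >= hi)] = 0
    return np.fft.irfft(X, n=len(x))

def envelope(x):
    # Analytic-signal amplitude envelope via the FFT-based Hilbert transform.
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1
    h[1:N // 2] = 2
    h[N // 2] = 1
    return np.abs(np.fft.ifft(X * h))

def band_stats(x, bands, fs):
    # Measure (mean, std) of each subband envelope -- our toy "statistics".
    stats = []
    for lo, hi in bands:
        env = envelope(bandpass(x, lo, hi, fs))
        stats.append((env.mean(), env.std()))
    return stats

def synthesize(target_stats, bands, fs, n, iters=30):
    # Start from white noise and repeatedly impose the target envelope
    # statistics on each band, keeping the band's fine structure.
    y = rng.standard_normal(n)
    for _ in range(iters):
        out = np.zeros(n)
        for (lo, hi), (m_t, s_t) in zip(bands, target_stats):
            sub = bandpass(y, lo, hi, fs)
            env = envelope(sub)
            new_env = (env - env.mean()) / (env.std() + 1e-12) * s_t + m_t
            new_env = np.maximum(new_env, 0)  # envelopes are nonnegative
            out += sub / (env + 1e-12) * new_env
        y = out
    return y

fs, n = 8000, 4096
bands = [(100, 1000), (1000, 3000)]
# Toy "texture": amplitude-modulated white noise.
t = np.arange(n) / fs
target = rng.standard_normal(n) * (1 + 0.8 * np.sin(2 * np.pi * 4 * t))
stats_t = band_stats(target, bands, fs)
synth = synthesize(stats_t, bands, fs, n)
stats_s = band_stats(synth, bands, fs)
```

If listeners judge sounds synthesized this way to resemble the original texture, the measured statistics plausibly capture what the auditory system encodes; if not, the statistic set is incomplete. That logic, applied with far richer statistics, is the experimental lever described in the abstract.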