3D Vision

How do we perceive the three-dimensional (3D) structure of the world when our eyes only sense 2D projections, like a movie on a screen? Estimating the 3D structure of a scene from a pair of 2D images (like those formed on our retinae) is mathematically an ill-posed inverse problem: it is plagued by ambiguities and noise, and it involves highly nonlinear constraints imposed by multi-view geometry. Given these complexities, it is quite impressive that the visual system constructs 3D representations that are accurate enough for us to successfully interact with our surroundings. A major area of research in the lab is devoted to understanding how the brain achieves accurate and reliable 3D representations of the world. A critical aspect of 3D vision is the encoding of 3D object orientation (e.g., the slant and tilt of a planar surface). By adapting mathematical tools used to analyze geomagnetic data (Bingham functions), we developed the first methods for quantifying the selectivity of visual neurons for 3D object orientation. Our work on this topic employs a synergistic, multifaceted approach combining computational modeling, neurophysiological studies, and human psychophysical experiments.
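To make the idea concrete, here is a minimal sketch in Python of how a Bingham-style function can serve as a tuning model over 3D surface orientation (slant and tilt). It is an illustration under simple assumptions, not the lab's actual analysis code; the function names and parameter values are hypothetical.

import numpy as np

def surface_normal(slant, tilt):
    # Unit normal of a planar surface with the given slant and tilt (radians).
    return np.array([np.sin(slant) * np.cos(tilt),
                     np.sin(slant) * np.sin(tilt),
                     np.cos(slant)])

def bingham_tuning(slant, tilt, M, z, gain=1.0, baseline=0.0):
    # Predicted response: baseline + gain * exp(n' (M diag(z) M') n), the
    # Bingham form; it is antipodally symmetric, so n and -n give equal values.
    # M: 3x3 orthogonal matrix whose columns set the preferred orientation axes.
    # z: non-positive concentration parameters controlling tuning width.
    n = surface_normal(slant, tilt)
    A = M @ np.diag(z) @ M.T
    return baseline + gain * np.exp(n @ A @ n)

# Example: a hypothetical unit whose preferred surface normal is the line of
# sight (a frontoparallel plane), evaluated over a grid of slants and tilts.
M = np.eye(3)
z = np.array([-8.0, -2.0, 0.0])
slants = np.deg2rad(np.arange(0, 61, 15))
tilts = np.deg2rad(np.arange(0, 360, 45))
rates = np.array([[bingham_tuning(s, t, M, z, gain=30.0, baseline=5.0)
                   for t in tilts] for s in slants])

Fitting a function of this form to measured firing rates yields interpretable parameters, such as a preferred 3D orientation and tuning widths about it.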
Multisensory Integration
Our visual system first encodes the environment in egocentric coordinates defined by our eyes. Such representations are inherently unstable in that they shift and rotate as we move our eyes or head. However, visual perception of the world is largely unaffected by such movements, a phenomenon known as spatial constancy. Perception is instead anchored to gravity, which is why buildings are seen as vertically oriented even if you tilt your head to the side. This stability of visual perception is a consequence of multisensory processing in which the brain uses gravitational signals detected by the vestibular and proprioceptive systems to re-express egocentrically encoded visual signals in gravity-centered coordinates. Vestibular deficits can thus compromise visual stability, and the absence of gravity in space can cause astronauts to experience disorienting jumps in the perceived visual orientation of their surroundings. A second area of research in the lab investigates where and how the brain combines visual information with vestibular and proprioceptive signals in order to achieve a stable, gravity-centered representation of the world. Our work on this topic relies on a combination of computational modeling and neurophysiological studies.
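The coordinate transformation at the heart of this idea can be sketched in a few lines of Python. The example below is a simplified illustration rather than a model of any specific neural circuit: it re-expresses an orientation encoded in head-centered coordinates in a gravity-centered frame constructed from the gravity direction sensed by the vestibular system. All names and values are placeholders.

import numpy as np

def head_to_gravity_rotation(gravity_in_head):
    # Rotation matrix whose rows are the axes of a gravity-centered frame
    # (with "down" aligned to the sensed gravity vector), expressed in
    # head-centered coordinates.
    down = gravity_in_head / np.linalg.norm(gravity_in_head)
    ref = np.array([1.0, 0.0, 0.0])
    if abs(down @ ref) > 0.9:               # avoid a degenerate cross product
        ref = np.array([0.0, 1.0, 0.0])
    east = np.cross(down, ref)
    east /= np.linalg.norm(east)
    north = np.cross(east, down)
    return np.vstack([north, east, down])

# Example: the head is rolled 30 degrees, so gravity is sensed as tilted in
# head-centered coordinates. A physically vertical edge is tilted by the same
# amount relative to the head, but mapping it into the gravity-centered frame
# recovers its alignment with the gravitational vertical.
roll = np.deg2rad(30.0)
gravity_in_head = np.array([0.0, np.sin(roll), np.cos(roll)])
edge_in_head = np.array([0.0, np.sin(roll), np.cos(roll)])
R = head_to_gravity_rotation(gravity_in_head)
edge_in_gravity = R @ edge_in_head          # ~[0, 0, 1]: along the gravity axis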
Neuro-Computational Underpinnings of Autism

Autism spectrum disorder (ASD) manifests heterogeneously across individuals. Differences in the expression of ASD can be qualitative (reflecting the presence or absence of certain characteristics, such as a language impairment) or quantitative (reflecting the degree of symptom severity, such as the level of verbal ability). Using a multifaceted approach that combines studies with adolescents with ASD, neuroimaging, and computational modeling, we are testing the hypothesis that ASD heterogeneity arises as a consequence of altered divisive normalization occurring in different brain areas and to varying degrees.
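For readers unfamiliar with divisive normalization, the Python sketch below illustrates the basic computation: each unit's response is its excitatory drive divided by a constant plus a weighted sum of the activity of a normalization pool. It is a generic illustration with placeholder parameters, not the lab's model; "altered normalization" can be explored by changing the pool weights or the semi-saturation constant.

import numpy as np

def divisive_normalization(drive, sigma=1.0, n=2.0, gain=1.0, weights=None):
    # Each unit's excitatory drive, raised to the power n, is divided by
    # sigma^n plus a weighted sum of the drives of its normalization pool.
    drive = np.asarray(drive, dtype=float)
    if weights is None:
        weights = np.ones((drive.size, drive.size))    # uniform pool
    pool = weights @ drive ** n
    return gain * drive ** n / (sigma ** n + pool)

# Example: the same pattern of excitatory drive under typical versus
# down-weighted normalization. Weakening the pool yields larger, less
# suppressed responses; alterations of this kind, occurring in different
# brain areas and to varying degrees, are one way normalization could
# produce heterogeneous outcomes.
drive = np.array([2.0, 4.0, 8.0])
typical = divisive_normalization(drive)
weakened = divisive_normalization(drive, weights=0.2 * np.ones((3, 3)))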