
Our research

Below, you can find descriptions of some of the research lines we are currently pursuing with great excitement!

Our mission

With our research, we want to understand how our brains create our rich and fascinating visual experiences.


Explaining individual differences in natural vision

Everyone sees the world with their own eyes - and their own brains. One reason perception differs across people is that we are all exposed to different environments, and thus develop personalized prior expectations about what specific environments should look like. In our research, we probe people's expectations with creative reporting tasks (like drawing) and then use this information to test perception and neural processing in artificially generated environments.

This research is funded by the ERC Starting Grant PEP ("Personalized priors: how individual differences in internal models explain idiosyncrasies in natural perception").


Predictive feedback in visual processing

The world around us is complex and characterized by constant change. To meet the challenge of efficient real-world vision, our brain dynamically monitors and directs sensory processing: predictions about future states of the world guide how information is integrated across space and time, and how missing information is swiftly interpolated. All these functions require feedback projections in cortex, from higher to lower levels of the visual hierarchy. We study such feedback flows during naturalistic tasks, and how rhythmic brain processes play a crucial role in simultaneously and efficiently transmitting feedforward and feedback information.

This research is funded by the DFG in the context of the SFB/TRR135 "Cardinal Mechanisms of Perception".


Real-world statistics and efficient natural vision

Our natural environments are not random collections of things. Rather, they are characterized by predictable distributions of information across space: for instance, the objects in your living room and the objects in my living room will be placed similarly, based on their physical properties, their functions, and their relations to each other. In this line of research, we ask how the human brain has efficiently adapted to such recurring regularities in the world, and how the resulting visual system is optimally equipped for processing complex but meaningfully structured environments.

This research was funded by the DFG from 2018 to 2021 in the project "Objects in Scenes".


Perceiving beauty in the world around us

Whether it's a scenic sunset, breathtaking architecture, or an attractive face - our lives are full of moments in which we perceive beauty in our surroundings. How is this feeling of beauty generated by our brains? And how does it arise from the computations performed in our visual system? In this line of research, we use neuroimaging techniques and computational models to predict which visual inputs humans find attractive and how such judgments vary with individuals' personal brain architecture.


This research is funded by the DFG in the project "Resolving the neural dynamics underlying aesthetic visual experiences".


Categorization in visual and frontal brain systems

In daily life, our visual system needs to distill meaning from highly complex sensory inputs that can take a practically infinite number of forms. To do so, vision needs to categorize inputs: for instance, a subspace of the many possible inputs corresponds to the category of shoes. In this project, we test to what extent crosstalk between the visual cortex and frontal brain systems is needed for effective categorization. To answer this question, we use a combination of neuroimaging methods and TMS neurostimulation, which allows us to transiently interfere with neural activity in visual or frontal brain systems.

This research is funded by the DFG in the project "Probing prefrontal influences on the emergence of visual category representations".
