Welcome to the page of Dr. Tim Christian Kietzmann. I am an Assistant Professor at the AI department of the Donders Institute for Brain, Cognition and Behaviour (Radboud University), and a Senior Research Associate at the MRC Cognition and Brain Sciences Unit (University of Cambridge). I investigate principles of neural information processing using tools from machine learning and deep learning, applied to neuroimaging data recorded at high temporal (EEG/MEG) and spatial (fMRI) resolution. Feel free to contact me with any questions or paper requests, and follow me on Twitter (@TimKietzmann) for the latest updates.

Research Interests

Cognitive Neuroscience meets Machine Learning. Our research group aims to understand the computational processes by which the brain and artificial agents can efficiently and robustly derive meaning from the world around us. We ask how the brain acquires versatile representations from the statistical regularities in the input, how sensory information is dynamically transformed in the cortical network, and which information is extracted by the brain to support higher-level cognition. To find answers to these questions, we develop and employ machine learning techniques to discover and model structure in high-dimensional neural data.

As a target modality, we focus on vision, our dominant sense both neurally and perceptually. To gain insight into the intricate system that enables us to see, the group advances along two interconnected lines of research: machine learning for discovery in neuroscience, and deep neural network modelling. This interdisciplinary work combines machine learning, computational neuroscience, computer vision, and semantics. Our work is therefore at the heart of the emerging fields of neuro-inspired machine learning and cognitive computational neuroscience.

