
Home

Welcome to the page of Dr. Tim Christian Kietzmann. I am an Assistant Professor at the AI department of the Donders Institute for Brain, Cognition and Behaviour (Radboud University), and a Senior Research Associate at the MRC Cognition and Brain Sciences Unit (University of Cambridge). I investigate principles of neural information processing using tools from machine learning and deep learning, applied to neuroimaging data recorded at high temporal (EEG/MEG) and spatial (fMRI) resolution. Feel free to contact me with any questions or paper requests, and follow me on Twitter (@TimKietzmann) for the latest updates.




Research Interests

Cognitive Neuroscience meets Machine Learning. Our research group aims to understand the computational processes by which the brain and artificial agents can efficiently and robustly derive meaning from the world around us. We ask how the brain acquires versatile representations from the statistical regularities in the input, how sensory information is dynamically transformed in the cortical network, and which information is extracted by the brain to support higher-level cognition. To find answers to these questions, we develop and employ machine learning techniques to discover and model structure in high-dimensional neural data.

As a target modality, we focus on vision, the most dominant of our senses both neurally and perceptually. To gain insight into the intricate system that enables us to see, the group advances along two interconnected lines of research: machine learning for discovery in neuroscience, and deep neural network modelling. This interdisciplinary work combines machine learning, computational neuroscience, computer vision, and semantics. Our work is therefore at the heart of the emerging fields of neuro-inspired machine learning and cognitive computational neuroscience.

Newsfeed
Twitter Feed

Building on the success in computational neuroscience of modeling individual datasets with individual models, we think our field should take the next step: integrating many experimental results that together push models to explain entire domains of intelligence https://www.cell.com/neuron/fulltext/S0896-6273(20)30605-X

Join us for the Neuromatch conference (Oct 26-30). We accept submissions from all areas of neuroscience, and every submission will be accepted as a talk. Our algorithms will facilitate great schedules. We are working hard to be the most inclusive conference ever. http://neuromatch.io

Where and what we look at says a lot about the world, but it says a lot about us too 👀🧠
Nice!

Individual differences in visual salience vary along semantic dimensions | PNAS @PNASNews https://www.pnas.org/content/116/24/11687

Infants are *masters* of unsupervised learning. In goes a blooming, buzzing confusion and out comes understanding. How can that possibly be? Excited to see this review of 5 lessons for ML from developmental science with @LorijnSZ and @jeublanc. 187 refs boiled down to 5 TL;DRs https://twitter.com/arxiv_cscv/status/1307947738265346048

arXiv CS-CV (@arxiv_cscv):

The Next Big Thing(s) in Unsupervised Machine Learning: Five Lessons from Infant Learning http://arxiv.org/abs/2009.08497

🥳 Our paper is out today in Neuron!

New & improved. Details include how to record & stimulate deep 🧠 activity in humans during freely moving behaviors (🚶🏻‍♀️🕺🏻🚴🏾‍♂️). More studies with this to come out soon!

Link https://www.cell.com/neuron/fulltext/S0896-6273(20)30652-8

New video from the paper in @NeuroCellPress
