Welcome to the page of Dr. Tim Christian Kietzmann. I am an Assistant Professor and Associate PI at the AI department of the Donders Institute for Brain, Cognition and Behaviour (Radboud University), as well as a Senior Research Associate at the MRC Cognition and Brain Sciences Unit (University of Cambridge). I investigate principles of neural information processing using tools from machine learning and deep learning, applied to neuroimaging data recorded at high temporal (EEG/MEG) and spatial (fMRI) resolution. Feel free to contact me with any questions or paper requests, and follow me on Twitter (@TimKietzmann) for the latest updates.

Research Interests

Cognitive Neuroscience meets Machine Learning. Our research group aims to understand the computational processes by which the brain and artificial agents can efficiently and robustly derive meaning from the world around us. We ask how the brain acquires versatile representations from the statistical regularities in the input, how sensory information is dynamically transformed in the cortical network, and which information is extracted by the brain to support higher-level cognition. To find answers to these questions, we develop and employ machine learning techniques to discover and model structure in high-dimensional neural data.

As a target modality, we focus on vision, the most dominant of our senses both neurally and perceptually. To gain insight into the intricate system that enables us to see, the group advances along two interconnected lines of research: machine learning for discovery in neuroscience, and deep neural network modelling. This interdisciplinary work combines machine learning, computational neuroscience, computer vision, and semantics. Our work is therefore at the heart of the emerging fields of neuro-inspired machine learning and cognitive computational neuroscience.

Newsfeed
Twitter Feed

A new benchmark for human-level concept learning and reasoning. Humans beat #AI hands down, revealing gaps in current #DeepLearning meta- and few-shot learning.

@NeurIPSConf @NVIDIAAI @wn8_nie @yukez @ZhidingYu @abp4_ankit

Blog: https://developer.nvidia.com/blog/building-a-benchmark-for-human-level-concept-learning-and-reasoning/

Paper: https://papers.nips.cc/paper/2020/file/bf15e9bbff22c7719020f9df4badc20a-Paper.pdf

Interesting paper on learning (sparse) visual features without explicit categorical feedback, by @vkakerbeck, @konigpeter, Kühnberger, and @PipaGordon

https://www.sciencedirect.com/science/article/pii/S0893608020303890

We're recruiting! Please RT!!!

https://mcgill.wd3.myworkdayjobs.com/McGill_Careers/job/Montreal-Neuro/Post-Doctoral-Researcher_JR0000006076

Specifically, myself, @NeuralEnsemble, and Doina Precup are looking to hire a postdoc for the HIBALL project: https://bigbrainproject.org/hiball.html

RigL is a new algorithm for training sparse neural networks. Instead of pruning a pre-existing dense network, it dynamically builds one during training without sacrificing accuracy relative to traditional approaches. Learn how it’s done at https://goo.gle/2ZJryiU
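For readers curious about the mechanics, here is a minimal NumPy sketch of the drop-and-grow step at the core of dynamic sparse training in the spirit of RigL. It is purely illustrative and not the authors' implementation; the function name rigl_update and its parameters are my own, and the linked post and paper describe the actual method.

import numpy as np

def rigl_update(weights, grads, mask, drop_frac=0.3):
    # mask is a 0/1 array of the same shape as weights, marking active connections.
    flat_w = weights.ravel()
    flat_g = grads.ravel()
    flat_m = mask.ravel().astype(bool)
    active = np.flatnonzero(flat_m)
    inactive = np.flatnonzero(~flat_m)
    n_update = min(int(drop_frac * active.size), inactive.size)
    if n_update == 0:
        return weights, mask
    # Drop: deactivate the active weights with the smallest magnitude.
    drop_idx = active[np.argsort(np.abs(flat_w[active]))[:n_update]]
    # Grow: activate the inactive positions with the largest dense-gradient magnitude,
    # keeping the total number of active connections (the sparsity level) fixed.
    grow_idx = inactive[np.argsort(-np.abs(flat_g[inactive]))[:n_update]]
    new_m = flat_m.copy()
    new_m[drop_idx] = False
    new_m[grow_idx] = True
    new_w = flat_w.copy()
    new_w[drop_idx] = 0.0
    new_w[grow_idx] = 0.0  # newly grown connections start at zero, as in the paper
    return new_w.reshape(weights.shape), new_m.reshape(mask.shape).astype(mask.dtype)

In the full algorithm this update is applied only every few hundred training steps with a decaying drop fraction, while regular gradient descent updates just the active weights in between.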

New Jupyter notebook and paper in PNAS: "Controversial stimuli: Pitting neural networks against each other as models of human cognition". https://doi.org/10.1073/pnas.1912334117 Tweet thread... 1/9
