Normative Modelling in Psychiatry

It is challenging to train deep learning classifiers successfully when only small datasets are available or when the input data has a large number of features. Both problems apply to neuroimaging data: it is rare to find datasets with more than a couple of thousand examples, and this number is even lower for datasets of specific brain disorders. An alternative approach that bypasses this issue is to build normative models of the brain and treat these disorders as out-of-distribution examples, in a process of anomaly detection. One effective way to build normative models is with autoencoders, which learn to map inputs back to themselves while compressing the data through a bottleneck layer. The model thereby reduces the large number of input features to a more manageable representation that can be studied as a common distribution.
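The anomaly-detection idea above can be sketched in a few lines. This is a minimal illustration, not any of the models from the projects below: a tiny linear autoencoder is fitted to synthetic "normative" data, and the reconstruction error then serves as an out-of-distribution score. All names and dimensions are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "normative" data: 200 samples that live near a 2-D subspace of a
# 10-D feature space (stand-in for high-dimensional neuroimaging features).
basis = rng.normal(size=(2, 10))
X = rng.normal(size=(200, 2)) @ basis + 0.05 * rng.normal(size=(200, 10))

# Tiny linear autoencoder, 10 -> 2 -> 10, trained with plain gradient descent.
W_enc = 0.1 * rng.normal(size=(10, 2))
W_dec = 0.1 * rng.normal(size=(2, 10))
lr = 0.01
for _ in range(2000):
    Z = X @ W_enc                      # encode into the bottleneck
    X_hat = Z @ W_dec                  # decode back to input space
    err = X_hat - X
    W_dec -= lr * (Z.T @ err) / len(X)
    W_enc -= lr * (X.T @ (err @ W_dec.T)) / len(X)

def anomaly_score(x):
    """Reconstruction error: large for inputs far from the learned 'normal' manifold."""
    x_hat = (x @ W_enc) @ W_dec
    return float(np.sum((x_hat - x) ** 2))

normal_sample = X[0]
outlier = 5.0 * rng.normal(size=10)    # a point well off the normative subspace
print(anomaly_score(normal_sample), anomaly_score(outlier))
```

In a normative-modelling setting, the score of a new scan relative to the scores of the healthy training population is what flags it as a potential anomaly.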

In the first project I contributed to, we developed an adversarial autoencoder to distinguish mild cognitive impairment from Alzheimer's disease, and compared the results to classical predictive models trained on the original data.


In a second project, we are using state-of-the-art VQ-VAEs to compress brain data into a quantized latent space and transformers to measure the likelihood of the resulting distribution of the data. This approach is being used to identify psychiatric disorders. The work is still in development.
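The quantization step at the heart of a VQ-VAE can be sketched as follows. This is a toy illustration under assumed shapes (a hypothetical codebook of 8 entries of dimension 4), not the actual model: each continuous encoder output is replaced by the index of its nearest codebook vector, turning the latent space into a sequence of discrete tokens whose likelihood a transformer can then model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical learned codebook: K=8 embeddings, each of dimension D=4.
codebook = rng.normal(size=(8, 4))

def quantize(z):
    """Return the index of the codebook entry closest to encoder output z."""
    distances = np.sum((codebook - z) ** 2, axis=1)  # squared distance to every code
    return int(np.argmin(distances))

# A batch of (fake) encoder outputs becomes a sequence of discrete tokens;
# a transformer trained on such sequences from healthy brains can assign
# low likelihood to sequences from out-of-distribution scans.
latents = rng.normal(size=(5, 4))
tokens = [quantize(z) for z in latents]
print(tokens)
```

Each token is an integer in `[0, 8)`; the discrete sequence is what makes autoregressive likelihood estimation with a transformer possible.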

Pedro F da Costa
PhD Researcher

Pedro is interested in applying Machine Learning to real-life problems. He uses generative models and classical machine learning algorithms to build better tools for neuroscience research.
