Background:
Segmentation of blood vessels and neurons from two-photon microscopy (2PM) angiograms of the brain has important applications in connectivity analysis and disease diagnosis. These analyses will help improve our understanding of network alterations at the whole-brain level in diseases such as dementia and stroke. In recent years, deep convolutional neural networks (CNNs) have demonstrated great success in semantic segmentation problems across fields including computer vision and medical imaging, achieving significantly better performance than classical machine learning algorithms. However, a major limitation of deep CNNs is their requirement for large datasets with expert annotations, which demand substantial resources and are time-consuming and difficult to obtain. In vessel/neuron segmentation of microscopy images, for example, generating ground-truth labels requires an expert to manually segment each image, which is very time-consuming. Numerous techniques, commonly incorporating pre-/post-processing steps, have been employed to address the large-dataset requirement of CNNs.
Project description:
The student will be involved in a project to implement and evaluate a novel method that relies on weakly labeled data (weak supervision) to overcome the need for large datasets, applied to vessel/neuron segmentation in microscopy images. Our current 3D implementation of the network is based on a Bayesian U-Net++ architecture with novel modifications to produce both segmentations and uncertainty maps. To train and evaluate this method, 11 annotated volumes of 512x512x96 voxels are available; the data will be split into training/validation/test sets. The student will be involved in data preprocessing, deep learning model training, helping implement the novel post-processing technique, and evaluating model performance.
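As a flavour of the data-handling work involved, a reproducible train/validation/test split of the 11 annotated volumes might look like the sketch below. The volume identifiers, split fractions, and `split_dataset` helper are illustrative assumptions, not the project's actual pipeline:

```python
import random

# Hypothetical identifiers for the 11 annotated 512x512x96 volumes.
volume_ids = [f"vol_{i:02d}" for i in range(11)]

def split_dataset(ids, train_frac=0.6, val_frac=0.2, seed=42):
    """Shuffle volume IDs with a fixed seed and split into
    train/val/test subsets (remaining fraction goes to test)."""
    rng = random.Random(seed)
    shuffled = ids[:]
    rng.shuffle(shuffled)
    n_train = round(len(shuffled) * train_frac)
    n_val = round(len(shuffled) * val_frac)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

train, val, test = split_dataset(volume_ids)
```

Splitting at the volume level (rather than the patch level) avoids leaking voxels from the same volume into both training and test sets.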
The primary objectives will be:
● Training Bayesian weakly-supervised CNNs to generate initial segmentation and uncertainty maps
● Implementing the uncertainty-guided post-processing and training to improve segmentation results
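To illustrate the idea behind these objectives: one common way a Bayesian CNN produces uncertainty maps is Monte Carlo dropout, where dropout stays active at inference and the variance across several stochastic forward passes serves as a per-voxel uncertainty estimate. The toy sketch below demonstrates the principle with NumPy; `toy_forward` is a hypothetical stand-in for the network, not the lab's actual Bayesian U-Net++:

```python
import numpy as np

def mc_dropout_passes(forward_fn, x, n_passes=20, seed=0):
    """Run several stochastic forward passes; return the per-voxel mean
    probability (segmentation map) and variance (uncertainty map)."""
    rng = np.random.default_rng(seed)
    probs = np.stack([forward_fn(x, rng) for _ in range(n_passes)])
    return probs.mean(axis=0), probs.var(axis=0)

def toy_forward(x, rng, drop_rate=0.5):
    """Illustrative stand-in for a network forward pass: dropout on the
    input followed by a sigmoid, so each call gives a different output."""
    keep = rng.random(x.shape) >= drop_rate
    dropped = np.where(keep, x / (1.0 - drop_rate), 0.0)
    return 1.0 / (1.0 + np.exp(-dropped))

x = np.random.default_rng(1).normal(size=(8, 8, 4))  # tiny 3D "volume"
seg_map, uncertainty_map = mc_dropout_passes(toy_forward, x)

# Voxels with high variance could then be flagged for
# uncertainty-guided post-processing, e.g. by thresholding:
uncertain = uncertainty_map > np.percentile(uncertainty_map, 90)
```

In the actual project the forward passes would come from the trained 3D network, and the post-processing step would act on the flagged high-uncertainty regions.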
The student will be supervised by Dr. Maged Goubran (https://medbio.utoronto.ca/faculty/goubran). Our lab, located at the Sunnybrook Research Institute, develops novel computational, machine learning & imaging tools to probe, predict and understand neuronal and vascular circuit alterations in neurological disorders, including Alzheimer’s disease, stroke and traumatic brain injury. It consists of a multidisciplinary team of engineers, computer scientists, neuroscientists and software developers who develop and implement tools to quantify imaging markers of brain disease. As such, students will gain not only computational research experience but also diverse exposure to clinical and translational research, and will be part of an inclusive and stimulating lab environment.
Opportunity/experience that the student will gain:
● hands-on experience with Artificial Intelligence and Deep Learning
● hands-on experience with implementation and evaluation of state-of-the-art (SOTA) methods in deep learning
● opportunity to write a conference abstract and involvement in resulting papers
● learning cutting-edge data-driven techniques in image processing
● exposure to a 3D multi-label microscopy dataset
● journal club discussions and scientific paper reading
● engagement with other students and projects in the lab
● access to computational resources needed to accomplish the aims
Eligibility:
● Programming experience (e.g. Python, Matlab)
● Knowledge of computer vision and deep learning
● Experience with a deep learning framework such as TensorFlow, PyTorch or Keras is a plus.
● Ability to work 8-10 hours/week during the January-April term, with potential for extension.
● CSC494H1/95 project or volunteer/research experience
Contact information:
If you are interested in being part of this exciting research project or if you have additional questions, please contact our image processing software developer Parisa Mojiri: parisa.mojiri@sri.utoronto.ca.