CT Perfusion Imaging Synthesis using Deep Generative Adversarial Networks
Background: Stroke is the third leading cause of death in Canada, and a typical patient is estimated to lose 1.9 million neurons for every minute an acute stroke goes untreated. CT perfusion (CTP) imaging, which produces maps of blood perfusion parameters across the brain, is currently the mainstay imaging modality for acute stroke treatment selection and decision making. However, CTP has several drawbacks: (1) treatment delay due to scanning and postprocessing time [1]; (2) the need for processing by trained personnel; (3) vulnerability to corruption from patient motion or scan mistiming [2]; and (4) increased radiation exposure [2].
Multiphase CT angiography (mCTA) is a recently proposed technique for visualizing blood vessels and occlusions by injecting a contrast agent before imaging [3]. Unlike single-phase CTA, mCTA consists of multiple passes over the head timed to different phases of contrast circulation, creating a coarse time-resolved picture of the circulation in the brain. Circulation patterns on mCTA are associated with, and can be used to predict, perfusion patterns on CTP maps [3].
Recently, generative adversarial networks (GANs) have been applied to medical image synthesis tasks such as cross-modality transfer (e.g. T1-to-T2 MRI) [4]. Unlike traditional GAN applications, where the input to the network is a randomly sampled latent vector, GANs in medical imaging are often conditional: they take a structured input (e.g. an image) and generate a corresponding output. Additionally, in contrast to many style-transfer problems using GANs, where images from the two domains are unpaired (e.g. rendering a Picasso painting in the style of Monet), image-translation problems in medical imaging are often paired, in the sense that each input has a ground-truth target image to be generated. In our case, the model would be conditioned on mCTA images, with paired CTP parameter maps from the same patient as the target.
We will use a dataset of ~6000 patient CT scans retrospectively collected from the Sunnybrook stroke clinic over the last 10 years to train these models. If successful, these models have the potential to substantially reduce time-to-treatment and improve patient outcome prediction in the clinical acute stroke setting.
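To make the "conditional, paired" setup concrete, here is a minimal sketch in PyTorch of how such a model could be wired: the generator maps an mCTA input to a CTP-like map, and the discriminator scores (input, output) pairs rather than outputs alone, as in pix2pix-style paired translation. All module names, channel counts and sizes below are illustrative assumptions, not the project's actual architecture.

```python
import torch
import torch.nn as nn

class CondGenerator(nn.Module):
    """Toy encoder-decoder: maps a 3-phase mCTA stack to a 1-channel perfusion map."""
    def __init__(self, in_ch=3, out_ch=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, out_ch, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class CondDiscriminator(nn.Module):
    """PatchGAN-style critic over the (input, output) pair, concatenated on channels."""
    def __init__(self, in_ch=3 + 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 1, 4, stride=2, padding=1),  # per-patch real/fake scores
        )

    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=1))

mcta = torch.randn(2, 3, 64, 64)              # batch of toy 3-phase mCTA slices
fake_ctp = CondGenerator()(mcta)              # synthetic perfusion maps, (2, 1, 64, 64)
scores = CondDiscriminator()(mcta, fake_ctp)  # patch scores, (2, 1, 16, 16)
```

Conditioning the discriminator on the input (the channel concatenation above) is what lets it penalize outputs that are plausible-looking but inconsistent with the source mCTA.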
Project description: The student will be involved in the architectural selection/design and implementation of a GAN model for synthesis of CTP-like perfusion maps from mCTA and baseline (non-contrast) CT imaging data, as well as training and validation of the designed model. The student will work closely with our master’s student and other lab members working on the project and will be encouraged to make conceptual contributions to the methodology and design of the study/experiments. The primary objectives will be:
- Conduct a literature review of current/relevant GAN architectures and training schemes for conditional, paired image translation problems (with a focus on medical imaging).
- Propose a model and implement it using PyTorch (it can be adapted from open-source projects), ensuring the model is compatible with our CT imaging data.
- Train the model and perform hyperparameter optimization.
- Work with our master’s student to propose a validation methodology, which may involve predicting other clinical outcomes from generated CTP maps.
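Because the data are paired, training would likely combine an adversarial term with a supervised pixel loss (the λ-weighted L1 term popularized by pix2pix is one common choice). The following is a minimal, illustrative training step with toy tensors; the single-layer stand-ins for G and D, the loss weight, and the learning rates are assumptions for the sake of the sketch, not the project's design.

```python
import torch
import torch.nn as nn

# Toy stand-ins for a conditional generator and discriminator (illustrative only).
G = nn.Conv2d(3, 1, 3, padding=1)                # mCTA (3 phases) -> CTP-like map
D = nn.Conv2d(3 + 1, 1, 4, stride=2, padding=1)  # judges (input, output) pairs

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()
lam = 100.0  # weight on the supervised L1 term, as in pix2pix

mcta = torch.randn(2, 3, 64, 64)  # toy paired training batch
ctp = torch.randn(2, 1, 64, 64)   # corresponding ground-truth perfusion maps

# --- Discriminator step: real pairs vs. generated pairs ---
fake = G(mcta).detach()
d_real = D(torch.cat([mcta, ctp], dim=1))
d_fake = D(torch.cat([mcta, fake], dim=1))
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad()
loss_d.backward()
opt_d.step()

# --- Generator step: fool D while staying close to the paired target ---
fake = G(mcta)
d_fake = D(torch.cat([mcta, fake], dim=1))
loss_g = bce(d_fake, torch.ones_like(d_fake)) + lam * nn.functional.l1_loss(fake, ctp)
opt_g.zero_grad()
loss_g.backward()
opt_g.step()
```

The L1 term is only possible because each mCTA input has a ground-truth CTP map; unpaired formulations (e.g. CycleGAN-style) must substitute cycle-consistency constraints instead.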
Benefits and opportunities for the student:
- Hands-on experience understanding, developing and evaluating GANs using PyTorch.
- Authorship on resulting publications.
- Hands-on experience working with medical imaging data.
- Access to computational resources, through Compute Canada and our own GPU-equipped lab servers.
- Be part of a growing lab group with experienced researchers from diverse backgrounds.
Our lab: The student will be supervised by Dr. Maged Goubran, and will work closely with Lyndon Boone, a master’s student and graduate of the Engineering Science program at U of T. Our lab, located at the Sunnybrook Research Institute, develops novel computational, machine learning & imaging tools to probe, predict and understand neuronal and vascular circuit alterations in neurological disorders, including Alzheimer’s disease, stroke and traumatic brain injury. It consists of a multidisciplinary team of engineers, computer scientists, neuroscientists and software developers who develop and implement tools to quantify imaging markers of brain disease. As such, students will gain not only computational research experience but also broad exposure to clinical and translational research, and will be part of an inclusive and stimulating lab environment.
Eligibility/Requirements:
- Completion of CSC413 (Neural Networks and Deep Learning) or equivalent, or demonstrated proficiency in similar topics (Deep Learning, CNNs, GANs).
- Intending to complete a CSC494/CSC495 project course, or looking for volunteer research experience.
- Programming experience in Python.
- Experience with at least one deep learning framework (e.g. TensorFlow, PyTorch) is a plus.
- Bash/Linux experience is not necessary but will come in handy.
- Able to work 8-10 hours per week.
Contact information: If you are interested in being part of this research project, or if you have any additional questions, please contact our master’s student Lyndon Boone (lyndon.boone@mail.utoronto.ca) with “BrainLab Stroke/AI Research Project” in the subject line.
References:
[1] M. Goyal, B. K. Menon, and C. P. Derdeyn, “Perfusion Imaging in Acute Ischemic Stroke: Let Us Improve the Science before Changing Clinical Practice,” Radiology, vol. 266, no. 1, pp. 16–21, Jan. 2013, doi: 10.1148/radiol.12112134.
[2] A. Vagal et al., “Automated CT perfusion imaging for acute ischemic stroke: Pearls and pitfalls for real-world use,” Neurology, Oct. 2019, doi: 10.1212/WNL.0000000000008481.
[3] B. K. Menon et al., “Multiphase CT Angiography: A New Tool for the Imaging Triage of Patients with Acute Ischemic Stroke,” Radiology, vol. 275, no. 2, pp. 510–520, Jan. 2015, doi: 10.1148/radiol.15142256.
[4] X. Yi, E. Walia, and P. Babyn, “Generative adversarial network in medical imaging: A review,” Medical Image Analysis, vol. 58, p. 101552, Dec. 2019, doi: 10.1016/j.media.2019.101552.