Prof. Daphna Buchsbaum’s lab (in Psychology) is looking for someone with software development experience — especially with Android devices — and an interest in machine learning and computer vision, to build novel applications to run experiments on mobile devices.
Job Description
The candidate will be responsible for improving and testing an existing app that uses computer vision to recognize the number, location, and features of rigid objects from pre-recorded video data, and for turning it into a full-fledged, high-performance tablet-based application that recognizes objects from a fixed camera in real time, on a mobile device, and can be used in experiments with children. The current version is a proof of concept developed on a desktop; we would like the student to help port it to a mobile platform and improve its accuracy and real-time performance.
Other Responsibilities May Include
- Assisting with the development of web and computer-based experiments on causal learning.
- Using Qualtrics, Inquisit, or PsiTurk software to help design online questionnaires and interactive studies presented via Amazon’s Mechanical Turk.
- Developing both the experiment web interface and the underlying software for displaying stimuli.
- Building database-backed systems for managing experimental data.
- Writing scripts to preprocess and clean data.
Motivated students, particularly those with a background in machine learning, statistics, or Bayesian modeling, may be given the opportunity to help develop computational models of cognition relevant to these experiments.
Desirable Skills and Experience
- Previous Android app development experience is essential.
- Previous computer vision and/or machine learning experience is highly desirable.
- Web development experience is highly desirable (please specify platform and languages in your application).
- Other programming experience, especially with Python, Matlab, or R, is a plus.
How to Apply
This opportunity is open both as a paid work-study position and as an independent studies project for course credit (there is enough work for more than one person). If you are interested, please reach out directly to Prof. Buchsbaum (buchsbaum@psych.utoronto.ca) and CC her lab manager (manager.buchsbaum@gmail.com).
Lab Overview
We live in a world rich with causal structure. Events do not just occur randomly around us; they result from causal relationships: rain falling makes the ground slippery, flipping a switch makes the light turn on, turning a doorknob makes the door open. From learning to flip a light switch to using a remote control, a major challenge children face as they grow up is uncovering the world’s causal structure, including understanding the causes and consequences of other people’s behaviour. How do children learn these kinds of causal relationships, especially when the world presents them with sparse, ambiguous data or with multiple, conflicting sources of evidence? Are these sophisticated abilities unique to humans, or are they shared with other animals?
Our lab aims to answer these questions, using experimental and computational techniques to understand children’s causal and social reasoning abilities. By focusing on social and causal learning, we can address one of the core questions of cognition: How do humans construct sophisticated representations from relatively simple percepts, and how do these cognitive abilities develop?