Prof. Daphna Buchsbaum’s lab (in Psychology) is looking for someone with software development experience — especially with Android devices — and an interest in machine learning and computer vision, to build a novel application to run experiments on mobile devices. We are looking for 1-2 students, with both work-study and independent study options available.
Job Description
The candidate(s) will be responsible for improving and testing an existing app that uses computer vision to recognize the number, location, and features of rigid objects in pre-recorded video, and for turning it into a full-fledged, high-performance tablet application that recognizes objects from a fixed camera in real time on a mobile device and can be used in experiments with children. The current version can detect objects and has an initial Android user interface, but it needs performance enhancements, additional features, and testing.
Candidates may choose to focus on specific aspects of the system, including:
- Tablet-based machine vision, based on a combination of deep learning and task-specific algorithms (see the sketch after this list).
- Android UI design and implementation.
- System integration and testing, with an HCI element.
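
To give a flavour of the machine-vision component, here is a minimal sketch of counting and locating rigid objects in pre-recorded video. It is not the lab's actual pipeline: the library (OpenCV), the file name, and the area threshold are all assumptions for illustration, and a deep-learning detector could replace the background-subtraction stage.

```python
# Minimal illustrative sketch (not the lab's pipeline): count and locate
# rigid objects seen by a fixed camera using background subtraction and
# contour detection with OpenCV.
import cv2

VIDEO_PATH = "session.mp4"   # hypothetical input file
MIN_AREA = 500               # placeholder minimum object size, in pixels

cap = cv2.VideoCapture(VIDEO_PATH)
subtractor = cv2.createBackgroundSubtractorMOG2()

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    # Remove shadow pixels (MOG2 marks them as 127) and extract object outlines.
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > MIN_AREA]
    print(f"{len(boxes)} objects at {boxes}")

cap.release()
```

The real system would run a similar detection loop on-device against a live camera feed, which is where the performance and integration work described above comes in.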
Other Responsibilities May Include
- Assisting with the development of web and computer-based experiments on causal learning.
- Using JavaScript, or software such as Qualtrics, Inquisit, or PsiTurk, to help design online questionnaires and interactive studies presented via Amazon’s Mechanical Turk.
- Developing both the experiment web interface and the underlying software for displaying stimuli.
- Building database-backed systems for managing experimental data.
- Writing scripts to preprocess and clean data (see the sketch after this list).
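
As an example of the data-cleaning work, here is a minimal sketch of tidying trial-level data exported from an online study. The file name, column names, and exclusion thresholds are hypothetical placeholders, not the lab's actual conventions.

```python
# Minimal sketch (hypothetical file and column names): tidy trial-level data
# exported from an online study before analysis.
import pandas as pd

raw = pd.read_csv("raw_trials.csv")  # assumed export from the study platform

clean = (
    raw.rename(columns=str.lower)
       .dropna(subset=["participant_id", "response"])              # drop incomplete trials
       .assign(rt_ms=lambda d: pd.to_numeric(d["rt_ms"], errors="coerce"))
)
# Exclude implausibly fast or slow responses (thresholds are placeholders).
clean = clean[clean["rt_ms"].between(200, 10000)]
clean.to_csv("clean_trials.csv", index=False)
```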
Motivated students, particularly those with a background in machine learning, statistics, or Bayesian modeling, may be given the opportunity to help develop computational models of cognition relevant to these experiments.
Desirable Skills and Experience
- Previous Android app development experience is essential.
- Previous computer vision and/or machine learning experience is highly desirable.
- Web development experience is highly desirable (please specify platforms and languages in your application).
- Other programming experience, especially with Python, Matlab, or R, is a plus.
How to Apply
This opportunity is open both as a paid work-study position and as an independent study project for course credit (there is enough work for more than one person). If you are interested, please reach out directly to Prof. Buchsbaum (buchsbaum@psych.utoronto.ca) and CC her lab manager (manager.buchsbaum@gmail.com).
Lab Overview
We live in a world rich with causal structure. Events do not just occur randomly around us; they result from causal relationships — rain falling makes the ground slippery, flipping a switch makes the light turn on, turning a doorknob makes the door open. From learning to flip a light switch to using a remote control, as children grow up a major challenge they face is uncovering the world’s causal structure, including understanding the causes and consequences of other people’s behaviour. How do children learn these kinds of causal relationships, especially when the world presents them with sparse, ambiguous data or with multiple, conflicting sources of evidence? Are these sophisticated abilities unique to humans, or are they shared with other animals?
Our lab aims to answer these questions, using experimental and computational techniques to understand children’s causal and social reasoning abilities. By focusing on social and causal learning, we can address one of the core questions of cognition: How do humans construct sophisticated representations from relatively simple percepts, and how do these cognitive abilities develop?