How do you communicate with a robot using visual gestures? How can a computer recognize, from video, which chord a guitarist is playing? How can it decide how many cells there are in a microscopic image of an embryo? These tasks are trivial for a suitably trained human, but we don't really understand what goes on in the brain that makes them so easy to solve. Part of what I do is design algorithms that produce the expected output for a visual task. The other part is trying to reverse-engineer visual intelligence: to figure out how pixels with colors turn into structures that have cognitive meaning (say, a circle, a cell, or a cat).

At CGSB my work is focused on tracking cell boundaries and lineages in time-lapse bright-field recordings of early mouse embryos, in association with an RNAi-based screen to catalog early developmental phenotypes. Applying this work to human embryos in the IVF clinic is geared toward developing automated methods to assist in predicting developmental potential.
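To give a flavor of the cell-counting problem in its very simplest form: once an image is thresholded into foreground and background, counting distinct objects reduces to connected-component labeling. The sketch below uses a tiny synthetic image and SciPy's `ndimage.label`; it is an illustrative toy, not the actual embryo-tracking pipeline, where bright-field images are far noisier and cells touch and divide.

```python
import numpy as np
from scipy import ndimage

# Tiny synthetic "microscope image": two bright blobs on a dark background.
img = np.zeros((20, 20))
img[2:6, 2:6] = 1.0        # first blob
img[10:15, 12:18] = 1.0    # second blob, disjoint from the first

mask = img > 0.5                       # threshold into a binary foreground mask
labels, n_cells = ndimage.label(mask)  # label connected components
print(n_cells)  # → 2
```

Real recordings require much more: denoising, boundary detection where contrast is weak, and linking labels across frames to build lineages, which is where the harder algorithmic work lies.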