Susanne Rafelski and her colleagues had a deceptively simple goal. “We wanted to be able to label many different structures in the cell, but do live imaging,” says the quantitative cell biologist and deputy director of the Allen Institute for Cell Science in Seattle, Washington. “And we wanted to do it in 3D.”
That kind of goal normally relies on fluorescence microscopy — problematic in this case because, with only a handful of colours to use, the scientists would run out of labels well before they ran out of structures. The fluorescent reagents are also pricey and laborious to use. And the stains are harmful to live cells, as is the light used to stimulate them, meaning that the very act of imaging cells can damage them. “Fluorescence is expensive, in many different versions of the word ‘expensive’,” says Forrest Collman, a microscopist at the Allen Institute for Brain Science, also in Seattle. When Collman and his colleagues tried to make a 3D time-lapse movie using three different colours, the results were “horrific”, Collman recalls. “The cells all just die in front of you.”
Imaging cells using transmitted white light (bright-field microscopy) doesn’t rely on labelling, so it avoids some of the problems of fluorescence microscopy. But the low contrast can make most cell structures impossible to spot. What Rafelski’s team needed was a way to combine the advantages of both techniques. Could artificial intelligence (AI) be used on bright-field images to predict how the corresponding fluorescence labels would look — a type of ‘virtual staining’? In 2017, Rafelski’s then-colleague, machine-learning scientist Gregory Johnson, proposed just such a solution: he would use a form of AI called deep learning to identify hard-to-spot structures in bright-field images of unlabelled cells.
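At its core, virtual staining is an image-to-image regression problem: a model is trained on paired bright-field and fluorescence images, then asked to predict the fluorescence channel from bright-field input alone. The sketch below is a deliberately minimal, hypothetical illustration of that idea using synthetic data and a per-pixel linear model fitted by gradient descent — real systems such as the Allen Institute’s use deep convolutional networks on 3D image stacks, not anything this simple.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training pair (purely illustrative, not real microscopy data):
# "structures" show up as darker blobs in the bright-field image...
structure = (rng.random((64, 64)) > 0.8).astype(float)
bright_field = 1.0 - 0.5 * structure + 0.05 * rng.standard_normal((64, 64))
# ...and the same structures glow in the fluorescence target.
fluorescence = structure

# Fit a per-pixel linear model pred = w * x + b by gradient descent
# on the mean squared error between prediction and target.
w, b = 0.0, 0.0
lr = 0.5
for _ in range(500):
    pred = w * bright_field + b
    err = pred - fluorescence
    w -= lr * float(np.mean(err * bright_field))
    b -= lr * float(np.mean(err))

# Darker pixels predict brighter fluorescence, so the learned weight
# is negative and the reconstruction error is small.
prediction = w * bright_field + b
mse = float(np.mean((prediction - fluorescence) ** 2))
```

Even this toy version captures the appeal of the approach: once trained, the model produces a label-like image from plain transmitted-light input, with no dyes and no phototoxic excitation light.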