
A version of this story appeared in STAT Health Tech, our weekly newsletter about how tech is transforming health care and the life sciences.

You’re looking at two versions of the same video of a moment in a single cell, captured under a powerful microscope. The red and yellow structures are mitochondria, and the magnified inset in the bottom left-hand corner of each view shows a mitochondrion dividing.

The view on the left shows the raw data as it came off the microscope; you might think of it like a social media influencer’s first take, before any filters have been applied to get that Instagram-ready look. And the view on the right? That’s the same data after scientists processed it with software powered by deep learning, an artificial intelligence technique that’s swiftly gaining traction in biomedical science.


The result? While Instagram presets often distort reality, the new image restoration technique demonstrated on the right comes closer to replicating it — offering scientists a higher-resolution, less blurry, less noisy view into the cell.

To learn more, STAT chatted with Uri Manor, an imaging scientist at the Salk Institute for Biological Studies in San Diego. Manor led the team behind the new research, which is documented in a paper recently posted to a preprint server.


What issue did you set out to address?

When scientists want to get a high-resolution view of mitochondria dividing, they often shine a powerful laser upon the cell. But that laser power causes cells to become stressed, and mitochondria divide in response to that stress — a phenomenon known as phototoxicity-induced fission.

That’s a problem, because scientists want to see how mitochondria divide under normal conditions, not ones created by the research itself. Scientists have long tried to get around that by imaging at much lower laser power, but the resulting drop in image quality creates its own problems: it makes it harder to detect the fission, or to see it in as much detail.

Manor said he and his team wanted “the best of both worlds” — low phototoxicity and high image quality. So they set to work developing a deep learning method that would do that.

How did you build your deep learning system?

Manor said he and his team acquired reams of images at high pixel resolution and low noise — and then degraded them, decreasing their pixel resolution and increasing their noise, to simulate what the same structures would look like when imaged quickly at low laser power.
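The paper’s exact degradation pipeline isn’t spelled out here, but a minimal sketch of the general recipe (block-average to mimic scanning fewer pixels, then add shot and read noise to mimic low laser power) might look like the following. The scale factor, photon count, and noise level are illustrative assumptions, not values from the study:

```python
import numpy as np

def degrade(image: np.ndarray, scale: int = 4, read_sigma: float = 0.05,
            photons: float = 50.0, rng=None) -> np.ndarray:
    """Simulate a fast, low-light acquisition from a high-quality 2D image.

    Downsamples by block-averaging (fewer scanned pixels) and adds Poisson
    shot noise plus Gaussian read noise. All parameters are illustrative
    placeholders, not the values used in the paper. Input is assumed to be
    a single-channel array with intensities in [0, 1].
    """
    rng = rng or np.random.default_rng()
    h, w = image.shape
    # Block-average to mimic scanning fewer pixels.
    low = image[: h - h % scale, : w - w % scale]
    low = low.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))
    # Poisson shot noise: fewer photons per pixel at low laser power.
    noisy = rng.poisson(low * photons) / photons
    # Additive Gaussian read noise from the detector.
    noisy = noisy + rng.normal(0.0, read_sigma, noisy.shape)
    return noisy.clip(0.0, 1.0).astype(np.float32)

# Each training pair: (degraded input, original high-quality target).
clean = np.random.rand(512, 512).astype(np.float32)  # stand-in for a real image
pair = (degrade(clean), clean)
```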

The next step, Manor said, was to use those images to train a deep neural network to convert the low-resolution, high-noise versions back into high-resolution, low-noise images.

That gave Manor and his team a way to “scan fewer pixels and then use deep learning to fill in additional pixels and give us the resolution that we need,” he said. The result is demonstrated in the view on the right.
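As a rough illustration of how such a model might be trained and then applied, here is a hedged PyTorch sketch of a toy network that denoises and upsamples in a single pass. The architecture, loss, and hyperparameters are assumptions made for the sketch, not the team’s actual design:

```python
import torch
import torch.nn as nn

class RestoreNet(nn.Module):
    """Toy restoration network: denoise and 4x-upsample in one forward pass.

    A stand-in for the kind of model described, not the authors'
    architecture. Input and output are single-channel image tensors.
    """
    def __init__(self, scale: int = 4, width: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            # PixelShuffle trades channels for spatial resolution (x scale).
            nn.Conv2d(width, scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, x):
        return self.body(x)

model = RestoreNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# One illustrative training step on random stand-ins for (degraded, clean) pairs.
low = torch.rand(8, 1, 128, 128)    # simulated low-res, noisy inputs
high = torch.rand(8, 1, 512, 512)   # matching high-quality targets
opt.zero_grad()
loss = loss_fn(model(low), high)
loss.backward()
opt.step()

# At imaging time: scan fewer pixels, then let the model fill them in.
with torch.no_grad():
    restored = model(low[:1])       # shape (1, 1, 512, 512)
```

A pixel-shuffle head is one common way to let a convolutional network output more pixels than it receives, which matches the idea of scanning fewer pixels and letting deep learning fill in the rest.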

What is it exactly about the view on the right that’s so much better than the one on the left?

In the view on the right, Manor said, “you can more clearly see the structures that we’re interested in. On the left there’s a lot of noise, and it’s hard to see what’s happening. … A lot of the structures are blending in with each other, or blending in with the noise, so it’s hard to tell, for example, when a mitochondrion divides.”

The view on the right also offers scientists what Manor calls “a sanity check.” That’s because clearly seeing the fission on the right makes it easier to see the same thing in the grainier view on the left. “With any image restoration technique, it’s nice to have a sanity check to make sure that you’re not looking at something that’s totally made up,” Manor said.

How does your work compare to what other scientists are doing?

Manor pointed to other groups at the Max Planck Institute of Molecular Cell Biology and Genetics and the Chan Zuckerberg Biohub that are also using deep learning to remove noise from their microscope images. Another group, at UCLA, is trying to use the technique to enhance resolution.

A key contribution from Manor’s team, he said, was that its model was able to perform multiple operations on an image. “So it’s performing all of these operations at the same time — all of which are necessary to be able to do what we want to do, which is to be able to image in our microscope using far fewer pixels so that we can image faster and with less phototoxicity,” he said.

Why is it so important to visualize mitochondria clearly?

Mitochondria divide or fuse in the cell throughout the cell cycle and during differentiation processes — and how and when that happens plays an important role in cancer, metabolism, and neurodegeneration, among other areas. That’s why there’s a flurry of research seeking to understand how mitochondrial dynamics change during certain diseases or normal developmental processes.

“Our hope is that [the new deep-learning] method will allow us to be able to image these processes with fewer phototoxicity-induced artifacts and with higher precision,” Manor said.

What’s the next step in your research?

Manor said he and his team want to see how images restored by their system fare with existing software designed to detect structures and events in microscope images. The goal, Manor said, will be “to see if computers can actually detect structures after being processed with our software, thereby increasing the throughput and the accuracy of automated analysis methods.”
