Facebook’s project could make MRIs more efficient.
Researchers at Facebook’s AI Research lab (FAIR) are working to make MRI machines operate faster by reducing the amount of data they have to gather to compose an image, the company announced Monday, Aug. 20 in a blog post.
The project, a collaboration with the New York University School of Medicine, will use 3 million MRI images collected by NYU that have been stripped of patient names and other identifying information, according to Facebook. The dataset will also be released publicly, so other researchers can tackle the same problems.
Here’s how Facebook thinks its technique will work: Current MRI machines take anywhere from 15 minutes to more than an hour per scan because they’re capturing an immense amount of data. If the machines could work from less data, they would be able to scan and process that information more quickly. The artificial intelligence would examine the incomplete data and generate synthetic data, faithful to the real world, to fill in the gaps. The goal is to make MRI scans 10 times faster.
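To get an intuition for why gathering less data makes reconstruction harder, here’s a minimal sketch in NumPy (not Facebook’s actual method; the sizes and sampling pattern are arbitrary illustrations). MRI scanners sample in the frequency domain, known as k-space; keeping only a fraction of k-space rows shortens the scan, but a naive zero-filled reconstruction is visibly degraded. Those degraded gaps are what a trained model would learn to fill in:

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((64, 64))        # stand-in for a fully sampled MRI slice

# The scanner measures k-space: the 2D Fourier transform of the image.
kspace = np.fft.fft2(image)

# Keep only every 4th row of k-space, simulating a 4x shorter scan.
mask = np.zeros_like(kspace)
mask[::4, :] = 1
undersampled = kspace * mask

# Naive "zero-filled" reconstruction: invert the transform with the
# missing rows left as zeros. The result is aliased and dimmed.
naive = np.abs(np.fft.ifft2(undersampled))

naive_err = np.mean(np.abs(naive - image))
print(f"mean reconstruction error with 25% of the data: {naive_err:.3f}")
```

A model trained on pairs of full and undersampled scans would replace the zero-filling step with a learned reconstruction that estimates the missing measurements.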
It’s not as crazy as it seems, either. There’s a great deal of existing research in which AI is trained to fill in the blanks of traditional photography, and Facebook employs some of the world’s top experts in visual AI. For instance, in a technique called super-resolution, AI sharpens a blurry image by drawing on images of similar objects it has seen in the past. Other researchers are working on reconstructing partially hidden human faces, which is great for fixing those images where wind blows hair in your face—and for state surveillance. Facebook itself has developed AI that can generate realistic replacement eyes and edit them into a photo in which a person has blinked.
Of course, caution is warranted when taking this research into the real world. The synthetic imagery generated in research labs isn’t perfect yet, and artificial intelligence is prone to catastrophic failure when it encounters something it wasn’t trained to deal with. This typically happens because there isn’t enough data for the AI to learn from properly. In facial recognition, for instance, people of color are typically underrepresented in datasets, so AI has difficulty even categorizing them as human. In medicine, such “biased” datasets could mean the difference between life and death if they aren’t accounted for.
This isn’t Facebook’s first entry into the medical space. CNBC reported earlier this year that the company’s hardware research lab was looking to couple anonymized user data from hospitals with user data Facebook had collected. Facebook would reportedly have used this to tell users when they showed signs of needing medical attention, but Facebook commented that the project never left the planning stage. Facebook has also used AI algorithms to predict when users might intend to kill themselves, so the social media company can offer help.
The U.S. spent nearly $3.5 trillion on health care in 2017, and that number is only expected to rise. Every major tech company from Google and Amazon to IBM and Apple is looking to get a cut of that cash, especially as the U.S. health care system is seen as outdated and inefficient. If Facebook can build a health product that doesn’t freak people out, it could be a welcome addition to its slowing revenue growth.
There’s no timeline for how long the collaboration will last, but Facebook writes that the research could potentially extend to other kinds of medical imaging, like CT scans.