A Game-Changing AI Tool for Tracking Animal Movements


Scientists are already using it to study octopuses, electric fish, surgical robots, and racehorses.

In a video, a rodent reaches out and grabs a morsel of food, while small, colored dots highlight the positions of its knuckles. In another clip, a racehorse gallops along a track; again, small, colored dots track the positions of its body parts. In a third video, two human dancers circle around each other as those same dots unfailingly follow the sometimes fluid, sometimes jerky movements of their limbs.

These videos are showcases for DeepLabCut, a tool that can automatically track and label the body parts of moving animals. Developed this year by Mackenzie Mathis and Alexander Mathis, a pair of married neuroscientists, DeepLabCut is remarkable in its simplicity. It lets researchers take any video, even one downloaded from the internet, and digitally label specific body parts in just a few dozen frames. The tool then learns how to pick out those same features in the rest of the video, or others like it. And it works across species, from laboratory stalwarts like flies and mice to … more unusual animals.
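
For readers curious what that workflow actually looks like, the freely released deeplabcut Python package wraps it in a handful of function calls. The sketch below is illustrative only: the project name, experimenter name, and video path are placeholders, and exact arguments may differ between package versions.

```python
import deeplabcut

# Create a project around a video; DeepLabCut keeps all settings in a config file.
config = deeplabcut.create_new_project(
    "reach-task", "experimenter", ["videos/mouse_reach.mp4"], copy_videos=True
)

# Pull out a small set of frames and hand-label the body parts of interest.
deeplabcut.extract_frames(config)
deeplabcut.label_frames(config)

# Train on those few labeled frames, then run the trained network over whole videos.
deeplabcut.create_training_dataset(config)
deeplabcut.train_network(config)
deeplabcut.analyze_videos(config, ["videos/mouse_reach.mp4"])
deeplabcut.create_labeled_video(config, ["videos/mouse_reach.mp4"])
```

The result is a set of per-frame coordinates for each labeled body part, which is what the colored dots in the videos above are drawn from.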

Here’s one striking example. This video, shot in Costa Rica, shows a lichen katydid, an insect whose white protrusions perfectly camouflage its body against the white lichen on which it walks. DeepLabCut sees through the insect’s ruse, successfully labeling its feet, joints, and antennae. “I think this method is going to revolutionize behavioral science, including neuroscience and psychology,” says Venkatesh Murthy from Harvard University, who works with the Mathises.

A lot of research in those fields hinges on understanding what humans and other animals are doing by parsing actions that have been recorded on film. James Bonaiuto from the French National Center for Scientific Research, for instance, studies how the limb movements of tool-wielding people relate to the patterns of neural activity in their brains. “Many studies involve an army of graduate students painstakingly coding videos of behavior frame by frame,” he says. By automating that laborious work, DeepLabCut makes such studies much faster and more accurate.

“I’ve used commercial and academic video-tracking software, and even written my own. DeepLabCut surpasses all of them by a large margin,” adds Andres Bendesky at Columbia University, who is using the algorithm to study the fighting behavior of betta fish. “There’s been a need for software like it for a long time, and I expect it to be the standard in the field for a while.”

Ilana Nisky from Ben-Gurion University of the Negev is using DeepLabCut to analyze how surgeons wield their needles, to better program robots that can assist in surgeries. “It works impressively well,” she says. “It can help us track the tip and the tail of the needle very well, even though the side that is facing the camera changes throughout the trial.”

DeepLabCut was born of necessity. Alexander was trying to understand the neuroscience of smell by watching mice as they track trails of scent. To do that, he needed a simple way of marking the very tip of their snouts on a video. “But the classic methods neuroscientists use all failed,” he says. Researchers often dab paint or glue reflective dots onto body parts of interest, but neither approach works on something as small and sensitive as a mouse’s nose. “After years of frustration trying commercial software, and other people’s solutions, nothing worked,” Alexander says. “We took a step back from doing experiments for a year to try and solve this problem.”

At its core, DeepLabCut is a modified version of DeeperCut, a neural network created by other researchers to detect and label human poses in videos. Such networks are very good at what they do, but you must first train them by showing them thousands of hand-labeled frames. And if you want them to label a different species, moving in a different way, you need to repeat this training step all over again.

The Mathises got around this cumbersome requirement by first pretraining their network on ImageNet, a massive online database of images. This step effectively teaches the network how to look at the world and gives it a basic visual system: it can distinguish a strawberry from an airplane, or a cat from a dog.

After that, it’s much easier to teach the network to recognize something far more specific, like a rat’s paw, a betta fish’s fins, or a katydid’s legs. Instead of thousands of hand-labeled frames, you can get away with a few hundred, or even a few dozen. “We’re asking it, ‘You can see the world. Now, we want you to find these body parts, and we’ll give you a few frames [to start].’ And it can do it,” Mackenzie says.
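The underlying idea, starting from an ImageNet-pretrained network and fine-tuning it on a small number of hand-labeled examples, is a standard transfer-learning recipe. The sketch below illustrates that idea in generic PyTorch; it is not the Mathises’ code, and the ResNet backbone, keypoint count, and stand-in data are assumptions made for the example.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from a network pretrained on ImageNet, so it already "knows how to see."
net = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)

# Swap the 1,000-class ImageNet head for one that regresses keypoint coordinates
# (here, hypothetically, an (x, y) pair for each of four body parts).
num_bodyparts = 4
net.fc = nn.Linear(net.fc.in_features, num_bodyparts * 2)

# Stand-ins for a few dozen hand-labeled frames and their clicked coordinates.
frames = torch.randn(32, 3, 224, 224)
labels = torch.rand(32, num_bodyparts * 2)

# Fine-tune briefly on the small labeled set, instead of training from scratch
# on thousands of frames.
optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()
for _ in range(10):
    preds = net(frames)
    loss = loss_fn(preds, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Because the pretrained backbone already encodes general visual features, only a small amount of task-specific labeling is needed to adapt it, which is what lets a few dozen frames go such a long way.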

This combination of versatility and reliability is unique. In past research, Avner Wallach of Columbia University relied on complicated algorithms that were specifically designed to track the positions of individual rodent whiskers. These trackers did nothing else, and they still produced enough errors that a couple of students had to regularly check the results. “Having a versatile, one-size-fits-all algorithm can definitely save a lot of work for many labs around the world,” says Wallach, who is now using DeepLabCut to track the movements of the Peters’ elephantnose fish, an African fish that senses the world by producing its own electric fields.

DeepLabCut is freely available, and researchers around the world have already been making good use of it—in ways that the Mathises couldn’t have foreseen. “We were contacted by a company that wants to predict if a particular horse will be a good racehorse,” Alexander adds.

“I get blind emails from people who are using it on videos of primates moving freely in the lab, people tracking zebras on safaris, people looking at complicated animals like octopuses and electric eels, people looking at surgical robots,” says Mackenzie. “Sports people are very interested in tracking pitching performance among baseball players. You could take your iPhone, film your kid playing, and go home and analyze their performance together.”