San Diego Zoo’s Jenna Stacy-Dawes knows all too well the urgency of her research. Reticulated giraffes across the regions of northern Kenya that she studies have declined by up to 70 percent in the past thirty years. Across Africa, giraffe numbers have shrunk by 40 percent in the same period, to fewer than 100,000 individuals. Biologists are rushing to assess their numbers, movements, and preferred habitat to ensure protection of those areas. But the traditional way of counting giraffes, using aerial surveys, costs time and money, both of which are in short supply in the giraffe world.
Enter Wildbook, a software program developed by Portland-based conservation tech nonprofit Wild Me, which automatically identifies individual animals by their unique coat patterns or other hallmark features, such as fluke or ear outlines. With the help of Wildbook and the nonprofit Giraffe Conservation Foundation, Stacy-Dawes, a research coordinator at the zoo's Institute for Conservation Research, and her colleagues are able to blitz a giraffe population with photos over two days, upload the images and location data to their GiraffeSpotter database, and presto: a robust population assessment emerges. So far they’ve used Wildbook to assess giraffe numbers across three wildlife conservancies in northern Kenya.
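Once individual giraffes are identified across the two survey days, a population estimate can follow from standard mark-recapture statistics: animals photographed on day one are the "marked" set, and the overlap with day two's sightings implies the total. A minimal, hypothetical sketch of the simplest such estimator (Lincoln-Petersen); the data and ID labels here are invented, and GiraffeSpotter's actual models are more sophisticated:

```python
# Hypothetical Lincoln-Petersen estimate from two days of photo-IDs.
# If N is the true population, m animals are seen on day one, n on day
# two, and r appear on both days, then N is estimated as m * n / r.

def lincoln_petersen(day1_ids, day2_ids):
    """Estimate population size from two photo-ID survey days."""
    marked = set(day1_ids)
    resighted = set(day2_ids)
    recaptures = marked & resighted
    if not recaptures:
        raise ValueError("no recaptures; estimate undefined")
    return len(marked) * len(resighted) / len(recaptures)

day1 = ["g01", "g02", "g03", "g04"]   # giraffes identified on day one
day2 = ["g03", "g04", "g05", "g06"]   # day two; g03 and g04 resighted
print(lincoln_petersen(day1, day2))   # → 8.0
```

The estimator only works because photo-ID software can tell whether two photos show the same animal, which is exactly the matching step Wildbook automates.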
"Before a population assessment wasn't something you could do in a weekend. It's incredible," Stacy-Dawes says. "It's been really helpful in allowing us to work faster and understand the population better than we ever really could have before." (Learn more: How to save the world’s tallest animal.)
By year’s end GiraffeSpotter will be publicly accessible so that everybody from park rangers to tourists on safari can upload their giraffe photos and location information to the online database. "We can increase our study efforts 10-fold by having tourists and citizen scientists be able to contribute to our research. It provides us with a huge data set that wouldn't have been accessible before," she says.
Welcome to the world of artificial intelligence in service of conservation, where some of the hardest-working additions to a research team aren't the lead scientists or interns but tireless computers. Just as artificial intelligence (AI) powers Amazon's Alexa, Gmail's spam filter, and Facebook's friend suggestions, it’s now being used to help the animal world. AI is well on the road to completing tasks typically done manually by researchers, from identifying individual animals in photos for population studies to categorizing the many millions of camera trap photos gathered by field scientists. Thanks to advances in computing power and machine learning, computers can now learn on their own from banks of data.
"There's a perfect storm of AI and camera trap technology in terms of understanding animals from images," says Robert Long, a conservation biologist at Seattle's Woodland Park Zoo, who has been collaborating with Microsoft to develop AI tools to help monitor rare carnivores in the Pacific Northwest using camera traps. "I think it's literally a revolution underway in terms of auto-identification of animals, whether it's from still cameras or video.”
Microsoft’s AI for Earth, which Long is working with and which the National Geographic Society helps fund, has company in Google’s Wildlife Insights. Both are collaborating with researchers to roll out the automation of camera trap photo analysis. Microsoft’s program, a five-year, $50 million project that gives researchers access to AI tools for solving environmental challenges, is also working with Wildbook to lower the cost of new Wildbook databases. The goal is to move from start-up costs as high as tens of thousands of dollars down to an annual operating cost of $1,000.
Already Wildbook runs databases for 20 species, ranging from jaguars to zebras, but there are thousands of other species that computers could be trained to identify. Each Wildbook is typically used collaboratively by multiple organizations and research labs, with individual researchers owning their own data within the Wildbook database.
Wildbook's latest innovation is an "intelligent agent," or bot, that combs through YouTube every night to extract new whale shark videos, often uploaded by tourists and divers sharing their vacation footage. The intelligent agent locates and extracts still images of the whale shark from the video clip, so that the bot can analyze the shark’s unique constellation of spots and identify it. The bot also collects the date and location of the sighting (or solicits it from the video's uploader in the comments), then submits the data to whaleshark.org's database, a Wildbook for whale sharks that catalogues individuals using this computer-driven photo identification.
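The identification step hinges on treating a shark's spots as a constellation: a new sighting matches a catalogued individual when enough spots line up. This is a deliberately toy illustration of that idea, not Wild Me's actual algorithm (Wildbook's real matcher is far more robust to pose and scale); all names, coordinates, and the tolerance value are hypothetical:

```python
# Toy spot-constellation matching: each shark is a list of normalized
# spot coordinates; a sighting is matched to the catalogued pattern
# that shares the most spots within a small tolerance.

def match_score(sighting, catalogued, tol=0.05):
    """Count sighting spots that land near some catalogued spot."""
    return sum(
        1 for sx, sy in sighting
        if any(abs(sx - cx) <= tol and abs(sy - cy) <= tol
               for cx, cy in catalogued)
    )

def identify(sighting, catalogue):
    """Return the best-matching individual's ID and its score."""
    best_id = max(catalogue, key=lambda k: match_score(sighting, catalogue[k]))
    return best_id, match_score(sighting, catalogue[best_id])

catalogue = {
    "shark-001": [(0.10, 0.20), (0.30, 0.25), (0.50, 0.60)],
    "shark-002": [(0.15, 0.80), (0.70, 0.40), (0.90, 0.10)],
}
new_sighting = [(0.11, 0.21), (0.31, 0.24), (0.52, 0.61)]
print(identify(new_sighting, catalogue))  # → ('shark-001', 3)
```

In practice the hard part is everything upstream of this comparison: finding the shark in a shaky vacation video, cropping and normalizing the flank, and extracting the spots, which is where the computer vision happens.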
"We've been kind of stunned by how well the intelligent agent is doing its job and how much faster it's collecting data than a traditional human researcher," says Jason Holmberg, executive director of Wild Me. The intelligent agent currently works in five languages and averages about 30 video analyses a day.
“Letting this thing loose on YouTube, especially with migratory species that are out in the ocean, you get this chance of finding outlier sightings of animals where researchers just simply aren’t going,” says Jon Van Oast, a senior engineer at Wildbook and the brain behind the intelligent agent. “It goes places where the researchers just can’t for logistics reasons and funding.”
Since the intelligent agent began work in May 2017, it has located a total of 1,900 whale shark videos, and it’s continually improving its performance. In the past 30 days, it has logged over 500 encounters—twice as many as the most productive human spotters. "It's essentially a flood of free data that researchers are missing," says Holmberg. The data can be used by researchers to create population models, assess whether marine protected areas are boosting whale shark numbers, or identify new whale shark hot spots for conservation.
Population data collected through whaleshark.org was instrumental in informing the 2016 decision to change the whale shark’s status from “vulnerable” to “endangered” on the IUCN Red List of Threatened Species, a global database that tracks and assigns conservation statuses. Meanwhile, the same population data can help inform the creation and management of marine protected areas to help increase whale sharks’ numbers, which have halved over the past 75 years.
Wildbook is currently rolling out an additional intelligent agent to spot and log individual green sea turtles and hawksbill sea turtles in YouTube videos. By month's end, they plan to have the agent working on identifying giant manta rays, humpback whales, and giraffes from YouTube.
"We very much envision the ability for researchers to not be spending years curating their data but moving to a world of continuous monitoring where we can react and pivot to population numbers very rapidly,” says Holmberg.
How smart computers handle bad photos
Many of the digital images uploaded to Wildbook databases are taken by people, but user-submitted photos aren’t the only fair game for computer-vision-aided animal identification. One increasingly invaluable field tool for monitoring where animals occur, and in what numbers, is the camera trap: a remote camera triggered by a motion or infrared sensor to photograph passing wildlife.
But camera trap images are often blurry, out of focus, or taken in low light. And the sheer volume of images taken by digital camera traps can number in the tens or even hundreds of thousands, often creating a bottleneck back at the lab, where someone has to manually view and log each image.
“There’s this community of people who are using images for important work, and they are extremely limited by the time required for humans to annotate all of those images, so the problem is really teed up nicely for AI to tremendously accelerate what conservation biologists do,” says Dan Morris, a principal researcher in the Microsoft AI for Earth program who studies computer vision image processing for conservation. He says that AI technology is almost good enough to be applied to camera trap photos, but it will likely have some limitations out of the gate, given how widely photo quality in camera trap images varies, from blurry nighttime images to out-of-focus shots.
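One of the simplest ways AI relieves that annotation bottleneck is triage: automatically flagging which frames likely contain an animal at all, so humans (or a heavier classifier) only review those. This is a hypothetical, stripped-down sketch of that idea, with images reduced to flat lists of grayscale pixel values; real camera trap pipelines use trained detection models rather than simple frame differencing:

```python
# Hypothetical camera trap triage by frame differencing: flag frames
# that differ enough from an empty reference shot of the same scene.

def frame_diff(frame, reference):
    """Mean absolute pixel difference between a frame and the reference."""
    return sum(abs(a - b) for a, b in zip(frame, reference)) / len(frame)

def triage(frames, reference, threshold=10.0):
    """Return indices of frames likely to contain an animal."""
    return [i for i, frame in enumerate(frames)
            if frame_diff(frame, reference) > threshold]

reference = [100] * 16              # empty scene
empty     = [101] * 16              # sensor noise only: mean diff 1.0
animal    = [100] * 8 + [180] * 8   # something large in frame: mean diff 40.0
print(triage([empty, animal], reference))  # → [1]
```

Even this crude filter hints at why real camera trap AI is hard: the blur, low light, and shifting backgrounds Morris describes all make "different from empty" an unreliable proxy for "contains an animal," which is why trained models are needed.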
Machine learning will usher in not only the automated monitoring of endangered species from camera trap data but also the automated spotting of poachers, something Morris describes as being right on the cusp of practicality: “You can’t put people everywhere but you can put cameras in a lot of places.” The potential for computer vision to help the planet encompasses everything from analyzing aerial imagery of Arctic and savanna landscapes to monitor large animals, to tracking forest recovery and loss from satellite imagery, to monitoring plastic pollution with AI-equipped drones. "Computer vision can offer a lot not just for wildlife conservation applications but also sustainability more broadly,” says Morris. “Fundamentally it’s really about leveraging AI to save the planet.”