iNaturalist Computer Vision Explorations

One of our goals with iNaturalist is to provide a crowd-sourced species identification system: if you post a photo of a species you don't recognize to iNaturalist, the community should tell you what you saw. On average, observations take 18 days to be identified by the community, with half of all observations identified in the first 2 days. As iNaturalist grows, keeping this identification rate steady places an ever-increasing burden on a relatively small group of identifiers. Fortunately, major advances in machine learning approaches like computer vision over the past few years may help share that burden. Our goal is to integrate computer vision tools into iNaturalist to help the community provide higher-quality identifications faster as iNaturalist continues to grow.

iNaturalist's explorations into computer vision began in mid-2016 as one of Alex Shepard's side projects. This work soon became limited by the hardware needed to efficiently train deep neural networks. Fortunately, NVIDIA donated two Graphics Processing Units (GPUs) in December of 2016. Around this time we serendipitously met Grant Van Horn and the rest of the Visipedia team through their recent work with the Cornell Lab of Ornithology on the Merlin Bird ID App.

The Visipedia team adapted their code for training and testing image classification models using the TensorFlow open-source software library to work with iNaturalist observations, and we got this running on the NVIDIA hardware. Training image classification models works by feeding them large sets of labeled images. In our case, the images are photos from iNaturalist observations and the labels are their species-level identifications. Once trained, the model can identify images by receiving unlabeled images and assigning labels to them. This is more or less what the iNaturalist computer vision demo does.
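The train-then-classify flow can be sketched as follows. To keep the example self-contained, a toy nearest-centroid classifier over made-up 2-D feature vectors stands in for the real deep neural network trained with TensorFlow on photo pixels; the species names and features are hypothetical stand-ins, not iNaturalist's actual pipeline.

```python
# Toy nearest-centroid classifier illustrating the train-then-classify
# flow: training averages the features of each labeled species, and
# classification assigns the label whose centroid is closest.
from collections import defaultdict
from math import dist

def train(labeled_images):
    """Average the feature vectors for each species label."""
    sums = defaultdict(lambda: [0.0, 0.0, 0])
    for (x, y), label in labeled_images:
        s = sums[label]
        s[0] += x; s[1] += y; s[2] += 1
    return {label: (s[0] / s[2], s[1] / s[2]) for label, s in sums.items()}

def classify(model, features):
    """Assign a species label to an unlabeled image."""
    return min(model, key=lambda label: dist(model[label], features))

# Labeled images: feature vectors plus community species identifications.
training_set = [((1.0, 1.2), "Cepaea nemoralis"),
                ((0.9, 1.1), "Cepaea nemoralis"),
                ((5.0, 5.2), "Todus todus")]
model = train(training_set)
print(classify(model, (1.1, 1.0)))  # Cepaea nemoralis
```

The real model replaces both the hand-made features and the centroid rule with a deep network learned end-to-end, but the contract is the same: labeled images in, a label-assigning function out.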

What exactly did we train the model to do?

As of April 2017, iNaturalist had around 5,000,000 'verifiable' observations. We use the term verifiable to describe observations that have all the necessary data quality attributes (e.g. photos, locations, not pets) to be eligible to become 'research grade'. Research grade observations have identifications that have been vetted by the community. These 5,000,000 observations represent around 100,000 distinct species. If we consider just research grade observations, in April of 2017 we had about 2,500,000 observations representing around 73,000 distinct species.

The number of observations per species is uneven. Some species like the Grove Snail have many (300) research grade observations, while others like the Jamaican Tody have relatively few (6). We call this large set of species with just a few or no observations the 'long tail'. We know that it extends out to around 2,000,000 described species, most of which have no iNaturalist observations yet.

There are 13,730 species that have at least 20 research grade observations. We chose this as the data threshold necessary to include a species in our model. Technically, this number is closer to 10,000 species since we took steps to ensure that each species had at least 20 distinct observers to control for observer effects. We are moving a new species across this data threshold every 1.7 hours as new observations and identifications are added to iNaturalist. This means every observation you post or identification you make works to improve the model!
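The two data-threshold rules described above (at least 20 research grade observations, from at least 20 distinct observers to control for observer effects) amount to a simple filter over observation records. A sketch, with hypothetical record layout and demo data:

```python
# Hypothetical sketch of the data threshold: a species is eligible for the
# model only if it has at least 20 research-grade observations contributed
# by at least 20 distinct observers.
from collections import defaultdict

MIN_OBSERVATIONS = 20
MIN_OBSERVERS = 20

def eligible_species(observations):
    """observations: iterable of (species, observer_id) research-grade records."""
    counts = defaultdict(int)       # research-grade observations per species
    observers = defaultdict(set)    # distinct observers per species
    for species, observer in observations:
        counts[species] += 1
        observers[species].add(observer)
    return {s for s in counts
            if counts[s] >= MIN_OBSERVATIONS and len(observers[s]) >= MIN_OBSERVERS}

# 25 observations from 25 observers passes; 25 observations from a single
# observer does not, even though the raw count clears the bar.
demo = [("Cepaea nemoralis", f"observer_{i}") for i in range(25)]
demo += [("Todus todus", "observer_0")] * 25
print(eligible_species(demo))  # {'Cepaea nemoralis'}
```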

How does the demo use this model?

The demo runs your image through the computer vision model and displays the top 10 returned species labels. Because not every possible species is represented by a label in the model (only 10,000 out of a possible 2,000,000 species), the demo also displays a coarser recommendation such as 'Grasshoppers (order Orthoptera)' that we can be more confident in even if all the possible species aren't covered by the model. Fortunately, most observations (85%) posted to iNaturalist fall within this labeled set of species, with 15% falling in the long tail of species beyond the data threshold (the out-of-sample set).
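The two-level output described above can be sketched by aggregating species scores up to a coarser taxon and reporting that taxon when its summed score clears a confidence bar. The taxonomy mapping, scores, and threshold below are made-up illustrations, not the demo's actual values:

```python
# Hypothetical sketch of the demo's output: top species labels from the
# vision model, plus a coarser taxon (e.g. an order) whose combined score
# we can be more confident in.

def top_species(scores, k=10):
    """Return the k highest-scoring species labels."""
    return sorted(scores, key=scores.get, reverse=True)[:k]

def coarse_recommendation(scores, parent_taxon, threshold=0.8):
    """Sum species scores within each coarser taxon and return one that
    clears the confidence threshold, or None."""
    totals = {}
    for species, score in scores.items():
        taxon = parent_taxon[species]
        totals[taxon] = totals.get(taxon, 0.0) + score
    best = max(totals, key=totals.get)
    return best if totals[best] >= threshold else None

scores = {"Melanoplus differentialis": 0.35,
          "Schistocerca americana": 0.30,
          "Dissosteira carolina": 0.25,
          "Danaus plexippus": 0.10}
parent = {"Melanoplus differentialis": "Orthoptera",
          "Schistocerca americana": "Orthoptera",
          "Dissosteira carolina": "Orthoptera",
          "Danaus plexippus": "Lepidoptera"}
print(top_species(scores, k=3))
print(coarse_recommendation(scores, parent))  # Orthoptera
```

No single grasshopper species is a confident call here, but the order-level recommendation is, which mirrors how the demo falls back to 'Grasshoppers (order Orthoptera)'.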

If location, date, and/or taxonomic information is provided along with the image, the demo uses spatio-temporal data from the iNaturalist database (e.g. which butterflies have been seen nearby at this location and date) to weight the computer vision results. For example, a visually similar species that hasn't been seen nearby might be down-weighted, while a species seen nearby might be included in the top 10 results even if it's not yet represented in the computer vision model.
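One simple way to realize this weighting is multiplicative: scale each vision score by how often the species has been seen nearby, with a small floor weight so unseen species are down-weighted rather than eliminated. The exact scheme iNaturalist uses isn't described here, so the function, floor value, and species counts below are illustrative assumptions:

```python
# Hypothetical sketch of combining vision scores with nearby-sightings
# data: species frequently seen at this place and date are boosted, and
# visually similar species never seen nearby are down-weighted.

def weighted_scores(vision_scores, nearby_counts, floor=0.1):
    """Multiply each vision score by the species' share of nearby
    sightings; species with no nearby sightings get a floor weight."""
    total = sum(nearby_counts.values()) or 1
    combined = {}
    for species, score in vision_scores.items():
        weight = max(nearby_counts.get(species, 0) / total, floor)
        combined[species] = score * weight
    return dict(sorted(combined.items(), key=lambda kv: kv[1], reverse=True))

vision = {"Vanessa atalanta": 0.6, "Vanessa indica": 0.4}
nearby = {"Vanessa atalanta": 90, "Vanessa indica": 0}  # sightings here, now
print(weighted_scores(vision, nearby))
```

With these made-up numbers, the locally common Vanessa atalanta is reinforced while the look-alike Vanessa indica, absent from nearby records, drops well down the list.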

Next steps

We are currently working to test the recommendations made by the demo to understand how well it performs and what changes we can make to improve performance (e.g. tweaking the weights). We are also working on improving the computer vision model itself both by updating it with new data, experimenting with the types of data to train it on, and exploring new types of models. We're hoping an upcoming iNaturalist competition sponsored by Google at the CVPR 2017 conference will result in creative new ideas for how to improve the model. Lastly, we're working to integrate this technology into the iNaturalist site. Our initial step will be to build a semi-automated species chooser into the mobile apps to help add species names to newly created observations.

Timeline

April 19, 2017: iNaturalist computer vision demo launched.

June 29, 2017: Computer vision integrated into iNaturalist iOS app v 2.7.

July 14, 2017: Computer vision integrated into the iNaturalist web Identify tool. Find it under the 'Suggestions' tab and choose 'Source: Visually Similar'.

September 5, 2017: Computer vision integrated into iNaturalist Android app v 1.7.3.

September 21, 2017: Computer vision integrated into web observation uploader and observation pages.

June 14, 2019: Vision Model Updates

March 18, 2020: A New Vision Model

July 13, 2021: New Computer Vision Model

2022+: Computer Vision Model Updates on the iNaturalist Blog

Press about this work

App combines computer vision and crowdsourcing to explore Earth’s biodiversity, one photo at a time, By Colleen O'Brien, Mongabay, August 30, 2017

Finally: An App That Can Identify the Animal You Saw on Your Hike, By Ed Yong, The Atlantic, July 27, 2017

Identify Anything, Anywhere, Instantly (Well, Almost) With the Newest iNaturalist Release, By Eric Simons, Bay Nature, July 17, 2017

iNaturalist Launches Deep Learning-Based Identification App, By Sue Gee, i-programmer, Jun 18, 2017

AI App Identifies Plants and Animals In Seconds, NVIDIA, Jun 9, 2017

AI Plant And Animal Identification Helps Us All Be Citizen Scientists, By Emily Matchar, Smithsonian, Jun 7, 2017

Revised on January 9, 2024 02:45 PM by bouteloua