September 10, 2016

The Art & Science of Citizen Science

Ted Barone, Ed.D.

Like any good art, Citizen Science is a veritable stew of craftsmanship, expertise, inventiveness, and cultural awareness. It typically locates the rigors of scientific study in the context of pressing environmental challenges and relies on the diversity of volunteers in its implementation.

Citizen Science (CS) has the potential to fill huge gaps in the scientific study needed to resolve some of the most vexing problems we face globally. As CS adheres to the precepts of effective science, its capacity to fill those gaps will grow.
This paper introduces some basic principles to help CS improve its value to the scientific and public policy communities. We hope to follow it up with recommendations for more detailed procedures and protocols to advance the quality of CS.

Why Citizen Science?
Don’t we have enough scientists out there gathering data and telling us what we need to know? By way of an answer, let’s examine river/stream health as a case. We know that freshwater demand is increasing while its quality is being threatened by a growing population in the U.S. and worldwide (Minkman, Van Overloop, and Van der Sanden 2015). Providing water resources is a key ecosystem service for humanity (Buytaert et al. 2016).

The Environmental Protection Agency (EPA) is charged with describing the quality of waters in the United States. The agency found existing monitoring programs deficient for its purposes (Shapiro, Holdsworth, and Paulsen 2008), so it launched the Wadeable Stream Assessment (WSA), which established a water-quality baseline dataset by sampling 1,393 streams between 2000 and 2004. The WSA determined that 42% of U.S. streams were in poor condition, 25% in fair condition, and 28% in good condition (Paulsen, Mayio, and Peck 2008). The agency has not conducted a major survey since.
Local, state and national governments have been making massive investments in ecosystem protection and rehabilitation (Gollan et al. 2012; Isaac et al. 2014; Juffe-Bignoli et al. 2016) but acute scarcities of data have created problems for efficient management of water resources (Buytaert et al. 2016; Minkman, Van Overloop, and Van der Sanden 2015).

In an era of budget cuts and high costs for water-quality monitoring (Hochachka, Fink, and Hutchinson 2012; Minkman, Van Overloop, and Van der Sanden 2015), CS may be the only practical and cost-effective way to document broad ecological patterns at the scales necessary to meet regulatory requirements. In addition, CS is needed to provide the data to track and mitigate biodiversity losses over large geographic areas and lengthy time periods (Deutsch and Orprecio 2001; Dickinson, Zuckerbert, and Bonter 2010; Pimm et al. 2015; Tulloch, Possingham, and Joseph 2013).
The cost-savings value of CS can be substantial. The value of CS volunteers to Cornell University’s Project FeederWatch is approximately $3 million per year (Dickinson, Zuckerbert, and Bonter 2010). A meta-analysis of 388 research projects using a total of 1.3 million volunteers found that they provided in-kind value of $0.7 to $2.5 billion annually, equivalent to 11 to 41% of the budget of the National Science Foundation (Theobald et al. 2015). CS is not free. There are costs for training volunteers, equipment, publicity, and coordination (Roy et al. 2012), but clearly there is a lot of value in a dedicated, passionate workforce that works for free.
Finally, CS has proven to be an invaluable tool for engaging citizens in the challenges most important to them – those in their back yard. Outreach by organizations, museums, scientific institutions, policy managers, and politicians has educated and inspired millions to take responsibility for the health of their communities.

What is science?
Here’s a good working definition: Science is “knowledge or a system of knowledge covering general truths or the operation of general laws especially as obtained and tested through scientific method” (Merriam-Webster 2016). This definition has several multifaceted parts, including system, general laws, and scientific method. The answer to whether Citizen Science qualifies as real science could lie within any or all of these aspects.
As we attempt to answer the question, there are key variables to consider, each of which has its own complexities:
• Research Design: To what level of rigor is the study designed, implemented, and analyzed?
• Data Quality: What is the degree of reliability, validity, and trustworthiness of data to other scientists or the public?

• Usefulness: Does the citizen-based scientific effort contribute to “knowledge” through publication and/or through its impact on advocacy efforts, policy decisions, or resource management?
Each of these variables has a range of “truth” both in implementation and evaluation. Our task with this article is to explore the scientific literature about CS to better understand how citizen scientists can improve their collective efforts with respect to all three of these variables.

Illustrative Examples
It helps to take a look at a few examples to get a sense of the variation in CS research projects. We don’t have the space to describe each of these in any real detail but please note the different levels of expertise and research organization implied in each case.
• One of the oldest and most influential efforts has been the National Audubon Society’s Christmas Bird Count which started in 1900. It has been identified by the EPA as one of 26 leading indicators of climate change (National Audubon Society 2015; Tulloch, Possingham, and Joseph 2013).
• eBird is one of the largest online CS databases, logging two to three million records per month. In May 2015, 8.5 million bird observations were recorded globally. Its data provide a valued map of changes in bird movement and probability of occurrence at global scales (eBird 2016; Hochachka, Fink, and Hutchinson 2012).
• CS observations posted on Plant Watch in Canada found that the first flowering of 19 different species of plants had moved nine days earlier over the past decade (Crain, Cooper, and Dickinson 2014).
• Zooniverse.org is a cluster of projects using volunteers to analyze and interpret large datasets per the directions of research scientists. The data collected by volunteers across the various projects have been worth roughly 37 years of researcher time (Cox and Oh 2015).
• MiCorps, the Michigan Clean Water Corps, is a partnership-based program that includes local volunteers and state and regional organizations, which provide technical expertise. The MiCorps staff provides training in standardized data-collection protocols and centrally analyzes all of the water samples. MiCorps partnered with Michigan State University researchers to study 77 lakes and was able to show a positive link between a zebra mussel invasion and the presence of a dangerous toxin (Latimore and Steen 2014).
• 92% of all GBIF records of butterflies and moths (Lepidoptera) from the U.S. in the last decade were uploaded through iNaturalist (Loarie 2016).

Data Quality
A huge concern in all science is the quality of the data, and CS is no exception. Because CS data tend to come from research that covers large areas, many species, and long time spans, there are likely to be a lot of data records. In such large studies, an inherent tension exists between the quantity and the quality of data (Hochachka, Fink, and Hutchinson 2012; Sullivan, Aycrigg, and Barry 2014). These are not highly controlled experiments in hermetically sealed environments that limit any unnecessary variables. CS projects are typically big in scale, take place in unruly environments, and need a lot of time to unfold and for patterns to emerge. But they make it possible to test hypotheses at space and time scales never before possible (Sullivan, Aycrigg, and Barry 2014).
Cheap and reliable environmental sensors have dramatically improved the capacity of CS to generate high-quality data. Because many of these sensors pair with smartphones that geotag and timestamp each observation, data have become ubiquitous and easy to upload, saving huge amounts of time and reducing entry error (Buytaert, Zulkafli, and Grainger 2014; Isaac et al. 2014; Minkman, Van Overloop, and Van der Sanden 2015; Wiggins and He 2016).
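To make this concrete, here is a minimal sketch of what such a geotagged, timestamped observation record might look like before upload. The field names and the Python structure are illustrative assumptions, not the schema of any particular CS platform.

    # Minimal sketch of a geotagged sensor observation as captured by a
    # smartphone-based CS app. Field names are illustrative, not a real API.
    from dataclasses import dataclass, asdict
    from datetime import datetime, timezone
    import json

    @dataclass
    class Observation:
        observer_id: str   # anonymized volunteer identifier
        latitude: float    # decimal degrees from the phone's GPS
        longitude: float
        recorded_at: str   # ISO 8601 timestamp in UTC
        parameter: str     # e.g., "dissolved_oxygen"
        value: float
        unit: str

    obs = Observation(
        observer_id="volunteer-042",
        latitude=38.047,
        longitude=-122.754,
        recorded_at=datetime.now(timezone.utc).isoformat(),
        parameter="dissolved_oxygen",
        value=8.2,
        unit="mg/L",
    )

    # Serialize for upload to a project database; automatic capture of location
    # and time reduces transcription error compared with paper data sheets.
    print(json.dumps(asdict(obs), indent=2))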
So, CS has passionate, smart volunteers, it costs way less than having to pay workers, and there are all these great tools for people to work with. What’s the holdup?
A critical issue is the data: volunteers are not professional scientists, and it’s not easy to generate good-quality data from CS projects. Working with volunteers injects a different set of variables to account for. For example, new observers tend to over-report rare species, under-report common species, and fail to report sightings they deem “uninteresting”. Volunteers also tend to make observations nearer to cities, roads, and trails, which are more degraded environments, while roadless areas, where ecosystems are typically more intact, are undersampled (Dickinson, Zuckerbert, and Bonter 2010). Additionally, volunteers tend to have a stake in the subject of the project; they care about the environment where they live. That is a bias to consider.
Clearly, there are differences between good professional scientists and volunteers. The professionals are trained, and their knowledge of their area of study is generally deeper and more complex. But when volunteers get the right training and the research study is set up properly, the data can become as reliable as that generated by professionals. In the MiCorps program mentioned earlier, volunteer data agree with staff sampling 90% of the time (Latimore and Steen 2014). Even a study of elementary and high school classes demonstrated that good training and procedures can produce data at the professional standard: after a 45-minute lesson on proper techniques, the classes collected water-quality data (dissolved oxygen, salinity, temperature) using an Adopt-a-Stream protocol (Fogleman and Curran 2008).
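To illustrate how such agreement might be quantified, the sketch below computes the fraction of paired volunteer and staff measurements that fall within a tolerance. The data, the tolerance, and the function name are hypothetical; MiCorps’ actual quality-assurance criteria may differ.

    # Illustrative check of volunteer vs. staff agreement on paired samples.
    # The tolerance and the readings are hypothetical, for demonstration only.
    def agreement_rate(volunteer, staff, tolerance=0.5):
        """Fraction of paired measurements that fall within `tolerance` units."""
        pairs = list(zip(volunteer, staff))
        in_agreement = sum(1 for v, s in pairs if abs(v - s) <= tolerance)
        return in_agreement / len(pairs)

    # Example: dissolved-oxygen readings (mg/L) taken at the same sites.
    volunteer_do = [8.1, 7.4, 9.0, 6.2, 8.8]
    staff_do     = [8.3, 7.5, 8.4, 6.1, 8.9]
    print(f"Agreement: {agreement_rate(volunteer_do, staff_do):.0%}")  # 80%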

Strategies to Improve Data Quality
There are a number of strategies to help improve data quality in CS. First, data-collection protocols should be simple and objective rather than complex and reliant on deep interpretation. Second, it helps to align collection protocols with the skills and knowledge of the volunteers. For example, a study could have volunteers identify organisms to the order or family level rather than to genus or species, which requires greater knowledge. A New Zealand study of macroinvertebrates took a hybrid approach: certain indicator taxa were identified to the genus or species level (with appropriate training for the volunteers) while other taxa were identified only to the order level (Moffett and Neale 2015). More complexity can be given to more experienced observers, mindful that complex protocols tend to engage fewer participants (Dickinson, Zuckerbert, and Bonter 2010; Fore, Paulsen, and O’Laughlin 2001; Gollan et al. 2012; Hochachka, Fink, and Hutchinson 2012). Third, the larger and somewhat messier datasets call for different informatics and statistical designs that account for the additional human variables described previously (Hochachka, Fink, and Hutchinson 2012; Isaac et al. 2014), as in the sketch below.
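As one simple illustration of such an informatics step (our own assumption, not a method drawn from the cited papers), the sketch below flags records from inexperienced observers, or of rarely reported species, for expert review before they enter the analysis dataset.

    # Hypothetical screening rule (an assumption, not a published CS protocol):
    # route a record to expert review when the observer has few prior records
    # or the species is rarely reported, since such records carry a higher
    # risk of misidentification.
    def needs_review(record, observer_counts, species_counts,
                     min_observer_records=20, min_species_records=5):
        new_observer = observer_counts.get(record["observer_id"], 0) < min_observer_records
        rare_species = species_counts.get(record["species"], 0) < min_species_records
        return new_observer or rare_species

    observer_counts = {"volunteer-042": 3, "volunteer-007": 150}       # prior records per observer
    species_counts = {"Oncorhynchus mykiss": 220, "Rana draytonii": 2}  # prior records per species

    record = {"observer_id": "volunteer-042", "species": "Rana draytonii"}
    print(needs_review(record, observer_counts, species_counts))  # True -> send to an expert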
Effective training is essential to closing the skills gap between scientists and volunteers. Quizzes and games to practice and evaluate observer skill have proven effective (Dickinson, Zuckerbert, and Bonter 2010). Basing a CS project in existing hobbies or amateur expertise (as with eBird) increases the quality of data and reduces training needs (Crain, Cooper, and Dickinson 2014; Roy et al. 2012).

CS programs have been found to be most effective when they are guided by experienced researchers, particularly when volunteers collaborate with scientists from design through analysis (Dickinson, Zuckerbert, and Bonter 2010; Gollan et al. 2012; Sullivan, Aycrigg, and Barry 2014; Tulloch, Possingham, and Joseph 2013). When data-collection protocols are aligned with volunteer knowledge and skill, volunteers have greater buy-in and enthusiasm for the project, which results in more and better data (Aceves-Bueno, Adeleye, and Bradley 2015; Buytaert et al. 2016; Dickinson, Zuckerbert, and Bonter 2010; Fore, Paulsen, and O’Laughlin 2001; Jordan, Crall, and Gray 2015). This collaborative strategy is particularly important when working with more “marginalized” groups. Projects should reflect problems that come from those groups and their concerns. Engaging them to generate knowledge about the health of their own communities empowers them to take ownership and increase political control (Aceves-Bueno, Adeleye, and Bradley 2015; Buytaert, Zulkafli, and Grainger 2014). Additionally, to increase influence on public policy, it helps if government staff and volunteers collaborate from beginning to end; the data are then more likely to meet government information and decision-making needs. There is little evidence that CS data have much influence when they are “delivered” unsolicited to local governments (Buckland-Nicks 2015).
Finally, we cannot forget one of the most important parts of science: the reporting and sharing of findings and relevant data. Scientific knowledge is constructed knowledge. CS research can meet every criterion of excellence, but if the findings and data aren’t available to other citizen scientists, professional scientists, policy-makers, resource managers, educators, and the public as a whole, it won’t have much of an impact (Sullivan, Aycrigg, and Barry 2014). It would help, in this technological age, if data could be shared and integrated through ‘cyber-infrastructures’, or database systems that are interoperable and ensure consistent data standards (Roy et al. 2012).
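As an example of what consistent, interoperable data standards can look like in practice, the sketch below writes an observation using a few Darwin Core terms, one widely adopted biodiversity data standard. The choice of Darwin Core, and all values shown, are our own illustrative assumptions rather than requirements drawn from the cited sources.

    # Minimal sketch: export a single observation as a Darwin Core-style CSV
    # row that interoperable databases could ingest. Values are illustrative.
    import csv
    import io

    dwc_record = {
        "occurrenceID": "lagunitas-bioblitz-0001",
        "basisOfRecord": "HumanObservation",
        "scientificName": "Oncorhynchus kisutch",
        "eventDate": "2016-06-04",
        "decimalLatitude": 38.047,
        "decimalLongitude": -122.754,
        "recordedBy": "volunteer-042",
    }

    # Write a one-row CSV with a header of standard term names.
    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=list(dwc_record.keys()))
    writer.writeheader()
    writer.writerow(dwc_record)
    print(buffer.getvalue())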

Conclusion
Yes, Citizen Science is real science.
Citizen Science can be an important and effective contributor to the broad scientific knowledge base. It can be cost-effective. It can succeed where traditional science cannot, particularly in studies with large spatial and temporal ranges. And it can empower communities of all economic classes to take greater charge of their local environments.
Our research identified the following key strategies for developing successful CS projects:

  1. Strike a balance between data quantity and quality.
  2. Facilitate wide use of data by publishing data and findings in a way that is accessible to other scientists and interested parties.
  3. Engage a diverse array of collaborators in all aspects of the project (Sullivan, Aycrigg, and Barry 2014).

Now, get out there and do some science!

Bibliography
Aceves-Bueno, Erendira, Adeyemi S. Adeleye, and Darcy Bradley. 2015. “Citizen Science as an Approach for Overcoming Insufficient Monitoring and Inadequate Stakeholder Buy-in in Adaptive Management: Criteria and Evidence.” Ecosystems 18: 493–506.
Buckland-Nicks, Amy. 2015. “Keys to Success: A Case Study Approach to Understanding Community-Based Water Monitoring Uptake in Governmental Decision-Making.” Master of Environmental Studies, Halifax, Nova Scotia: Dalhousie University.
Buytaert, Wouter, Art Dewulf, Bert De Bievre, Julian Clark, and David M. Hannah. 2016. “Citizen Science for Water Resources Management: Toward Polycentric Monitoring and Governance?” Journal of Water Resources Planning and Management 142 (4).
Buytaert, Wouter, Zed Zulkafli, and Sam Grainger. 2014. “Citizen Science in Hydrology and Water Resources: Opportunities for Knowledge Generation, Ecosystem Service Management, and Sustainable Development.” Frontiers in Earth Science 2 (26): 1–21.
Cox, Joe, and Eun Young Oh. 2015. “Defining and Measuring Success in Online Citizen Science: A Case Study of Zooniverse Projects.” Computing in Science & Engineering 17 (4): 28–41.
Crain, Rhiannon, Caren Cooper, and Janis L. Dickinson. 2014. “Citizen Science: A Tool for Integrating Studies of Human and Natural Systems.” Annual Review of Environment and Resources 39: 641–65.
Deutsch, William G., and Jim L. Orprecio. 2001. “Watershed Data from the Grassroots...Is It Enough to Capture the Trends and Turn the Tide?” Athens, GA.
Dickinson, Janis L., Benjamin Zuckerbert, and David N. Bonter. 2010. “Citizen Science as an Ecological Research Tool: Challenges and Benefits.” Annual Review of Ecology, Evolution, and Systematics 41: 149–72.
eBird. 2016. “About eBird | eBird.” http://ebird.org/content/ebird/about/.
Fogleman, Tara, and Mary Carla Curran. 2008. “How Accurate Are Student-Collected Data?: Determining Whether Water-Quality Data Collected by Students Are Comparable to Data Collected by Scientists.” The Science Teacher, Community Collaborations, 75 (4): 30–35.
Fore, Leska S., Kit Paulsen, and Kate O’Laughlin. 2001. “Assessing the Performance of Volunteers in Monitoring Streams.” Freshwater Biology 46: 109–23.
Gollan, John, Lisa Lobry de Bruyn, Nick Reid, and Lance Wilkie. 2012. “Can Volunteers Collect Data That Are Comparable to Professional Scientists? A Study of Variables Used in Monitoring the Outcomes of Ecosystem Rehabilitation.” Environmental Management 50 (August): 969–78.
Hochachka, Wesley M., Daniel Fink, and Rebecca Hutchinson. 2012. “Data-Intensive Science Applied to Broad-Scale Citizen Science.” Trends in Ecology & Evolution 27 (2): 130–37.
Isaac, Nick, Arco Van Strien, Tom August, Marnix De Zeeuw, and David Roy. 2014. “Statistics for Citizen Science: Extracting Signals of Change from Noisy Ecological Data.” Methods in Ecology and Evolution 5: 1052–60.
Jordan, Rebecca, Alycia Crall, and Steven Gray. 2015. “Citizen Science as a Distinct Field of Inquiry.” BioScience Advance Access (January): 1–4.
Juffe-Bignoli, Diego, Thomas M. Brooks, Stuart H. M. Butchart, Richard B. Jenkins, Kaia Boe, Michael Hoffmann, Ariadne Angulo, et al. 2016. “Assessing the Cost of Global Biodiversity and Conservation Knowledge.” PLOS ONE 11 (8): e0160640. doi:10.1371/journal.pone.0160640.
Latimore, Jo A., and Paul J. Steen. 2014. “Integrating Freshwater Science and Local Management through Volunteer Monitoring Partnerships: The Michigan Clean Water Corps.” Freshwater Science 33 (2): 686–92.
Loarie, Scott. 2016. “Lepidoptera Data,” August 21.
Merriam-Webster. 2016. “Definition of Science.” http://www.merriam-webster.com/dictionary/science.
Minkman, E., P.J. Van Overloop, and M.C.A. Van der Sanden. 2015. “Citizen Science in Water Quality Monitoring: Mobile Crowd Sensing for Water Management in The Netherlands.” In Floods, Droughts, and Ecosystems.
Moffett, ER, and MW Neale. 2015. “Volunteer and Professional Macroinvertebrate Monitoring Provide Concordant Assessments of Stream Health.” New Zealand Journal of Marine and Freshwater Research 49 (3): 366–75.
National Audubon Society. 2015. “History of the Christmas Bird Count.” Audubon Society. January 21. http://www.audubon.org/conservation/history-christmas-bird-count.
Paulsen, Steven G., Alice Mayio, and David V. Peck. 2008. “Condition of Stream Ecosystems in the US: An Overview of the First National Assessment.” Journal of the North American Benthological Society 27 (4): 812–21.
Pimm, Stuart, Sky Alibhai, Richard Bergl, and Ales Dehgan. 2015. “Emerging Technologies to Conserve Biodiversity.” Trends in Ecology & Evolution 30 (11): 685–96.
Roy, H.E., M.J.O. Pocock, C.D. Preston, D.B. Roy, and J. Savage. 2012. “Understanding Citizen Science and Environmental Monitoring: Final Report on Behalf of UK Environmental Observation Framework.” UK Environmental Observation Framework. London: Natural History Museum.
Shapiro, Michael, Susan M. Holdsworth, and Steven G. Paulsen. 2008. “The Need to Assess the Condition of Aquatic Resources in the US.” Journal of the North American Benthological Society 27 (4): 808–11.
Sullivan, Brian L., Jocelyn L. Aycrigg, and Jessie H. Barry. 2014. “The eBird Enterprise: An Integrated Approach to Development and Application of Citizen Science.” Biological Conservation 169: 31–40.
Theobald, E.J., A.K. Attinger, H.K. Burgess, and L.B. DeBey. 2015. “Global Change and Local Solutions: Tapping the Unrealized Potential of Citizen Science for Biodiversity Research.” Biological Conservation 181: 236–44.
Tulloch, Ayesha I.T., Hugh P. Possingham, and Liana N. Joseph. 2013. “Realising the Full Potential of Citizen Science Monitoring Programs.” Biological Conservation 165: 128–38.
Wiggins, Andrea, and Yurong He. 2016. “Community-Based Data Validation Practices in Citizen Science.” 1548–59. San Francisco: Association for Computing Machinery.

Posted on September 10, 2016 05:36 AM by tedbarone | 2 comments

May 24, 2016

Bioblitz on Lagunitas Creek in Marin County - June 4

• What: SPAWN (Salmon Protection and Watershed Network) and the California Academy of Sciences invite you to Bioblitz Lagunitas Creek!

• When: Saturday, June 4, 2016 from 10 a.m. to 2 p.m.

• Where: SPAWN Office - 9249 Sir Francis Drake Blvd. (2.7 miles west of Samuel P. Taylor State Park campground or 2.6 miles east of Olema)

• Why: In the fall of 2016, the National Park Service will remove an abandoned small town alongside the creek. The floodplain, creek banks, and riparian habitats will be restored to provide critical rearing habitat for coho salmon and steelhead trout. SPAWN and the Cal Academy will monitor changes in species biodiversity and water quality resulting from the restoration efforts on a semi-annual basis. The June 4 Bioblitz will provide valuable baseline data for the project.

• How: In small teams, participants will use the iNaturalist app on their smartphones to record observations of all plant and animal species in the designated area and at the designated time. Your observations will be uploaded, experts will help identify the organisms you observe, and you will help the scientists and resource managers understand when and where organisms occur. It’s real science. It’s a lot of fun!

• Sign-up: Please sign up in advance at https://www.eventbrite.com/e/lagunitas-creek-bioblitz-tickets-25215352799

• For more information: Visit the Lagunitas Creek/Tocaloma Bioblitz website at www.inaturalist.org/projects/lagunitas-creek-tocaloma-bioblitz

Posted on May 24, 2016 05:35 PM by tedbarone | 0 comments
