Communal validation on Public Lab

What platform are we talking about?

Public Lab aims to empower people to address environmental justice issues through community science and open technology. The platform focuses on the collaborative development, sharing, reuse and adaptation of DIY scientific tools.

How does communal validation work on Public Lab?

Tasks on Public Lab are mainly open-ended, with a focus on creative problem-solving and exploration. Which solution to a problem counts as right or sufficient therefore depends largely on the project, so quality control and validation features need to be more open-ended than, for example, on iNaturalist, where communal validation consists of users agreeing on a species and there is usually one correct answer from a finite list of taxa. On Public Lab, feedback works mostly through text and discussion features: for example, users can replicate activities posted by others and report their experiences. Apart from that, there is a comment function for research notes, blog posts, questions and issue briefs.

Thoughts

Developing quality control systems for open-ended tasks is not easy. There is no single solution, and the notion of what counts as right may vary widely with the context and the requirements of the project. For example, someone who wants to track a few health variables to gain an overall understanding of factors that might influence their health may be perfectly content with rather rough, irregular or subjective data, while sensors built to be used during surgeries need to be very precise and reliable.

Discussion features are easy to implement and can be an appropriate and good solution for open-ended tasks. However, I think they should be complemented with other, more specific feedback mechanisms where possible. Public Lab’s “I did it” feature, which encourages replicating the activities of other users, reporting the steps taken and giving feedback, is a good example: the author of the activity, as well as other people interested in replicating it, learn whether and how the protocol works for other people in other contexts, which can be sufficient quality control for do-it-yourself monitoring projects.

For more thoughts on alternative ways of creating, valuing and interpreting data in citizen science, I recommend the article “Just good enough data” by Gabrys et al. (2016).

References

Further reading on alternative data creation and validation in citizen science:

  • Gabrys, J., Pritchard, H., & Barratt, B. (2016). Just good enough data: Figuring data citizenships through air pollution sensing and data stories. Big Data & Society, 3(2). https://doi.org/10.1177/2053951716679677

Katharina

PhD student at CRI (Center for Research and Interdisciplinarity) in Paris, experimenting with a user-centered approach to support the peer-production of knowledge in citizen science.
