The interplay of equipotential self-selection, transparency and range of tasks


How do these factors condition one another?

A characteristic of peer production is equipotentiality (equal + potential). This means that, in theory, everyone has the right to contribute to tasks of their interest without needing any formal entry credentials, such as a university degree, which is usually used in academia to pre-judge someone's ability to perform a task or take on a role. In peer production, people self-select the tasks they want to contribute to, and their ability to perform them is judged afterwards by the community, by verifying the quality of the work itself (Kostakis & Bauwens, 2021).

For equipotential self-selection to work in practice, a range of tasks needs to be available on the platform, along with transparency about which tasks are possible and currently open. It also means that the necessary resources need to be accessible on the platform.

In other words, even if users are in theory free to contribute to whichever task they like, if there is only one type of task available on the platform, one cannot really speak of self-selection. Likewise, if tasks are available in theory but users cannot find open ones, because the platform does not provide affordances for discovering them, or if they can find them but cannot access the necessary resources (like data or questions from other users), there can be no actual equipotential self-selection in practice.

How to enable equipotential self-selection?

Here are some questions you could think about if you want to enable equipotential self-selection on your platform:

1) Are users from all kinds of backgrounds allowed to contribute to your platform, or do you require entry credentials like a university degree? If you require credentials, reflect on whether they are really necessary for some tasks, or whether an a posteriori validation of the work by the community could replace the a priori judgement of ability.

2) Think about the range of tasks that users can contribute to on the platform. Are there tasks that are only accessible to, for example, administrators, but that could be opened up to other users? Are users free to identify and work on new tasks?

3) Can users find open tasks on your platform? How can they find them?

4) Are the necessary resources to work on tasks accessible to users?

References

  • Kloppenborg, K., Ball, M. P., & Greshake Tzovaras, B. (2021, May 23). Peer Production Practices: Design Strategies in Online Citizen Science Platforms. https://doi.org/10.31235/osf.io/rw58y
  • Kostakis, V., & Bauwens, M. (2021). Grammar of Peer Production. The Handbook of Peer Production, 19–32.

Katharina

PhD student at CRI (Center for Research and Interdisciplinarity) in Paris, experimenting with a user-centered approach to support the peer-production of knowledge in citizen science.
