Fostering equity and leadership: The rOpenSci Champions Program selection process

This post is adapted and abridged from the original, which appeared on the rOpenSci blog and was authored by Francisco Cardozo, Yanina Bellini Saibene, Camille Santistevan, and Lou Woodley.

As part of our work with longtime client and partner rOpenSci, we’ve been supporting community manager Yanina Bellini Saibene in developing their Champions Program.

The goal of the rOpenSci Champions Program is to enable more members of historically excluded groups to participate in, benefit from, and become leaders in the R, research software engineering, and open source and open science communities. This program includes 1-on-1 mentoring for the Champions as they complete a project and perform outreach activities in their local communities.

This blog post focuses on how participants are selected from a pool of applicants for the rOpenSci Champions Program – a multi-step process intentionally designed to ensure a diverse cohort of Champions and Mentors. 

If you are interested in learning more about community champions programs and whether setting one up is the right choice for your community, check out our resource page on this topic. Or if you are ready to launch and would like to partner with CSCCE, please reach out to info@cscce.org.

Review process

The rOpenSci Champions selection process is designed to promote equity and diversity among participants. It involves five steps:

  1. Initial Application Review: The journey begins with rOpenSci staff examining all submissions to verify the eligibility and technical specifics of each application. This ensures that every candidate meets the basic criteria for consideration.
  2. Community Review: Each application undergoes a detailed assessment by two members of the rOpenSci community, including the current mentors of the Champions Program. In this step, reviewers use a rubric to guarantee objective and thorough reviews based on commonly established criteria [1].
  3. Consistency Analysis: A quantitative analysis is conducted to examine the consensus among reviewers. This step ensures that reviews are aligned, promoting fairness and minimizing bias in the selection process.
  4. Diversity Review: With a focus on diversity, the list of top-scoring candidates is then carefully reviewed to ensure it reflects a broad spectrum of backgrounds and regions. This may involve adding promising candidates to the pool to achieve a truly representative group of potential Champions.
  5. Final Selection by Mentors: The culmination of the process consists of mentors reviewing the top candidates to select their mentees. This helps ensure that mentors and Champions are well matched, with a genuine interest in each other’s work.

This process is central to rOpenSci’s desire to select a diverse group of Champions who are deeply committed to the community’s values.

Selection criteria: Cultivating a diverse community

In the second step of the review process described above, each application is reviewed and scored by two members of the rOpenSci team, based on the following categories:

  • Core Values: A strong adherence to values like respect, inclusiveness, and collaboration.
  • Community Involvement: Active participation in communities of practice, understood as groups of people exchanging knowledge. These can be communities within institutions and organizations (such as institutes, foundations, or universities), non-profit organizations (such as R-Ladies or The Carpentries), or companies. We look for communities related to STEM, Open Science, research software engineering, and the R community.
  • Project Proposal: Clarity, feasibility, and innovation of the project proposal.
  • Knowledge Sharing: A concrete plan to disseminate acquired skills and knowledge within and beyond rOpenSci.
  • Technical Skill: Proficiency in necessary technical areas. The program is not intended for beginner or expert R developers, but for those in between.
  • Motivation: Enthusiasm for joining and contributing to the rOpenSci community and willingness to dedicate adequate time to the program, considering existing professional obligations.

From here, the two scores are analyzed using Cohen’s Kappa, a statistic that measures agreement between raters beyond what would be expected by chance [2], and the results of this analysis are used to finalize the selection of Champions and Mentors.
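
To make the consistency analysis concrete, here is a minimal R sketch of how agreement between two reviewers might be checked with Cohen’s Kappa using the irr package. The scores below are hypothetical, and the post does not specify which tooling rOpenSci actually used; this is simply one common way to run the calculation.

```r
# Minimal sketch of an inter-rater agreement check (hypothetical data,
# not the program's actual analysis code). Requires the 'irr' package.
library(irr)

# Hypothetical rubric scores: one row per application, one column per reviewer
scores <- data.frame(
  reviewer_1 = c(3, 2, 4, 4, 1, 3, 2, 4),
  reviewer_2 = c(3, 2, 3, 4, 2, 3, 2, 4)
)

# Unweighted Cohen's Kappa: agreement beyond what chance alone would produce
kappa2(scores)

# For ordinal rubric scores, a weighted Kappa penalizes large disagreements
# more heavily than near-misses
kappa2(scores, weight = "squared")
```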

Key insights

In 2024, the rOpenSci Champions Program attracted a global cohort of applicants, and the multi-step selection process described above supported fairness and representation from diverse demographics and geographic regions.

Utilizing the rubric and conducting thorough data analysis enabled a balanced selection of participants, effectively bridging the gap between experts and beginners. This approach was instrumental in identifying and cultivating the talents the program aims to nurture.

Transparency throughout the evaluation process is paramount in upholding the principles of fairness and equity. By openly communicating the criteria and methodology of the selection process, rOpenSci aims to foster trust and accountability, reinforcing their commitment to inclusivity and diversity.

Thus, the rOpenSci Champions Program remains committed to adapting and refining its processes to meet the evolving needs of its community and to address the challenges and opportunities in open science and research software development. 

To find out more about how this approach is being applied to the rOpenSci Champions Program, check out the longer version of this post on the rOpenSci blog.

Acknowledgements

The inaugural cohort of the rOpenSci Champions Program was funded by the Chan Zuckerberg Initiative and led by Yani Bellini Saibene. It was co-designed with input from Camille Santistevan and Lou Woodley at the Center for Scientific Collaboration and Community Engagement (CSCCE), who contributed to the rubric discussed in this blog post. Francisco Cardozo led the analysis of the robustness of the application review process.

Footnotes
  1. The rubric is designed to be flexible and adaptable to the specific needs of the program. We may adjust the scoring options based on the agreement analysis to enhance clarity and agreement. 
  2. Cohen’s Kappa is a tool used to measure how much two raters agree when categorizing items. Unlike simply looking at how often they agree, Kappa also accounts for the agreement that would be expected by chance. Initially, we found a Kappa of 0.48; after making adjustments to how we score, it improved to 0.53, suggesting that the raters are more in sync than before. A Kappa of 0 means there is no agreement beyond what chance alone would produce, while 1 indicates perfect agreement (negative values indicate less agreement than expected by chance). In our case, a Kappa of 0.53 shows that our raters agree to a satisfactory extent.
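
For reference, Cohen’s Kappa is computed as κ = (p_o − p_e) / (1 − p_e), where p_o is the observed proportion of agreement between the two reviewers and p_e is the proportion of agreement expected by chance. As a worked example with hypothetical numbers: if two reviewers agree on 70% of applications (p_o = 0.70) and chance agreement is 40% (p_e = 0.40), then κ = (0.70 − 0.40) / (1 − 0.40) = 0.50.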