This post summarizes the report of the “Scientific Advocacy/Ambassador Programs Survey” by the 2017 Community Engagement Fellows Program (CEFP) Advocacy Ninjas project team (Melanie Binder, Heidi Laješić, Stephanie O’Donnell, Allen Pope, Gabrielle Rabinowitz, and Rosanna Volchok – with help from CSCCE Director Lou Woodley and former staff member, Rebecca Aicher) and was contributed by the authors.
Editorial note: Since the Advocacy Ninjas did their work and wrote up their report, we refined and published CSCCE’s Community Participation Model. In it, we describe a CHAMPION mode of participation, in which a community member is motivated to take on more responsibility for the success, sustainability, and/or running of the community. This might look like advocating for the community on social media, running a working group or local chapter, or taking the lead in creating and maintaining documentation to support the community. Champion programs, therefore, formalize or promote these activities, and offer recognition and training for members who participate. They empower emergent leaders, create nodes of trust within the community, and support myriad community needs and goals. Visit our new resource page for more.
Champions Programs can go by many names: Fellowships, Ambassador Programs, Advocacy Programs, and more. Whatever the name, a “Champions Program” is an organizational mechanism designed to empower community members to form meaningful relationships and become more active in advancing a community’s mission.
There are many reasons why a scientific community (manager) might launch a Champions Program. For example, Champions Programs support many scientific communities by:
- Expanding engagement and/or getting in touch with scientific community members directly,
- Producing new content from unique points of view,
- Amplifying the reach of an organization’s activities, and/or
- Empowering and/or providing training to scientific community members.
We wanted to explore two things:
- What are the commonalities and differences across these programs in science and technology?
- What makes them successful?
To answer our first question, here is what these Champions Programs look like:
- Objectives: The majority of respondents ranked ‘Disseminating Knowledge & Resources’ (88%) and ‘Cultivating Community Culture’ (63%) as “Extremely Important” program objectives.
- Application/Recruitment: Most programs collect applications via formal application form or email to program staff.
- Activity & Tracking: Respondents indicate that most participants carry out both online and in-person activities, and the top-cited mechanism for tracking participation (66%) is through self-reporting (e.g., surveys).
- Resources & Incentives: Participants were provided with a wide range of resources intended to facilitate their work. Nevertheless, the top-cited incentive (79%) for Advocacy Program participation was public recognition.
- What’s in a name? Names can signal selectivity, sector, and community size. For example, “Fellows” tend to be the best supported and to belong to large non-profits, “Ambassadors” are often larger, less selective programs, and “Champions” are mid-range programs usually affiliated with academic communities.
We found that program objectives, community size, outreach/application procedures, and selectivity were consistent across program age – but older programs had the largest budgets while newer programs offered more incentives for participation. Perhaps unsurprisingly, program size, community size, and resources are all correlated.
To answer our second question, the majority of scientific community managers reported Champions Programs as overwhelmingly successful – but of course, we are likely biased as these are our own programs! So, what does success actually mean? When asked, respondents said things like:
- “Success is actually having champions and having them do anything.”
- “Helping researchers achieve gains in their research, education, scholarship and/or creative activity…”
- “Consistent long-term engagement, volunteers taking on substantial roles, lateral community growth.”
A few caveats to wrap up: With an overall sample size of 37 respondents, we cannot establish statistically significant relationships within the data. In addition, the respondents were drawn largely from our own networks. These factors make it difficult to draw definitive conclusions, but we believe there are still many interesting takeaway points here.
And if this post gave you some food for thought, prompted you to ask some questions, or just made you want to discuss with other like-minded scientific community professionals – just click here to find out how to get more involved with CSCCE.
Thank you to AAAS (in particular Dana Burns), CSCCE (in particular Lou Woodley and Katie Pratt), and the whole 2017 CEFP team for help with formatting the survey, collating the data, laying out the report, preparing the presentation, and seeing this report through to the end; to the people and organizations who filled out the survey for their contributions; and to the whole 2017 CEFP cohort for their discussions, feedback, and support.