Notes from a Network-Centric Resource Project online discussion that took place on March 12th, 2020.
Network-Centric Resources are hard and take an incredibly long time to develop and get right. As we are building knowledge assets for communities and networks to use, it’s hard to measure success. How do we evaluate sharing ownership, enabling contributions and supporting collaboration?
See the Life-Cycle of a Network-Centric Resource.
The main points that came up during the discussion:
- Focusing on metrics such as page visits, clicks, and shares will have limited value in evaluation.
- At minimum, have a small set of indicators that are easy to collect so you can start learning.
- A great question is ‘would you recommend this to other people? why?’
- It’s helpful to know what you need to learn before you start rolling out your resource, so you know what to measure. But this is often a ‘learn as you go’ situation, as it’s hard to imagine the impact your resource will have on real people.
- If your resource contributes to a theory of change and you have articulated that connection, that should be a basis for evaluation.
- Being in constant communication with your beneficiaries and trying to understand how providing feedback/input can be valuable for them.
- Asking people to fill out surveys will provide limited returns and appeal. Instead, having a set of questions you want answered for use in interviews, conversations or other engagements can yield more valuable insights.
- For events, following the trail of collaboration and the ways people work together afterwards is a key indicator.
- Understanding how a resource gets carried forward from person to network to community should be a goal.
- Figuring out what you can attribute to your resource can be difficult, particularly if it’s contributing to social change. What’s best is understanding how it works within an ecosystem or as a tactic within a broader strategy.
- Being upfront about value exchange within a project: for example, pro bono contributions come in return for community stories, so state ‘if you use it, you need to tell us the story of how you used it’ upfront.
- The Western Pennsylvania Regional Data Center’s Measuring Performance is really helpful.
- National Neighborhood Indicators Partnership – Monitoring Impact
- Mobilisation Lab’s Measuring People Power – https://mobilisationlab.org/resources/measuring-people-power
- The Mountain of Engagement chapter called “Let engagement lead the way” here: https://opensource.com/open-organization/resources/leaders-manual
- Do Big Good – Open Source Impact Measurement Resources
- Impact Cascade: Diagram for mapping out the steps to an impact goal (bit.ly/PublicImpactCascade)
- Civic Tech Impact Indicators: Collected from Code for America Brigade volunteers in 2019
- CIVICUS’s how-to on the Net Promoter Score
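The ‘would you recommend this to other people?’ question above maps onto the Net Promoter Score covered in the CIVICUS resource. As a minimal sketch of how the score is typically computed (the 0–10 scale and promoter/detractor cut-offs follow the standard NPS convention; the sample ratings are invented for illustration):

```python
def net_promoter_score(ratings):
    """Return NPS: the percentage of promoters (ratings 9-10)
    minus the percentage of detractors (ratings 0-6)."""
    if not ratings:
        raise ValueError("no ratings given")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Hypothetical responses to "would you recommend this?" on a 0-10 scale.
ratings = [10, 9, 8, 7, 10, 3, 6, 9, 10, 5]
print(net_promoter_score(ratings))  # → 20.0
```

The score ranges from -100 (all detractors) to +100 (all promoters); the ‘why?’ follow-up is where the qualitative insight comes from.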
We are particularly grateful to Ashley Fowler at Internews for suggesting this topic. Along with Ashley, we are also appreciative of Liz Hynes from The Narrative Initiative and Liz Monk from the Western Pennsylvania Regional Data Center for leading our small group breakouts. But a super big thanks goes to the participants. During the discussion, we asked them to tell us what they were working on and about their challenges and successes in capturing and understanding indicators of success, so we have included their responses here.
- Hilary Naylor, Amnesty International USA, Membership Empowerment Training Program. The METP is a volunteer-led program designed to increase the capacity of local groups (chapters) to fulfill their human rights goals. My current project is to engage groups in casework for individuals at risk around the world.
- Ashley Fowler, Technical Program Officer at Internews, working currently on two resources co-developed/maintained with Internet freedom communities. 1) The SAFETAG audit framework and interface, which we are in the process of making more user-friendly and accessible for both auditors and contributors. 2) The UX Feedback Collection Guidebook, a compilation of activities and resources designed to integrate feedback collection from at-risk users into existing digital security training frameworks.
- Suzy East, Project Manager at DataKind UK, we run a few different volunteer-led programmes helping nonprofits use data science. We have a lot of resources around these projects and would like to be better at collating and sharing them to help our volunteers and projects be successful. One example is our Google minisite for our Data Ambassador volunteers.
- Mary Joyce (she/her), Founder and Principal at Do Big Good. We are an impact measurement firm with a mission to increase the efficacy of social change work. Recent projects include developing a social impact measurement tool of Code for America’s Brigade Network (80+ chapters in US) and developing theories of change for grassroots climate groups in California. We are also developing an impact measurement system for Marguerite Casey Foundation’s Equal Voice Networks (18 regional networks in US). We use participatory design to develop impact measurement systems that center equity, evidence, and adaptation.
- Adriana Balcazar, Program Associate at Internews. I work with Ashley on the SAFETAG audit framework, which we are currently working to improve and make more user-friendly, as well as the UX Feedback Collection Guidebook.
- Liz Hynes, Program Manager @ Narrative Initiative. I work with a small distributed team on issues related to narrative strategy and narrative implementation. We have digital resources in a resource library, a newsletter, and occasional webinars. I’m interested in how we understand the health of resource use and the health of networks, and in methods and indicators for doing so.
- Lorna Cumming-Bruce, Marketing Manager at Semble. I manage the creation of crowd-sourced resources and other content for grassroots charitable groups in the UK. These are pushed out through newsletter, social media and partner organisations. In order to improve the crowd-sourced resources and other content my team creates I need to work out the best indicators against which to judge success.
- Chad Sansing, I work at Mozilla on facilitator and newcomer support for MozFest; I often wonder about how to structure programs and opportunities for feedback to help us determine/measure/storytell a) how our work impacts others’ work, b) how community members contribute to MozFest, and c) how community members return to MozFest and its community in increasingly engaged roles over time.
- Neil Planchon. Co-Director at Foundation for Intentional Community :: FIC – ic.org. FIC supports and promotes the development of intentional communities as pathways towards a more sustainable and just world. Cohousing and CiviCRM ambassador and community organizer. Life and Leadership Coach. NTEN Oakland coordinator
- Greg Bloom – I lead the Open Referral Initiative. We are a network of people and organizations working to make it easier to share, find, and use information about the resources available to people in need. Our work is several degrees removed from the actual interactions between people and human service providers, so it’s a challenge to measure our impact.
- Liz Monk, Western Pennsylvania Regional Data Center, University of Pittsburgh, project manager for Civic Switchboard – a federally funded project working to connect libraries and community information networks. Challenges include having the capacity to capture qualitative data related to indicators of success. Local partner of the National Neighborhood Indicators Partnership (NNIP) which is a collaborative effort by the Urban Institute and local partners to further the development and use of neighborhood information systems in local policymaking and community building.