What I’ve Learned about Impact Assessment


I recently spent two days facilitating a workshop on how to evaluate the impact of data training and learning endeavors for Open Knowledge, several School of Data chapters and like-minded organisations, including DataKind UK, SocialTIC from Mexico City, Code4SA from South Africa, and Metamorphosis from Macedonia. At the end of the first day, one of the participants asked, “what is ‘impact assessment’ anyway?” So we spent the first part of the morning on the second day unpacking definitions of both the words ‘impact’ and ‘assessment.’ In the discussion about what pops into our heads when we say ‘impact’ and ‘assessment,’ words like evaluation, measurement and learning came up. But the combination of the two words held some mystery and seemed rather generic; after all, an insurance company could also use it to assess a car crash.

DataKind UK’s Feedback Lifecycle

Whenever I get confused about definitions, I turn to Wikipedia for help, and this time it led me to some interesting discoveries: namely, that the definition of ‘impact assessment’ points to a practice developed to evaluate the impact of government public policies, which is a much more rigorous and resource-intensive process than what might be expected of resource-strapped civil society organisations. As we looked at our own practices of evaluating our trainings, we were quick to realise that the methodologies we implemented weren’t even close to how a government might conduct an impact assessment of a public policy. The next question that emerged was: why are we using the term ‘impact assessment’ in the first place? Immediately we could infer a line from governments to donors and funders and, as grantees, to our strong desire to please those funders. Let me stress that this was an inferred line of reasoning, but it makes a lot of sense. (Anyone care to start a website for the etymology of buzzwords in our sector?)

What is important is the ability to use lightweight methodologies that will help us learn how to improve our trainings in order to deliver greater impact in the long term. So we spent the rest of the morning examining what success would look like for the organisations and initiatives we were involved in, and then developed sets of indicators that would help us identify our progress towards success. While this was an incredibly helpful process, there still seemed to be a fundamental flaw: the methodology for measuring the impact of a data training would likely take far more resources than the data training itself. We lingered in the problem space of how to create lightweight, low-cost methods of measuring our impact.

We talked about data learning initiatives that had been evaluated successfully and made a list of important elements:

  • Have baselines
  • Stay objective and get rid of any preconceptions, perhaps use an external evaluator
  • Get community buy-in/ownership of the process. Perhaps get them to set their own indicators
  • Be committed and have space to learn
  • Make sure you have good feedback loops
  • Have clear and transparent goals
  • Take the time and start your evaluation process early and do it often
  • Look for appropriate outcomes and indicators that are action oriented
  • Make it timely
  • Have the resources to do it; if not, be clear about that and try more informal methods
  • Have consistency across programs
  • Document, Document, Document

Reviewing a time-line of indicators that leads to a vision of success

We also had a ‘Failure Fest’, where we talked about unsuccessful monitoring and evaluation projects and came up with a list of things to avoid:

  • Making it too complicated
  • Not being intentional
  • Wasting people’s time and money
  • Over-editing surveys
  • Waiting until it’s too late
  • Not documenting
  • Not referring to the documentation
  • Not applying findings and disseminating institutional knowledge
  • Working alone and not engaging the community
  • Not understanding the context of other actors
  • Irresponsible data use
  • Inflexibility to adapt locally
  • Not evaluating the external evaluators
  • Reporting bias

So while the term ‘impact assessment’ might be inappropriate for civil society orgs, sharing evaluation methodologies that lead to learning and improved projects is appropriate and is what is needed. Probably the most valuable piece of this workshop was the honest and frank discussion among participants about how their evaluation methodologies were not as rigorous as they felt they should be, but that they had gotten a better sense of where to concentrate their resources and efforts to keep learning and improving their initiatives.

Be sure to also read Zara Rahman’s excellent blog post about the workshop.

Some valuable resources: