The Power of a “Negative” Story

Rita Yembilah, Canadian Poverty Institute, Ambrose University, Calgary

Evaluation Consultant, Institute of Peace and Development (IPD), Tamale, Ghana

In May 2022, I was in Long Beach, California, to attend a convening of the funder, grantees, and evaluators for the Catholic Sisters Initiative (CSI). The Catholic Sisters Initiative is one arm of the philanthropic work of the Hilton Foundation. I was there with a colleague in our capacity as Strategy Evaluation Partners (SEPs) for the CSI. The convening was about familiarizing ourselves with the portfolio, meeting other stakeholders in the Initiative, and learning first-hand from Initiative staff and grantees the dynamics of working in their unique environment. An unplanned development during the convening sharpened a particular part of my internal dialogue, which I share today. Following a brief conversation about qualitative data, Initiative staff asked the four of us evaluators to shed more light on qualitative data as valid data. This was to help assuage some of the discomfort around that topic, given people's bias towards fancy numerical data. In addition to extolling the virtues of qualitative data, I ended my five-to-six-minute comments with a suggestion: as a funder, the Hilton Foundation should not shy away from projects that have not gone according to plan. The funder could be receptive to hearing "negative" stories without judgment, giving grantees permission to be honest about why their projects did not go as planned. In the midst of that, I spoke the words "there is power in a negative story." The phrase resonated across the room, even with the funder. Over the course of the convening, many contributors returned to that point repeatedly. Given that I had not precleared the point, essentially risking speaking out of turn, I felt a sense of relief.

Irrespective of the scale at which we work, "the evaluated" feel trepidation when evaluators arrive on the scene. "What will you find that does not bode well for all the work we have been doing?" "Are you able to provide more context as to why we did not meet our targets?" "I am not getting my numbers and reporting season is coming." "How we look at success and how the funder looks at success are not the same, so it makes the work difficult." "If we do not get our numbers, we will not be funded." "X organization is not meeting its targets, so we may have to cut their funding." These are comments I have heard since I began evaluating with intentionality a few years ago. I have also encountered my fair share of staff who pushed back on a report for varied reasons, but invariably because they did not want to look bad to funders, supervisors, or some other top brass. Admittedly, some of these surprised, even shocked, me; others made me sad; still others left me conflicted.

All evaluations, at some point, are about learning what did not work and how to make improvements, but my experience suggests that although this is often said, the spirit of the sentiment is muted. The questions that arise for me around "program failure" and "missed targets" (or whatever we call them) are these: Who falls through the cracks when we only look for the positive story? What trends are we missing when we focus on the positive story and minimize the blow of the negative one? What system-level issues are we missing an opportunity to address because we are more intent on the ROI, the proverbial return on investment? What could funders know that they do not know, were it not for the pressure, even imperceptible pressure, on grantees to be good custodians of the funding they receive? If a project or program is intended to build capacity in non-profit organizations (NPOs) and 35% of them miss their outputs and outcomes, is this something a funder analyzes and tries to problem-solve around? As evaluators, what is our role in changing the narrative around the negative story, and around the pressure to report a positive one? How do we position ourselves around the "negative" stories we unearth? If a program intended to help 80 youth feel empowered to discuss difficult topics reaches 52 youth (short of, say, an 80% benchmark of 64), that is 28 youth who have not been reached. Twenty-eight youth who "fell through the cracks". The program may or may not be funded again, and that is the prerogative of the decision-makers, but what have we lost because we did not put the negative story under the microscope?

This article was also published in the March edition of the Canadian Evaluation Society (CES) Newsletter.