The Greg Mortenson/Three Cups of Tea fabrication-versus-fact story matters.
It matters because we (try to) trust nonfiction authors, nonprofits, and ratings agencies to be truth tellers.
It matters because several ratings agencies came up with very different "ratings" of the Central Asia Institute (CAI), Mortenson's charitable foundation - and now we don't know whom to trust in this regard. 60 Minutes cited research on CAI from the American Institute of Philanthropy (AIP), which raised several concerns about the organization's financial statements. At the same time, however, Charity Navigator, another rating group, was giving CAI four stars, its highest mark. (I checked Charity Navigator at 8 pm PST on Sunday, April 17, 2011, after watching the 60 Minutes episode: four stars. At 12:30 pm on Monday, April 18, 2011, the four-star rating was still there, with a flashing red "Donor Advisory.")
AIP noted a worrisome lack of audited statements. There is also a fair amount of confusion about what percentage of book sales and speaker fees goes to the charity, what goes to Mortenson, and what goes to the publisher/speakers agency. Here's one paragraph from the AIP report:
"If CAI is going to take credit for Mortenson's speaking engagements as a program expense of the charity, it makes sense that it should be entitled to a portion of any revenues, such as speaker's fees, generated at these events. Nowhere on CAI's 2008 tax form does it report revenue from speaker's fees, and CAI would not respond to AIP's question as to whether or not the charity receives any of the fees or ticket sales from these events."
Charity Navigator usually gets criticized for having its rating systems rely too heavily on financial information. This case makes it appear that it didn't review that financial information carefully enough (or at least not as carefully as AIP did).
What is a donor to do when one rating agency gives four stars to an organization that raises red flags for another rating agency?
This is so inside baseball you may be rolling your eyes. As a hyperbolic mind game, compare the role that nonprofit ratings organizations are trying to play to the role that Moody's and S&P play. Those two companies rated the subprime mortgages that banks turned into mortgage bonds - the bonds that subsequently became known as "toxic assets." We know where that got us. The banks depended on the raters to package a product they could sell. If donors depend on raters to inform their donations, then there had better be some accuracy, reliability, and credibility to the raters and their processes.
If nothing else, this is a really unfortunate, highly visible case of the problems with #embeddedgiving: if "up to 7% of sales goes to charity" is advertised, then anywhere from 0% to 7% may actually be at work. These distributions are not easy to track (as AIP notes). From a perception standpoint, if you asked Mortenson's 1,000+ person audiences where their money was going, I bet they'd say "to educate girls in Pakistan and Afghanistan." They wouldn't say "0-7% of it helps girls; the rest helps Mr. Mortenson and his publishers."
And that's just the "boring" financial stuff. I also got into a bit of Twitter back-and-forth about who is benefiting and by how much. Sunday night I sent out this tweet:
Which led to this back and forth:
Part of the 60 Minutes story (VIDEO) highlighted this report from Jon Krakauer, in which he claims that Mortenson exaggerated how many schools he built. The 60 Minutes team visited schools that were empty, questioned primary sources featured in Mortenson's story who denied his descriptions of them, and provided documentation that cast doubt on the number of schoolchildren served.
I have no idea whether Krakauer is out to get Mortenson for some reason, whether school was out of session, or to what degree the numbers of schools or beneficiaries may have been exaggerated - and that's the point. These doubts should affect what we think of the work. Unlike my Twitter colleague, I don't agree that "there may be exaggeration, doesn't discount work done." It absolutely discounts the work done - millions of dollars to educate tens of thousands of schoolchildren is one thing. Millions of dollars to build fewer schools and educate fewer children is something else altogether. It is exactly a discounted effect. And it is a troublesome perception and trust issue.
Here's the short version: if the professional raters can't determine what funds are going where, how do you expect a small donor to do so? If the raters can't agree on the financials, let alone on whether the claimed number of schools were built and kids were educated, what's a donor to do? We've got a concentric-circle trust problem if we can't depend on nonprofits to report correctly or on the rating agencies to rate properly (or at least consistently).