Saturday, December 11, 2004

King and McGrath’s Knowledge for Development: Comment 7

Comments on King and McGrath’s Knowledge for Development, continued.

Monitoring and Evaluation

King and McGrath include the monitoring and evaluation programs of donor agencies in their discussion of knowledge systems within the agencies. This raises some questions in my mind.

Do monitoring and evaluation, as practiced by donor agencies, produce knowledge? Should they be expected to? Indeed, do they produce more information than disinformation?

In my experience, project and program evaluations are most often done by hiring people to interview other people. Often the interviewers are not trained in interview technique. Those interviewed can be expected to believe that their responses will influence future decisions about funding follow-on activities, and thus to have an incentive to answer favorably. Often the lack of understanding of methodological issues among those planning and carrying out evaluations goes far beyond a lack of interviewing skills.

I vividly recall a call going out in a donor agency for “reliable people” to do “impact evaluations”. I have always assumed that the people selected were those who could be relied upon to make the agency look good, not those who could be relied upon to produce the most unbiased, complete, and accurate picture of the impacts.

Self-evaluation is often asked of those whose interests lie in making their efforts look good -- be they donor agency officers, project implementers, or host government officials.

There are all sorts of epistemological problems with evaluations as they are commonly done. They tend to compare project efforts and accomplishments against objectives stated in pre-project documentation, yet there seems to me little reason to believe that such objectives were intended as accurate predictions when written, or that they could be, given the difficulty of projecting complex processes years in advance. Attribution of causality is invariably difficult in the complex evolution of development projects and programs. Often the important factors in multicausal, statistically indeterminate models are not fully identified, much less measured and given their proper weight in analysis. And there are grave difficulties in making the counterfactual estimates needed to measure the difference between what actually happened and what would have happened without the intervention.
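To make the counterfactual problem concrete, here is a minimal sketch in Python with invented numbers. It contrasts a naive before/after comparison with a difference-in-differences estimate, one standard (if imperfect) way of approximating what would have happened without the intervention; the scenario, groups, and figures are all hypothetical.

```python
# Hypothetical illustration: why a naive before/after comparison can
# mislead, and how difference-in-differences tries to approximate the
# counterfactual. All numbers are invented for the example.

# Observed outcomes (say, average household income, arbitrary units)
treated_before, treated_after = 100.0, 130.0  # communities with the project
control_before, control_after = 100.0, 120.0  # comparable communities without it

# Naive estimate: attribute the entire change to the project.
naive_effect = treated_after - treated_before        # 30.0

# Difference-in-differences: use the control group's change as a proxy
# for what would have happened to the treated group without the project.
background_trend = control_after - control_before    # 20.0
did_effect = naive_effect - background_trend         # 10.0

print(f"Naive before/after estimate:        {naive_effect}")
print(f"Difference-in-differences estimate: {did_effect}")

# Even this estimate rests on an untestable assumption: that the treated
# and control groups would have followed parallel trends absent the
# project. That assumption is precisely the kind of counterfactual claim
# that is so hard to establish in complex development settings.
```

Even in this toy case, two-thirds of the naive estimate turns out to be background trend rather than project effect, which is the point: without a credible counterfactual, an evaluation can attribute to the intervention changes that would have happened anyway.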

Certainly, fiduciary prudence in program and project management leaves no alternative to monitoring efforts in progress and to evaluating them ex ante, during implementation, and ex post. Still, considerable care should be used in assessing the quality of the information that monitoring and evaluation produce, and even more care in assessing the quality of the knowledge.

Timeliness

There is an issue of the timeliness of knowledge that King and McGrath might have considered in more detail. Knowledge that arrives too late is of little help. Knowledge that arrives too early must be saved until needed, and may be lost in the process. Just-in-time knowledge is perhaps to be preferred by utilitarian criteria, but knowledge may also be given an intrinsic value less dependent on timeliness.
