Monday, May 01, 2006

Knowledge from forensic science

The Washington Post this morning has an article describing the limitations and uses of polygraph machines as "lie detectors".

I think most people seriously misunderstand the nature and use of evidence produced by polygraphs, fingerprint analysis, DNA, and other aspects of forensic science. This is a serious problem in systems that depend on trial by jury (and perhaps also in systems that depend on a professional judge to decide guilt or innocence, since judges too may lack understanding of the nature of this evidence). Indeed, it illustrates a basic problem with the use of evidence in the creation of knowledge and understanding.

Part of the problem with forensic evidence is that during the 20th century, popular media presented forensic science as infallible. Sherlock Holmes would identify the culprit without the shadow of a doubt from the slimmest piece of physical evidence. Lots of TV programs now feature crime scene investigators and coroners who are never wrong.

Things don't work that way in the real world. In the case of the polygraph, for example, there is a chance that a response identified by the operator as false will in fact be true, and a chance that a response identified as true will in fact be false. I don't think that the probabilities of those events are quantified in the literature, but even if they were, the probabilities would not generalize. Some operators are better than others, and some people are harder to test than others. Indeed, in the case of a senior official of the CIA, such as the one in the news today, it seems quite possible that she knew a lot about how to avoid detection in a polygraph-based interrogation. There are also differences in the situation: in some situations the person being interrogated is likely to be in an emotional or physical state that makes interpretation of physiological changes difficult. Moreover, the polygraph examination is a social process, and differences in the social situations of operator and person being interrogated may influence the outcome. Thus, while "Type 1" and "Type 2" errors must exist, it would seem impossible to quantify their probabilities for a specific interrogation.
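Even if the error rates were quantified (and, as noted above, they are not), the base rate of deception among those tested would still dominate how a "deceptive" reading should be interpreted. The sketch below uses invented numbers purely for illustration; none of them come from the polygraph literature.

```python
# Hypothetical illustration only: all three rates below are assumed,
# not measured, since (as the text argues) real polygraph error rates
# are not reliably quantified.
p_deceptive = 0.05       # assumed share of examinees who are lying
sensitivity = 0.90       # assumed P(flagged as deceptive | lying)
specificity = 0.85       # assumed P(cleared | truthful)

true_flags = p_deceptive * sensitivity              # correctly flagged liars
false_flags = (1 - p_deceptive) * (1 - specificity) # truthful people flagged

# Probability a flagged examinee is actually lying:
p_lying_given_flag = true_flags / (true_flags + false_flags)
print(round(p_lying_given_flag, 2))  # 0.24
```

Under these assumed numbers, roughly three out of four examinees flagged as deceptive would in fact be truthful, because truthful examinees vastly outnumber liars. The point is not the specific figure but that a Type 1 error rate cannot be interpreted without the base rate.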

In the case of fingerprint identification, I understand that the accuracy depends on the number of points of comparison that the expert is able to extract from a fingerprint -- the more points, the more precise the identification. I suspect that very few jurors understand this simple fact. I suspect that fingerprint examiners can't really state a probability that a given fingerprint was made by this person rather than by another. And I suspect that even if they did make such an approximation, and it were in the ballpark, people would not know how to use the information.

DNA interpretation seems more quantitative, but the probability estimates made for such evidence don't seem to include the probability that there would be a mix-up of the samples tested, nor a laboratory failure, nor indeed willful misconduct on the part of someone in the chain of evidence.
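The point above can be made concrete with a back-of-the-envelope calculation. The figures below are assumed for illustration: a headline random-match probability of the sort quoted in court, and a much larger (but still hypothetical) rate of sample mix-up or laboratory error.

```python
# Assumed figures, for illustration only: the quoted random-match
# probability versus a hypothetical rate of mix-up or lab error.
p_random_match = 1e-9   # "one in a billion" headline figure (assumed)
p_lab_error = 1e-3      # chance of mix-up, contamination, etc. (assumed)

# The chance the reported "match" is wrong for ANY reason is the union
# of the two independent failure modes -- and the lab-error term
# dominates by six orders of magnitude.
p_false_match = p_random_match + p_lab_error - p_random_match * p_lab_error
print(p_false_match)  # roughly 0.001, not 0.000000001
```

Whatever the true numbers, the overall chance of error can never be smaller than the weakest link in the chain of evidence, so quoting only the random-match probability overstates the certainty of the result.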

In theory, Bayesian statistics offers an approach in which one can incorporate each additional piece of information into what is already on hand and adjust the probabilities of guilt and innocence accordingly. In reality, no one really does so. But simply understanding the ideas behind Bayesian statistics helps. We start with a priori probabilities of a person being guilty or innocent. Each piece of evidence, including each piece of forensic evidence, changes our estimates of the conditional probabilities. Some forensic evidence has very strong effects on the a posteriori probabilities. Thus blood type evidence can essentially rule out paternity in some cases, as can DNA evidence. More generally, forensic evidence just makes guilt more or less likely.
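The updating process described above can be sketched in its simplest (odds) form. Each piece of evidence carries a likelihood ratio -- how much more probable the evidence is if the person is guilty than if innocent -- and the posterior odds are just the prior odds multiplied by each ratio in turn. All the numbers below are invented for the sketch.

```python
# A minimal sketch of Bayesian updating in odds form.
# Every number here is assumed, purely for illustration.
prior_odds = 1 / 99                # a priori: a 1% probability of guilt

# Likelihood ratios P(evidence | guilty) / P(evidence | innocent) for,
# say, a DNA match, a partial fingerprint, and an alibi witness:
likelihood_ratios = [50, 3, 0.5]

posterior_odds = prior_odds
for lr in likelihood_ratios:
    posterior_odds *= lr           # odds form of Bayes' theorem

# Convert odds back to a probability of guilt.
posterior_prob = posterior_odds / (1 + posterior_odds)
print(round(posterior_prob, 3))  # 0.431
```

Note how the alibi witness (likelihood ratio below 1) pulls the probability back down: evidence can cut either way, and no single piece settles the matter by itself.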

Lawyers also distinguish between "fact witnesses", "expert witnesses" and "character witnesses". Unfortunately, a fact witness may be wrong, or may not tell the truth as he sees it -- the testimony may not be "factual". Each piece of evidence from such witnesses again should be weighed, and used to adjust the probabilities assigned to the alternative outcomes. Character witnesses unfortunately come with built-in biases, and character evidence seems especially hard to interpret.

I was impressed recently that there seemed to be a view that a political appointee in the FAA would be an "expert witness" about what could or could not have been done with statements by a suspected terrorist to stop 9/11. It might indeed have been interesting to hear such testimony, but how could one have evaluated it? How accurately could anyone predict such a thing? How much would the answer be influenced by the political considerations?

The legal system, with its judges, courts and juries, has defined rules of evidence and procedures for coming to judgment. The results are what I have termed "legal knowledge". That is, the results are verdicts of guilt or innocence, as determined by authorized legal processes. (There are, fortunately, other legal ways to ponder evidence which help to deal with situations in which judges and juries are likely to have grave difficulties weighing the evidence and making valid determinations.)

I have suggested that many institutions in our society have their own, alternative knowledge systems. All bureaucratic organizations have such a system, as do all legislative bodies, the military, and markets. Thus, weighing evidence is obviously important in other fields.

We live with what seems to have been a very bad weighing of the evidence with respect to the existence of weapons of mass destruction in Iraq, and of the likelihood of the Saddam Hussein government transferring such weapons to Al Qaida operatives. The evidence seems especially dodgy in the field of espionage, and the need for expert witness and judgment especially intense in the field of WMDs. One interpretation of the failure that occurred in the run-up to the Iraq war is that there was a serious failure in the interface between the political knowledge system of the Bush administration and the professional knowledge system of the intelligence community of the United States and its allies. (From what little I know, the knowledge systems of the intelligence community seem to be explicitly defined, and the result of some generations of evolutionary modification.)

I wonder whether there is regularly an "impedance mismatch" between knowledge systems of different institutions? Do we regularly see knowledge from bureaucratic systems misunderstood by the legal or legislative systems? Or knowledge from scientific systems misinterpreted by governmental environmental bureaucracies? Perhaps!

If one thinks about knowledge for development, one might question the effectiveness of other knowledge systems. Do we use evidence well to make economic, agricultural, public health, or other policies and decisions? Certainly there has been a lot of criticism of knowledge systems in donor agencies such as the World Bank and USAID. Perhaps some more explicit discussion of the nature of the evidence that we use, the kinds of witnesses we employ, and the processes we use to weigh that evidence might bear fruit in better development policies and projects.

1 comment:

UAB MSFS Program said...

excellent commentary, john. as a forensic science educator, i try to emphasize the reality vs. tv battle over infallibility. i linked to your comments in a newly formed forensic science blog (beta). -jgl