When people object to the invasion of privacy that this would involve, they are dismissed. Surely the prevention of these terrible tragedies would justify a minor intrusion in the form of a screening tool.
The focus on privacy and rights issues is reasonable. But it ignores a more difficult statistical issue that would almost certainly doom any form of mental health screening to failure.
Let’s see why.
Imagine that the number of people who will commit mass murder is 0.1% of the population – meaning that one in a thousand will someday take out their guns and start shooting.
In fact, the number is much, much smaller than this. If there were so many perpetrators the murder rate would be enormously higher than it is. Having an inflated figure like this minimizes (rather than exaggerates) the problem we’re going to focus on, however, so it’s a good working estimate.
Next, imagine that psychologists have developed a screening tool that is 90% accurate at identifying people who will become mass murderers. In fact, this is hugely optimistic. Few psychological tests have this level of accuracy and it is virtually inconceivable that we will ever have one so good at identifying the propensity to mass violence.
So: a 90% accurate test. What’s the error rate in people identified as being at risk?
If you said 10%, you’re wrong.
The correct answer is “It depends on the proportions of targets (truly dangerous people) and misses (people who will never do this) in the population.”
In this example, the error rate is about 99.1%.
It can’t be.
It is. Let’s see how this comes about.
Imagine a town of 10,000 people (or a school district with 10,000 students, if all we care about is school shootings). We’re going to test everyone.
We know that 0.1% will go on to commit mass murder if we don’t intervene. That’s 10 people in this idealized town. The test is 90% accurate, so it correctly identifies 9 of them as being at risk. One of them is missed. He (let's assume he's male for the moment) is a false negative.
We know that 100% - 0.1% = 99.9% of people will never commit this sort of crime. In a town of 10,000, that’s 9,990 people. Our test has a 10% error rate, so it mis-identifies 999 people as being at risk. These are our false positives.
How many people does the test say are at risk overall? 999 (the false positives) + 9 (the real targets) = 1008.
Of this 1008, what proportion are really the people we want? 9 / 1008 = .0089 = 0.89%.
How many are errors? 100% - 0.89% = 99.11%. We’re going to intervene with over 1000 people in this town even though we know that 99.1% of them are misidentified.
There’s the rights issue.
(Of course, we have also correctly identified 8,991 people as not being at risk, and they're not. Pretty good. But we could do even better by doing no testing at all and assuming no one was at risk: we'd be wrong about only the 10 real targets, for an overall error rate of just 0.1% – but then we'd miss all of our problem people.)
The error rate for our positives (the false positives) doesn't include all of our mistakes, because we also missed one of the ten who are at risk. This is a false negative (the test declared them safe even though they weren’t).
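The whole worked example above can be checked with a few lines of code. This is just a sketch using the article's hypothetical figures – a 0.1% base rate and a test that is 90% accurate for both groups:

```python
# Hypothetical figures from the example above.
population = 10_000
base_rate = 0.001   # 0.1% will someday commit mass murder
accuracy = 0.90     # the test is 90% accurate, for both groups

true_targets = population * base_rate             # 10 people
true_positives = true_targets * accuracy          # 9 correctly flagged
false_negatives = true_targets * (1 - accuracy)   # 1 missed

non_targets = population * (1 - base_rate)        # 9,990 people
false_positives = non_targets * (1 - accuracy)    # 999 wrongly flagged

flagged = true_positives + false_positives        # 1,008 flagged in total
precision = true_positives / flagged              # share of flags that are real

print(f"Flagged as at risk: {flagged:.0f}")
print(f"Truly at risk among the flagged: {precision:.2%}")
print(f"Misidentified among the flagged: {1 - precision:.2%}")
```

Running it reproduces the numbers in the text: 1,008 people flagged, of whom only 0.89% are real targets and 99.11% are false positives.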
Okay, but this is an artificial example. What if we plug in figures that are closer to the real world?
The proportion of errors increases the rarer the targets are in the tested population. So if mass murder is committed by only 1 in 10,000 people, or a more realistic 1 in 100,000 (lifetime), the error rate rises enormously.
The errors are also higher if the test is less accurate – which is likely. As I’ve said, it’s vanishingly unlikely that we will ever have a test as accurate as 90%. As the accuracy declines, the error rate also rises.
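Both of these effects are easy to see by turning the arithmetic into a small function. This is a sketch, not a real screening model – the base rates and accuracy figures below are illustrative, as in the rest of the article:

```python
def misidentified_share(base_rate, accuracy):
    """Fraction of flagged people who are NOT actually at risk,
    assuming the test is equally accurate for both groups."""
    true_pos = base_rate * accuracy
    false_pos = (1 - base_rate) * (1 - accuracy)
    return false_pos / (true_pos + false_pos)

# Rarer targets, same 90% accuracy: the error rate among the flagged climbs.
for rate in (1 / 1_000, 1 / 10_000, 1 / 100_000):
    share = misidentified_share(rate, 0.90)
    print(f"base rate 1 in {1 / rate:,.0f}: {share:.2%} of flags are wrong")

# Same 1-in-1,000 base rate, less accurate tests: the error rate also climbs.
for acc in (0.90, 0.80, 0.70):
    share = misidentified_share(1 / 1_000, acc)
    print(f"accuracy {acc:.0%}: {share:.2%} of flags are wrong")
```

At a 1-in-1,000 base rate the 90% test misidentifies 99.11% of the people it flags; at 1 in 100,000 that rises to about 99.99%, and dropping accuracy to 70% pushes it higher still.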
Some readers will point out that there are two types of error: false positives (the 999 citizens said to be at risk who are not) and false negatives (the 1 in 10 future shooters who is missed). I’ve assumed that these two error rates are the same – 10% in both cases. Can we really assume this?
No – the two trade off against each other. If you increase the sensitivity of the test to ensure that everyone at risk is identified, you also increase the number of false positives. So the 99.11% error rate would climb even higher.
And if you decrease the sensitivity of the test so that fewer people are mistakenly identified as being at risk, you also start missing the real targets. So the whole point of the screening is quickly lost.
The result: mass screening of a population to identify relatively low-frequency targets simply does not work well unless you have a near-flawless test – and usually you don’t.
Does this apply elsewhere?
I’ve used mass murder for the example, but the same principle operates in all forms of testing for low-frequency characteristics in a population: suicide, psychosis, other forms of mental illness – and many types of physical health screening as well.
This is why we keep hearing that many of the screening tools used in physical medicine are problematic. Even if the tests are reasonably good, they often produce high numbers of false positives among the healthy population. It may still be useful to use many of them, but one has to keep in mind the true error rate. We cannot assume that a test that is "90% accurate" will produce only 10% errors. In most cases, the error rate will be much higher.
What does this say about psychology’s ability to prevent mass murder?
The prevention of these crimes is an extremely difficult task. As I’ve said elsewhere ("Tragedy and Mental Health"), the idea that mental health services, properly enhanced, will reduce the rate of mass murder is simply mistaken.
Distress, depression, bullying, and mental illness are common aspects of the human condition. For the most part, they do not result in mass casualty events.
If we want to take mass murder seriously, we need to stop pointing to the mental health professions as the potential saviors. We are not, and in my view we never will be.
But what else can we do?
Well, uh, about those guns …
Why are you on about this?
Stop looking at us.
There is no "other."
Every time there is a mass murder in the United States, people point to the problems associated with the gun culture. They argue that putting some sensible restrictions on gun availability, registration, and ownership is the only reasonable step.
Then the National Rifle Association winks, Yoda-like, and says "No, there is another," looking sidelong at the mental health professionals over in the corner.
Well, you can stop winking, Yoda. We'd love to help, but we do not have the Force on our side.
We are not your solution, and we never will be.