DanTex Donating Member (734 posts), Mon Sep-19-11 12:00 AM
Response to Reply #302
307. The FP thing.
Edited on Mon Sep-19-11 12:14 AM by DanTex
We might have to take this part by part because it's getting long, and (believe it or not) I occasionally have other things I need to do besides argue about guns on the internet... Anyway, I'll get to what I can.

So the reason I bring Kleck up is that the pro-gunners always adopt super-skeptical viewpoints toward most of the gun studies, but then when it comes to Kleck, a significantly shakier result scientifically, suddenly the skepticism disappears and the pro-gunners are willing to accept all sorts of questionable assumptions and omissions.

You could resolve the FP thing pretty quickly by simply finding the part of Kleck's paper where he acknowledges the FP sensitivity issue and demonstrates that in this particular survey the FP rate is well below 1%. I'll give you the link again, in case this time you decide to actually search for it.
http://www.guncite.com/gcdgklec.html
And I mean not just generalities, but something like "We need an FP rate of well below 1%, which is admittedly a tall order, but here is why we are able to achieve this". You ask what kind of evidence I am looking for, but, see, that's not my problem, that's Kleck's problem -- he's the one making the extraordinary claim, remember. Maybe he could point to other surveys with FP rates below 1%, and show that he used similar countermeasures.

In any case, you don't find any justification of the sub-1% FP rate. Instead you get stuff like this:

If one were committed to rejecting the seemingly overwhelming survey evidence on the frequency of DGU, one could speculate, albeit without any empirical foundation whatsoever, that nearly all of the people reporting such experiences are simply making them up. We feel this is implausible. An R who had actually experienced a DGU would have no difficulty responding with a "no" answer to our DGU question because a "no" response was not followed up by further questioning. On the other hand, lying with a false "yes" answer required a good deal more imagination and energy. Since we asked as many as nineteen questions on the topic, this would entail spontaneously inventing as many as nineteen plausible and internally consistent bits of false information and doing so in a way that gave no hint to experienced interviewers that they were being deceived.

Suppose someone persisted in believing in the anomalous NCVS estimates of DGU frequency and wanted to use a "dishonest respondent" hypothesis to account for estimates from the present survey that are as much as thirty times higher. In order to do this, one would have to suppose that twenty-nine out of every thirty people reporting a DGU in the present survey were lying. There is no precedent in criminological survey research for such an enormous level of intentional and sustained falsification.

The banal and undramatic nature of the reported incidents also undercuts the dishonest respondent speculation. While all the incidents involved a crime, and usually a fairly serious one, only 8% of the alleged gun defenders claimed to have shot their adversaries, and only 24% claim to have fired their gun. If large numbers of Rs were inventing their accounts, one would think they would have created more exciting scenarios.

By this time there seems little legitimate scholarly reason to doubt that defensive gun use is very common in the U.S., and that it probably is substantially more common than criminal gun use. This should not come as a surprise, given that there are far more gun-owning crime victims than there are gun-owning criminals and that victimization is spread out over many different victims, while offending is more concentrated among a relatively small number of offenders.


You know, in science you're supposed to bend over backwards to question your own results, point out all possible flaws or alternate explanations, etc. But Kleck is doing just the opposite, adopting a haughty and dismissive tone towards anyone who might question the results. As if to say "some idiot douchebags might want to deny that this awesome study is flawless, but they're obviously wrong, after all we asked up to 19 questions. So from now on nobody sane should doubt that I am right."

Kleck emphasizes how hard it is to "lie" (more precisely, give an incorrect answer), but what he misses is how easy it would actually be. You could exaggerate, you could recount an episode from a long time ago, you could claim someone else's DGU as your own, you could claim that an OGU was actually a DGU. Doing any of these would make it pretty easy to answer "up to 19" questions, most of which are fairly routine ("where did it take place... did you know the offender... did you shoot..."). And then there's the fact that, as you can personally attest, gunners most certainly do play out DGU scripts in their heads, they do consider DGUs to be positive things, and DGU stories do circulate among gunners, meaning most gunners inclined to give a false DGU report would have a reference story.

More importantly, notice the second paragraph here, where he claims that if the survey were flawed "one would have to suppose that twenty-nine out of every thirty people reporting a DGU in the present survey were lying." Now, while this is technically true, it is grossly misleading. What he is doing is dressing up a 1% FP rate as "29 out of 30" people lying. Based on what you have been saying, I think your understanding of statistics is sophisticated enough to see why this is misleading. Loosely speaking, it's because "people reporting a DGU" is not a naturally occurring subset of the people surveyed, since it depends on the survey answer. No, that is not a technically precise description, but I think you get what I mean.
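Just to make the arithmetic concrete, here's a quick back-of-envelope sketch (the sample size and percentages are round numbers of mine, not Kleck's actual figures) showing that "29 out of 30 reporters lying" and "roughly a 1% false positive rate across the whole sample" describe the exact same situation:

```python
# Illustrative round numbers only -- not Kleck's actual sample figures.
respondents = 5000
reported_yes = 0.01 * respondents       # ~50 people answer "yes" to the DGU question

# Kleck's framing: "29 out of every 30 people reporting a DGU were lying"
false_reports = reported_yes * 29 / 30  # ~48 incorrect "yes" answers

# The same situation described the other way around:
fp_share_of_sample = false_reports / respondents
print(f"{fp_share_of_sample:.2%} of all respondents giving a wrong 'yes'")  # ~0.97%
```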

And I do hope you'd agree that Kleck at least has the duty to point out that a 1% false positive rate would invalidate the whole study. This is not a small issue, and not mentioning it is a pretty significant omission; in fact, I think the journal editors should have made Kleck put something like that in there (though admittedly I don't know anything about the standards or traditions of this journal). The journal did invite Hemenway to write a rebuttal, but in my opinion the acknowledgment that a sub-1% FP rate is a very demanding requirement belongs in the original paper. Kleck is trying to give the impression that the results of this study can only be explained away by a huge wave of dishonesty, but in reality only about 1% of the people surveyed would have to give incorrect answers. Switching the numbers around like this is something I'd expect (from anyone on any side of any debate) in a political press release, but not from a scientist.

He goes on to claim (without citations) that there is no precedent in criminology for this. Of course, the question is: do the other studies he is talking about here have a roughly 1% positive response rate? Because, as you and I now agree, if he is talking about a bunch of surveys with a 10% or higher positive response rate, that's a different story, and you and I should also agree that the comparison would then be misleading. Again, if getting the FP rate below 1% is routine, then the least he could do is provide a few explicit examples. But he doesn't. In fact, even in his rebuttal to Hemenway's rebuttal, he still fails to come up with any examples of surveys with such a low FP rate. He mentions some surveys with a high FN rate, but as you and I know, a high FN rate doesn't imply a low FP rate, and the FP rate is what matters here.

Granted, DGUs are very rare if by very rare you mean "in the range of 1% or less" (accepting your numbers for the sake of discussion). But this is not what Hemenway is saying. He is saying that DGUs are very rare, and by "very rare" he means "so much lower than 1% that they invalidate Kleck's numbers and the corroborating evidence generated by over a dozen other studies." That application of the “rareness” argument is what makes Hemenway’s point circular.

Again, Hemenway's argument is not circular. His argument is that, due to FPs, Kleck's study is consistent with both the hypothesis that the DGU rate is 1% and the hypothesis that it is 0.05%. Hemenway is correctly pointing out that there is no way to know the FP rate for sure. If the FP rate is close to 1%, then the DGU rate would be around 0.05%. If the FP rate is close to 0.05%, then the DGU rate is around 1% (again, these numbers are for argument's sake). So the point is that, unless Kleck has some real evidence that the FP rate is well below 1%, Kleck's study doesn't help distinguish between those two hypotheses, which means it provides no new information about the DGU count. Hemenway is not simply assuming that DGUs are so rare as to invalidate Kleck's study -- he's just saying that they could be very rare despite the results of the survey, and that to determine the DGU rate you have to look at other evidence. The conclusion that 0.05% is closer to the truth comes from this other evidence. Evidence like NCVS.
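To spell that out, here's a minimal sketch (again, round numbers of mine, purely for argument's sake) of why the two hypotheses are indistinguishable from the survey alone when the FP rate is unknown:

```python
# Two very different "true" DGU rates produce the same observed ~1% of "yes" answers,
# depending entirely on the unknown false positive rate. (Illustrative numbers only;
# false negatives among genuine DGU-ers are ignored since they barely matter here.)

def observed_yes_rate(true_rate, fp_rate):
    # observed "yes" = genuine DGUs + incorrect "yes" answers from everyone else
    return true_rate + (1 - true_rate) * fp_rate

print(observed_yes_rate(true_rate=0.0095, fp_rate=0.0005))  # ~0.01: DGUs genuinely ~1%
print(observed_yes_rate(true_rate=0.0005, fp_rate=0.0095))  # ~0.01: DGUs rare, mostly FPs
```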

Which brings us to the second part of the argument, which is external validation. Yes, I omitted this in my last post, but I mentioned it in another post a while back. The point is that if you look to other sources of data to validate the DGU number, you consistently find that Kleck's numbers are off by an order of magnitude.

Now, Kleck will point out that there are 10-20 other DGU studies that support his number, but those studies use a very similar methodology and are therefore subject to the same difficulties. Another instance of Kleck's misleadingness is to suggest that the NCVS numbers are "anomalous" because they are inconsistent with a whole bunch of phone surveys. Of course, NCVS is not just one study; it gets repeated every year (or is it every other year, I forget and am too lazy to look it up). So what we have is a situation where two different methodologies consistently produce different estimates of the DGU number, and both methodologies have been repeated many times.

A word on NCVS. In order for Kleck's numbers to be right, NCVS must have a false negative rate of about 95% for DGU reporting. And, unlike the "29 out of 30" number Kleck presents, the 95% false negative rate is not similarly misleading, because "people who had a DGU" is indeed what I described above as a "naturally occurring" subset of the study group. This is because whether someone actually had a DGU does not depend on what they said to NCVS (as opposed to "people who said they had a DGU", which does depend on the response). So, for Kleck's numbers to be right, 95% of this naturally occurring group of people would have to have given incorrect info to NCVS.
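For what it's worth, here's the shape of the arithmetic behind that ~95% figure (I'm using a round 100K as a stand-in for the NCVS-range DGU estimate; treat it as an order-of-magnitude placeholder, not an exact figure):

```python
# Hedged arithmetic: the NCVS false negative rate implied if Kleck's estimate were right.
kleck_annual_dgus = 2_500_000
ncvs_annual_dgus = 100_000      # placeholder in the usual NCVS range, not an exact figure

implied_fn_rate = 1 - ncvs_annual_dgus / kleck_annual_dgus
print(f"implied NCVS false negative rate: {implied_fn_rate:.0%}")  # -> 96%
```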

I won't go on too much about external validation, because I wrote a lot about it in that other thread, but the fact remains that it poses a serious challenge to the Kleck numbers, because there are multiple external validation checks that his numbers fail pretty severely. And one general principle of experimental science is that a measurement corroborated by different kinds of evidence is a lot stronger than a measurement obtained using the same technique over and over. That's why the fact that 10-20 (or however many) phone surveys back the Kleck numbers is not nearly as significant as the fact that police reports, gunshot wounds, self-defense killings -- all these and other numbers -- match up very well with NCVS and not at all with Kleck.

I will point out that Kleck's responses to the external validation issue are grossly inadequate, to the point that either he is really missing something or he is purposely trying to skate by, avoiding the substantive questions and instead answering straw arguments. To give just one example, he claims that very few DGUs get reported to the police, even though his own survey results showed that over 60% are reported. Again, see that other post of mine for more detail on Kleck's inadequate responses.

I will mention one additional "sanity check" that the Kleck numbers fail: the number of lives supposedly saved by DGUs.
Since as many as 400,000 people a year use guns in situations where the defenders claim that they "almost certainly" saved a life by doing so, this result cannot be dismissed as trivial. If even one-tenth of these people are accurate in their stated perceptions, the number of lives saved by victim use of guns would still exceed the total number of lives taken with guns. It is not possible to know how many lives are actually saved this way, for the simple reason that no one can be certain how crime incidents would have turned out had the participants acted differently than they actually did. But surely this is too serious a matter to simply assume that practically everyone who says he believes he saved a life by using a gun was wrong.

So to start, 400K is completely preposterous, because it is some 20-30X higher than the annual homicide count. It would imply that, without DGUs, the US would have by far the highest homicide rate in the world, 2-3X the likes of Honduras or El Salvador. But even if you are prepared to believe that, you should realize that if this many lives were actually saved by guns, it would show up in the case-control studies like Branas or Kellermann, because whatever minor complaints you may have about the coding of guns in nearby cars or whatever else would never be able to obscure an effect this strong. You see, if guns stop 400K homicides per year, this would imply that people without guns are at far greater homicide risk; after all, they would be missing out on this essential safety tool. Since gun owners account for less than half of the population, why are we not observing 400K homicides among the non-gun-owning population? Is it that gun owners live a lifestyle that puts them at 20X (more, actually) the homicide risk, and that they need all those DGUs just to "break even"?
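Here's the rough arithmetic behind that claim, with ballpark early-1990s figures (all of these are approximate numbers of mine, just to show the orders of magnitude):

```python
# Sanity check with approximate early-1990s figures (ballpark numbers, not exact data).
us_population = 260_000_000
us_homicides = 22_000            # roughly the annual homicide count in that era
lives_saved_claimed = 400_000    # Kleck's "almost certainly saved a life" count

actual_rate = us_homicides / us_population * 100_000
implied_rate = (us_homicides + lives_saved_claimed) / us_population * 100_000

print(f"actual homicide rate: ~{actual_rate:.0f} per 100k")        # ~8
print(f"implied rate without DGUs: ~{implied_rate:.0f} per 100k")  # ~162
# The highest national rates usually cited (e.g. Honduras, El Salvador) are on the
# order of 60-90 per 100k -- hence the "2-3X" comparison above.
```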

The answer to all that is: of course not. So you get the "if even one-tenth of these people are accurate" line (and even one tenth is very hard to believe, for similar reasons). Here's the thing. Once you start finding numbers that need to be reduced by a factor of 10 just to be plausible, your estimates are completely worthless. If you are off by a factor of 10, something is severely wrong, you are way off the mark, and the number could just as easily be off by a factor of 100 or 1000. And in this particular case, you get all sorts of numbers that need to be divided by 10 or more to make sense; this is just one example, see that other post of mine for more.


You keep talking about hard data and facts; supposedly I'm afraid of them. So let's look at some data and facts. I'll use mostly your data and reasoning.

1) 1% or 2.5M is a result that can easily result from false positives (source: your post 262)
2) highest annual estimate of criminal gun use = 847,652 as of the time of Kleck's study (source: NCVS as cited by Kleck)
3) if 2,500,000 = 1%, then 847,652 = 0.34%

That's almost exactly 3 times as small. So let's apply your (and Hemenway's) logic:

Very good point, and you would already know the answer if you had read some of the other papers on DGUs besides Kleck's (for example Hemenway). The answer is explained here (see table 1). You see, NCVS is not the only estimate of criminal gun use. It is also possible to do a Kleck-style phone survey, and if you do that, predictably, you get a much larger number of criminal gun uses than NCVS reports. And, as that bulletin explains, if you compare the DGU and gun crime estimates from phone surveys, you find a lot more criminal uses. If you compare DGUs versus gun crimes from NCVS, again you find a lot more gun crimes. There is only one way to conclude that there are more DGUs than gun crimes. Do you know what it is?

You guessed it! You take the phone survey methodology for DGUs, and the NCVS methodology for gun crimes. The reason NCVS gives lower numbers in both cases seems to be primarily that NCVS asks about victimization first, and only later about details like gun use (the fact that NCVS is face-to-face and superior in other ways also contributes to the accuracy). This means that, with NCVS, you get fewer FPs and more FNs in both cases. And if you go through the external validation I discussed, there is pretty strong evidence that NCVS, while it probably underestimates both gun crimes and DGUs slightly, is much closer to the mark than the phone surveys. But even if you forget all that and insist that the Kleck methodology is superior, you should use that methodology for both DGUs and criminal uses. Don't you agree? I mean, it would be silly to use the Kleck phone survey methodology to measure DGUs, and then NCVS to measure criminal gun uses, and then compare those two numbers, given the obvious methodological differences. Yet that is exactly what Kleck does.
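To lay the four comparisons side by side, here's a sketch using the figures already in this thread plus two loudly-flagged placeholders: the phone-survey gun crime value below is not a real number, just a stand-in for "much larger than the NCVS figure", and the NCVS DGU value is the same rough placeholder I used earlier.

```python
# Apples-to-apples vs apples-to-oranges. Two of these values are placeholders (see comments).
estimates = {
    ("phone survey", "DGUs"):       2_500_000,  # Kleck-style estimate
    ("phone survey", "gun crimes"): 3_000_000,  # placeholder: "larger than the DGU estimate"
    ("NCVS",         "DGUs"):         100_000,  # placeholder in the usual NCVS range
    ("NCVS",         "gun crimes"):   847_652,  # the NCVS figure cited above
}

for method in ("phone survey", "NCVS"):
    dgus = estimates[(method, "DGUs")]
    crimes = estimates[(method, "gun crimes")]
    winner = "DGUs" if dgus > crimes else "gun crimes"
    print(f"{method}: {dgus:,} DGUs vs {crimes:,} gun crimes -> more {winner}")
# Only by mixing methods -- phone-survey DGUs against NCVS gun crimes -- do DGUs come out ahead.
```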


OK. On to the Wolfgang thing, you probably missed his clarification (emphasis mine):
For those who have not read Volume 86, Number 1 of the Journal of Criminal Law and Criminology on Guns and Violence Symposium, I would like to make clear that I had been asked to write only a commentary, not an original research article. I focused my commentary on an article titled Armed Resistance to Crime: The Prevalence and Nature of Self-Defense with a Gun by Gary Kleck and Marc Gertz.

Let me read the first and last paragraphs of the commentary that I originally made, titled A Tribute to a View I Have Opposed.

The first paragraph reads:

I am as strong a gun-control advocate as can be found among the criminologists in this country. If I were Mustapha Mond of The Brave New World, I would eliminate all guns from the civilian population and maybe from the police. I hate guns--ugly, nasty instruments designed to kill people.

The last paragraph of my commentary reads as follows:

The Kleck and Gertz study impresses me for the caution the authors exercise and the elaborate nuances they examine methodologically. I do not like their conclusions that having a gun can be useful, but I cannot fault their methodology. They have tried earnestly to meet all objections in advance and have done exceedingly well.

The usual criticisms of survey research, such as that done by Kleck and Gertz, also apply to their research. The problems of small numbers and extrapolating from relatively small samples to the universe are common criticism of all survey research, including theirs. I (Page 618) did not mention this specifically in my printed comments because I thought that this was obvious; within the specific limitations of their research is what I meant by a lack of criticism methodologically.

http://www.saf.org/LawReviews/WolfgangRemarks.htm

So, to review... You'll note that, first, he points out it was only a commentary, not a research paper, meaning that he hadn't gone through all the material in complete detail. And, of course, that other part I highlighted indicates that the KG paper was by no means immune from the general criticisms of survey research, which include things like watching out when your positive response rate gets down near 1%. Finally, you'll note that in the excerpt you quoted, he holds out the possibility that Frank Zimring or Philip Cook might find a flaw in the KG work. And in fact, none other than Philip Cook soon did come out with a paper on DGUs, making many of the points I have made here.

It is worth repeating that a lot of what Cook and Hemenway and I (and others, e.g. McDowall) are saying is precisely that when you try to validate the survey results from KG against external and often more concrete sources of data (police reports, homicides, gunshot wounds, etc.), that is where it becomes evident that the KG numbers are due to something other than legit DGUs, most likely FPs. And this is right in line with Wolfgang's comment about the problems with survey research. Nobody is denying that Kleck made a big effort to keep FPs down, but (contrary to Kleck's brazen assertions) that in itself is not enough to accept the results as correct, particularly in light of the other data available.

OK. That's it for now. I may get to some of the other stuff tomorrow if I have time, although as I've mentioned several times, I prefer to deal with the substance itself. To me the criminology vs pub health vs economics thing is kind of silly. Just look at the papers, look at the research. Also, if you really think the gun-in-nearby-car issue is some big problem that shows that public health people have no business studying gun violence, I'm not going to be able to change your mind. Good luck convincing a science-minded neutral party of that one, though...