It's equally wearying to hear you going on as if not having certain credentials ipso facto means one's critical thinking abilities are non-existent, not to mention slapping down "this was published in a reputable peer-reviewed journal" as if that alone were evidence of a study's validity. Most of what gets published in journals ultimately turns out to be wrong. That's not an indictment of the scientific method, because it's generally other scientists who conclude that they can't replicate the findings. Nor is it an indictment of peer review, because peer review isn't supposed to establish the veracity of a study's findings; it's supposed to filter out work with obvious methodological flaws, or that is otherwise plain garbage. And even when peer review fails at that, it doesn't much matter, because the real review of a study comes after publication, when other scientists read it and say "huh, that's interesting; let's see if those results can be replicated" (preferably with a more rigorous methodology). Which means that passing peer review may be a necessary condition of a study's validity, but it is not a sufficient one; the sufficient condition is replication of the results by other researchers.
And yes, I'm aware the Branas study was a case-control study, but do you seriously want to tell me there was no application of econometrics involved in deciding how to compensate for the confounding factors between the study population and the control group? I mean, Branas et al. picked just about the worst imaginable way to come up with a control group, namely randomly calling land lines in Philadelphia, when the study population showed marked tendencies to be exactly the kind of population least likely to possess a land line. That means you're going to have to do a metric assload of "controlling for confounding factors," and there's no way to know how to go about that until the data has been gathered and certain trends start to emerge. You can't incorporate that into the study at the design stage, because at that stage you do not yet know what the numbers are going to be, and thus how they are going to have to be crunched. Even accepting that "anywhere that data is involved, you will find regressions," it strikes me, as an admitted layman (with some college-level statistics classes), that "controlling" for a confounding factor after the fact will produce a result that is less reliable than if you'd eliminated that variable in the selection of your control group. I mean, it can't possibly be a novel idea that you should seek to make your control group resemble the study group in as many ways as possible apart from the variable you want to measure, yes?
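To make that layman's worry concrete, here's a toy simulation with made-up numbers (nothing to do with Branas's actual data): a binary confounder Z drives both the exposure X and the outcome Y, while X has no causal effect whatsoever. The crude odds ratio comes out well above 1, and stratifying on Z pulls it back to roughly 1 — but only because Z was measured. A confounder you didn't think to measure, or couldn't measure because your control group was sampled by landline, never gets adjusted away.

```python
import random

random.seed(0)

# Hypothetical toy numbers, NOT Branas's data: Z is a confounder
# (say, living in a high-risk environment) that drives both the
# exposure X and the outcome Y. X has no causal effect on Y here.
N = 100_000
rows = []
for _ in range(N):
    z = random.random() < 0.3                   # 30% in high-risk stratum
    x = random.random() < (0.6 if z else 0.1)   # exposure depends only on Z
    y = random.random() < (0.4 if z else 0.05)  # outcome depends only on Z
    rows.append((z, x, y))

def odds_ratio(rs):
    # Classic 2x2 odds ratio: (exposed&outcome * unexposed&no-outcome)
    #                       / (exposed&no-outcome * unexposed&outcome)
    a = sum(1 for _, x, y in rs if x and y)
    b = sum(1 for _, x, y in rs if x and not y)
    c = sum(1 for _, x, y in rs if not x and y)
    d = sum(1 for _, x, y in rs if not x and not y)
    return (a * d) / (b * c)

crude = odds_ratio(rows)                            # confounded, well above 1
or_high = odds_ratio([r for r in rows if r[0]])     # within high-risk stratum
or_low = odds_ratio([r for r in rows if not r[0]])  # within low-risk stratum
print(f"crude OR = {crude:.2f}, stratified ORs = {or_high:.2f}, {or_low:.2f}")
```

The point being: stratification only rescues you for confounders you actually recorded. Everything else stays baked into the "crude" number, which is why matching the control group up front matters.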
And then the point remains that Branas et al. acknowledge in the text (but not in the abstract or the press release) that they "did not account for the potential of reverse causation between gun possession and gun assault." The whole claim that "carrying a gun makes you 4.5 times as likely to get shot" rather falls apart when it turns out the researchers didn't account for the possibility that individuals who consider themselves at an increased risk of being shot (because they're engaged in criminal enterprise, etc.) are more likely to carry (almost certainly illegally) as a result.
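The reverse-causation objection fits on the back of an envelope. With entirely hypothetical figures, chosen only for illustration: suppose carrying has zero effect on whether you get shot, but a high-prior-risk group (say, people involved in crime) both carries more and gets shot more. The naive odds ratio still comes out far above 1.

```python
from fractions import Fraction as F

# Hypothetical expected counts (illustrative only, NOT Branas's data).
# Carrying has ZERO causal effect on being shot; a latent "high prior
# risk" group both carries more and gets shot more, so anticipated
# risk drives carrying -- not the other way around.
groups = [
    # (size, P(carry), P(shot)) -- shot is independent of carrying
    (1000, F(7, 10), F(1, 2)),    # high prior risk
    (9000, F(1, 20), F(1, 100)),  # low prior risk
]

# Expected counts in the pooled 2x2 table of carry vs shot
a = sum(n * pc * ps for n, pc, ps in groups)              # carry & shot
b = sum(n * pc * (1 - ps) for n, pc, ps in groups)        # carry & not shot
c = sum(n * (1 - pc) * ps for n, pc, ps in groups)        # no carry & shot
d = sum(n * (1 - pc) * (1 - ps) for n, pc, ps in groups)  # no carry & not shot

odds_ratio = (a * d) / (b * c)
print(float(odds_ratio))   # ~16.3 despite zero causal effect of carrying
```

Tweak the hypothetical probabilities and that "risk multiplier" swings all over the place, which is exactly why a study that doesn't model this channel can't pin its headline number on the carrying itself.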