...to repeatedly hear about how scientific evidence has been "debunked" on some random gun website by somebody with no scientific background. Really, it does.
If you read through that exchange between me and the other poster, you will find that he was denying that an odds ratio comes out the same whichever way you compute it, as I finally had to explain
here. This is not some shady statistical gimmick; it is algebra. I wonder whether you can bring yourself to acknowledge that I was in fact correct, or whether you now consider math itself to be an arbitrary authority.
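Since the symmetry claim really is just algebra, here's a quick sketch with made-up counts (the numbers are purely illustrative, not from any study):

```python
# Hypothetical 2x2 table (invented counts, just to show the algebra):
#                 outcome   no outcome
# exposed            a=30        b=10
# unexposed          c=20        d=40
a, b, c, d = 30, 10, 20, 40

# Odds ratio computed one way: odds of the outcome, exposed vs. unexposed.
or_outcome = (a / b) / (c / d)

# Odds ratio computed the other way: odds of exposure, cases vs. controls.
or_exposure = (a / c) / (b / d)

# Both reduce algebraically to the same cross-product ratio a*d / b*c.
print(or_outcome, or_exposure, (a * d) / (b * c))  # prints 6.0 6.0 6.0
```

This symmetry is exactly why case-control studies, which sample on the outcome rather than the exposure, can still estimate the odds ratio of interest.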
Anyway, I'll try to make up for it by giving a more detailed and less dismissive explanation of exactly what you fail to understand about regression models.
I can see that you like your Goertzel, and he does have some good points. But Branas was an observational study -- a case-control study -- and not an econometric model. The problem Goertzel is talking about, where you keep adjusting your model until it fits, is called "data dredging" or "data snooping". This is indeed a big problem in econometric settings. Of course, everyone has known about this for decades, and there is a literature about how to avoid it, how to validate models, etc. (though it's still a big problem despite these validation techniques). It's not like Goertzel invented this.
However, in an observational study setting like the one we are talking about, this data snooping problem is much smaller. Why? Because, generally, when designing the study the investigator decides in advance what data to collect and how to analyze it. If the model doesn't fit (e.g. if you fail to get a statistically significant result), that's it. You don't go back and collect more data or fudge things around. You don't re-analyze the data under different assumptions. Instead, you write up the study reporting that you ran the analysis and failed to find a significant result. There is much less tweaking of models to get them to fit than you would find if you, say, went mining for trends through demographic databases. And even without anyone intentionally mining for trends, in econometric settings the same data often gets reused across many different analyses or "experiments", and that reuse is one big contributor to data dredging.
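To see why reusing data across many analyses is dangerous, here's a minimal simulation (my own toy setup, not anything from Goertzel or Branas): run a couple hundred significance tests on pure noise, where by construction there is no real effect anywhere, and watch roughly 5% of them come out "significant" at the usual 0.05 threshold anyway.

```python
import random
import statistics

random.seed(0)  # fixed seed so the run is reproducible

def two_sample_t(x, y):
    """Welch-style two-sample t statistic (unpooled variances)."""
    vx, vy = statistics.variance(x), statistics.variance(y)
    se = (vx / len(x) + vy / len(y)) ** 0.5
    return (statistics.mean(x) - statistics.mean(y)) / se

# 200 "studies" comparing two groups of pure noise: no true effect exists.
n_tests = 200
false_positives = 0
for _ in range(n_tests):
    group_a = [random.gauss(0, 1) for _ in range(50)]
    group_b = [random.gauss(0, 1) for _ in range(50)]
    # ~1.98 is the approximate two-sided 5% critical value for ~98 df.
    if abs(two_sample_t(group_a, group_b)) > 1.98:
        false_positives += 1

print(false_positives)  # roughly 5% of 200, i.e. around 10 spurious "findings"
```

If an analyst gets to run many such comparisons and report only the hits, the published "significant" results are exactly these false positives. A pre-specified single analysis, as in a designed case-control study, doesn't get that many bites at the apple.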
More generally, "regression" simply refers to the statistical practice of fitting a statistical model to data: inferring information about the relationships between observable variables. Econometric models are only one application of regression. Medical studies are another. Basically, anywhere data is involved, you will find regressions in some form or another. Regression models are used in all kinds of scientific applications: chemistry, physics, sociology, economics, finance, climate science, everywhere. That's why having beef with "regression models" as such is a bit silly. The real question is: what makes some regression models more credible than others?
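In case it helps to make "fitting a model to data" concrete, here is the simplest possible regression: ordinary least squares on a handful of invented points (the data are made up purely for illustration), using the textbook closed-form estimates of slope and intercept.

```python
# Invented data that roughly follows y = 2x (for illustration only).
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form ordinary-least-squares estimates:
# slope = sum((x - x̄)(y - ȳ)) / sum((x - x̄)²), intercept = ȳ - slope * x̄
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

print(round(slope, 2), round(intercept, 2))  # prints 1.99 0.05
```

Whether this is an econometric model, an epidemiological one, or a physics calibration curve depends entirely on what the variables mean and how the data were gathered, not on the regression machinery itself.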
And by the way, data dredging is not the only problem that arises when fitting regression models, and there are valid criticisms you could have made of the statistical methodology in the Branas paper. But the data dredging problem described by Goertzel isn't one of them.
OK, I hope that was useful. As for the usual "peer review is not perfect" and "public health people don't get it", I hope that at some point you will move past these silly catch-all critiques and find some cogent, specifically relevant points of discussion.