I finished the podcast through the scope testing portion. Clearly, Aaron did not read the notes on the scope testing, or he would have known that many of those variables are either addressed or at least semi-controlled for. Not all, but several of the ones mentioned are.
Two points to make on this:
First is the statistical significance of multiple failures in a row from a small sample set, as opposed to multiple passes from a small sample set. Those are two significantly different situations: three failures in a row says something statistically different than three passes in a row, each from a sample set of three. If something is generally pretty good, tell me what the mathematical odds of 3 failures out of 3 tests are.
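As a rough sketch of that math (the 90% pass rate here is purely an assumed number for illustration, not a claim about any particular scope), the odds of three independent samples all failing collapse very quickly if the model is actually good:

```python
# Hypothetical illustration only: assume a scope model that survives a given
# drop test 90% of the time, and that three independent samples are tested.
pass_rate = 0.90

p_three_passes = pass_rate ** 3          # 0.729 -> roughly a 73% chance
p_three_failures = (1 - pass_rate) ** 3  # 0.001 -> roughly 1 in 1,000

print(f"P(3 of 3 pass) = {p_three_passes:.3f}")
print(f"P(3 of 3 fail) = {p_three_failures:.3f}")
```

In other words, three passes out of three is about what you would expect from a good scope and proves relatively little, while three failures out of three would be wildly unlikely for a genuinely robust model.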
Second, I have long said that the drop tests here are not scientific and could be significantly improved on, specifically because of the uncontrolled nature of the drop surface and the way the scope is dropped, so there's no way to tell exactly what landed first, at exactly what angle, etc. But there is massive value in having something "open source" available to anyone, that isn't proprietary, and that can be done without $50,000 of test equipment. It isn't helpful to me if a manufacturer tests their own scopes; it's only helpful to me if I can see the same test applied to multiple scopes and see how they compare, and that isn't happening anywhere else. If a test is pretty repeatable with similar results (and it seems to be), then it has some validity. I see the theoretical limitations, and I see why a company like Gunwerks doesn't do scope testing across brands, but that doesn't help me. If those results aren't available to me across brands and models, it doesn't matter how valid a test is; it is utterly and completely useless to me. Anyone that tears down the test here is doing nothing more than blowing hot air until they replace it with something better that is available to me. Until I have a better option, this is the ONLY option other than sticking my head in the sand.
People can worry all they want about throwing out good scopes because of a flaw in the test; personally I don't give a flying hoot about that. All I care about is having a better chance of getting one that works reliably. I've had a 75% failure rate on my personal scopes from one manufacturer (3 out of 4 failed), and I'd much rather not do that again. Someone can say I don't know if it was the rings or the mounts or whatever, but as soon as I put a new scope in those exact same rings, torqued to those exact same specs, and the groups tightened up and stopped wandering, every single time, I know all I need to know. Yes, there are more variables, but it isn't rocket surgery to narrow it down to the scope when you can switch the problem on and off by doing nothing more than switching scopes.
@Aaron Davidson, you guys finished that podcast talking about doing a tour of your scope testing. I for one will look forward to seeing that. I'm curious about the nitty-gritty details, and also curious about how scopes other than yours fare under your testing and how I can apply your results to my own situation. Personally, having had multiple failures in the past, I'm not willing to simply trust; I want to see it in detail. I'd encourage you to familiarize yourself with the details of the evals here so you can speak to how some of the variables are controlled or semi-controlled.
Here’s a link