Questioning the "Gold Standard Drop Test" and the conclusions of "this scope brand does/doesn't hold zero"

That's not how it works. Unless you're OK with saying that a scope that took a MUCH harder impact and failed is "worse" than a scope that accidentally landed barrel-first, absorbed much of the impact, and therefore "accidentally" managed to pass, you ABSOLUTELY want an eval that is repeatable and consistent. Without consistency and repeatability there can be no comparison. I suspect what you don't want is a test that is so specific that it either doesn't address some modes of failure at all, or can be "gamed" by some cheap construction that lets a scope pass without actually being more durable in use. Those are very different things, though. Which brings up this:



Making a standard that doesn't allow cheating is HARD. For example, the UIAA test for climbing helmets specifies a penetration test to ensure that a fall onto a sharp object doesn't drive it through the helmet and into your skull. So they designed a contraption with a pointy end, specified the test apparatus, etc. So what's the first thing manufacturers do? They put a tiny square of impact-resistant material right where the point lands in the test…WITHOUT doing anything at all to protect you from an impact anywhere else. But it's "penetration resistant" now. That is "gaming" a test without providing any benefit to the user. The exact same thing could EASILY happen with a standardized scope test. I.e., the "impact arm" that hits the turret, or the contraption that forces the impact to always hit one spot, allows (just for example) a rubber-armored turret cap to be added, with NO other modification. It might not change the reliability of the scope at all, but it could reliably pass the test. A standardized test might or might not be easier or harder, but what it would definitely be is more predictable. And that certainly allows it to miss some things, as well as allows it to be "gamed".

I say, be careful what you wish for if you're looking for a perfect test. IMO the perfect test doesn't exist.
Man, thank you for such a thoughtful and articulate response!

It's easy to suggest things that "feel" more controlled, but may actually be less effective in testing what we're trying to evaluate.
 
IS there any other test? If not, it's the ONLY standard. It may not be gold, but that still counts for something.
My point is that nobody is pretending it's a perfect, scientific test. But that's what the OP is arguing against. That's called a strawman fallacy: arguing against something other than the actual claim while pretending that it is the claim.

I value the tests tremendously.
 
I'm fine with an OP seeking an honest discussion. Maybe the tone and wording could have been different.

Could the evaluations be improved? I guess so. But they are fine as they are.

For those who are suggesting some specific ("easy") improvements - how many suggesting changes that include more time, money (or both) have donated money to the evaluations? I'm not suggesting that one can't have an opinion without ponying up $, but it would be a lot cooler if you did. In addition to time and equipment, there are rounds to be fired and contributing even a few bucks can help offset what is otherwise coming out of someone's pocket.

Are you saying that doctors aren’t scientists?

If so, who has produced all the clinical data and randomized controlled trials, and compiled the evidence that shapes current medical practice, i.e., evidence-based medicine? If not science, then by what process?
If you don’t think doctors are scientists, then you don’t at all understand medicine.
Again, while no one here will likely accept this, it isn't about my ego; rather, it's to frame the way I evaluate and critique tests and protocols, and to introduce this discussion in an (attempted) collegial and respectful format, such as a journal club. You guys are interpreting it as ego boosting.
I know this wasn't directed to me. I'm just a snarky layperson who sometimes posts memes, so fwiw - IMHO not all doctors (I'm assuming you mean MDs, but the same can be said for PhDs) are scientists, and not all scientists are MDs. Even an MD who is supplying patient data for a study doesn't equate to that doctor being a scientist. But maybe I'm wrong.

And I personally give more credit to someone who has no credentials but is skeptical of a current hypothesis (medical or otherwise) than to those (including those with credentials) who take the current belief/standard as true or proven. So in that vein, the OP questioning the evaluation is the right mindset, but maybe the delivery could have been better.
 
A standardized test might or might not be easier or harder, but what it would definitely be is more predictable. And that certainly allows it to miss some things, as well as allows it to be "gamed".
You got me there. I'll plead hobbyist ignorance and try to learn for next time.
 
There are guys on here that shoot A LOT. I have over 40 different cartridges I load and shoot. I shoot 2+ days a week, and I am WAY down the list on here for round count. And I still learn more all the time. But there is no replacement for going out and shooting with your equipment. Mark Twain said "A man who carries a cat by the tail learns something he can learn in no other way".

Here I am thinking this thread is absolutely, completely worthless, and you ruin that with that quote. Thank you. ;)

Also, speaking to nobody in particular -

The problem with standardized tests is that they allow 'teaching to the test'. You can see that in American primary education. You can see that in how Leupold builds incredibly recoil-proof scopes that won't handle side impacts.

I'd prefer that any test protocol have some degree of variability built into it. Which means the current tests are pretty danged good, IMO.

And, yes, by their nature, they aren't good at proving that scopes work well. What they are good at is identifying scopes that DO NOT work well.

(ETA: The drop tests here are like shooting a 3-shot group. They won't prove a load shoots well but they're often sufficient to prove that a load is hot garbage)

Frankly, a guy who works to identify the latter - scopes that don't work well - is doing a great service to the shooting community, and it's absolutely wild (but totally expected to anyone who's been on the internet long) that he gets nothing but grief for it. If there was a Nobel Prize for the shooting/hunting community he ought to be in the running for it.
 
Has the OP dropped his own rifle setups? Have any of the guys arguing how dumb the test is done their own tests?

It's like arguing over sticking your finger in an electrical outlet and what the outcome will be. At some point, nut up and try it if you are so sure.

All the guys arguing for more "controlled" tests: look at the Gunworks example of useless $40k testing machinery. Lots of scopes pass the $40k machine that tests impacts, but if I mount them up and drop them on a dog bed out back, they shift when other scopes don't… That, ladies and gentlemen, is a failure of test design.

Stop trying to "science" your way out of something that is within your ability to test. Will you do the test right? Probably not. Can and will you see a shift with "known" scopes? Yes, but it's probably your rifle system, and now we have something to talk about and learn from.
 
This is one of those things where people need to put their thinking caps on. A failure in the test is more meaningful than a pass.

Scopes should be able to get through these tests at a minimum, so you can rule out scopes that don’t pass when durability matters.

The test is less useful to compare scopes that pass.
 
Most people never throw their rifle/scope on the ground either. Obviously you failed Introduction to Logic 101 and spelling. I have years of professional baseball under my belt; does that count?
I'd suggest the complete opposite would probably be more accurate. Many people have tripped over a log with rifle in hand, or had a rifle fall off of/out of their quad/SxS/truck bed/cleaning table/tree stand, etc., etc.

Granted, none of that suggests that 18- or 36-inch drops are any kind of scientific standard.
As others have stated, Form is just a dude throwing sh!t on the floor at his own expense and reporting his findings. I've never read of him claiming his results as gospel, simply that it's a good starting point for questioning durability in field conditions.
 
If there was an industry standard test, it would be developed by the industry. There are tests today for scopes developed by the industry. There have been tests for decades for scopes developed by the industry. That evolution, and the downstream consequences of that evolution, are what we all experience today.

What are those consequences?

That a subject matter expert with critical thinking skills, and incentives NOT aligned with the industry, can point out that the emperor isn’t wearing any clothes.

The "results" of industry standard, 3rd party independent, scientific method, blah blah blah testing can be, and often are, statistically sound and intuitively make sense.

The “consequences” of those industry standards being in place are unpredictable. Consequences, like almost every scope manufacturer turning out a product that is more prone to failure than a laptop if you drop it on the floor.

It didn’t have to turn out that way. Change a couple of incentive alignments within scope companies, change actual leaders of the companies, change people responsible for testing and QC, change what one guy testing ate for lunch one day. Any or all of that could lead to more or all scopes generally working the way they’re supposed to. Or not. It’s too complex.

We’re here BECAUSE of industry standards.

The emperor wearing no clothes = most scopes are not good at performing their intended function in average field use, and we are so far from the scope industry being held accountable for this.

The field evaluations of scopes are good at showing 3 important things as far as I can tell:

- how important proper rifle cleaning, fit and assembly are to rifles holding zero.

- identifying which scopes will NOT hold up to the average person treating them averagely over time in field use. By hold up I mean "perform their intended function" of holding a zero as an aiming device. A counterintuitive result of testing: if the first scope you test fails to perform, there's little value in continuing to test more samples of that model.

- the consequences of an institution’s tendency to expand and protect itself over time, leading to institutional decay.



I'm grateful for the time and effort put into the field evaluations. They have saved me a ton of money and time. I'm also grateful for the ability to see them for what they are.

It's now common knowledge that many or most scopes don't hold zero over time in average field use. It took an outsider to point this out. The field evaluations allowed everyone to know, that everyone knows, that scopes suck. They have sucked for a while (or forever), but it takes common knowledge to shake things up. Can't go back now.
 
Has the OP dropped his own rifle setups? Have any of the guys arguing how dumb the test is done their own tests?

It's like arguing over sticking your finger in an electrical outlet and what the outcome will be. At some point, nut up and try it if you are so sure.

All the guys arguing for more "controlled" tests: look at the Gunworks example of useless $40k testing machinery. Lots of scopes pass the $40k machine that tests impacts, but if I mount them up and drop them on a dog bed out back, they shift when other scopes don't… That, ladies and gentlemen, is a failure of test design.

Stop trying to "science" your way out of something that is within your ability to test. Will you do the test right? Probably not. Can and will you see a shift with "known" scopes? Yes, but it's probably your rifle system, and now we have something to talk about and learn from.
So people should test more, but you liken it to sticking your finger in an outlet? I always said people that did that or peed on an electric fence were stupid… I already learned what would happen from watching others.
 
It's also not ALL bad that scopes suck. Thousands of people have made a living running the companies, designing the scopes, building the scopes, working on the scopes, TESTING the scopes, using the scopes, owning shares of companies who supply scopes, etc. Hard to say that scopes have failed at being useful. It's just that they don't perform their intended function well.

I'm happy someone is going to try and fill this need and make users happy, too.

Life’s a bunch of tradeoffs in the end.
 
If there was an industry standard test, it would be developed by the industry. There are tests today for scopes developed by the industry. There have been tests for decades for scopes developed by the industry. That evolution, and the downstream consequences of that evolution, are what we all experience today.

What are those consequences?

That a subject matter expert with critical thinking skills, and incentives NOT aligned with the industry, can point out that the emperor isn’t wearing any clothes.

The "results" of industry standard, 3rd party independent, scientific method, blah blah blah testing can be, and often are, statistically sound and intuitively make sense.

The “consequences” of those industry standards being in place are unpredictable. Consequences, like almost every scope manufacturer turning out a product that is more prone to failure than a laptop if you drop it on the floor.

It didn’t have to turn out that way. Change a couple of incentive alignments within scope companies, change actual leaders of the companies, change people responsible for testing and QC, change what one guy testing ate for lunch one day. Any or all of that could lead to more or all scopes generally working the way they’re supposed to. Or not. It’s too complex.

We’re here BECAUSE of industry standards.

The emperor wearing no clothes = most scopes are not good at performing their intended function in average field use, and we are so far from the scope industry being held accountable for this.

The field evaluations of scopes are good at showing 3 important things as far as I can tell:

- how important proper rifle cleaning, fit and assembly are to rifles holding zero.

- identifying which scopes will NOT hold up to the average person treating them averagely over time in field use. By hold up I mean "perform their intended function" of holding a zero as an aiming device. A counterintuitive result of testing: if the first scope you test fails to perform, there's little value in continuing to test more samples of that model.

- the consequences of an institution’s tendency to expand and protect itself over time, leading to institutional decay.



I'm grateful for the time and effort put into the field evaluations. They have saved me a ton of money and time. I'm also grateful for the ability to see them for what they are.

It's now common knowledge that many or most scopes don't hold zero over time in average field use. It took an outsider to point this out. The field evaluations allowed everyone to know, that everyone knows, that scopes suck. They have sucked for a while (or forever), but it takes common knowledge to shake things up. Can't go back now.
Exactly. Companies would rather pump money into marketing, "forever warranties," and giving in-FLU-encers freebies than admit they have been selling bunk and actually fix it…
 
It's now common knowledge that many or most scopes don't hold zero over time in average field use. It took an outsider to point this out. The field evaluations allowed everyone to know, that everyone knows, that scopes suck. They have sucked for a while (or forever), but it takes common knowledge to shake things up. Can't go back now.
I'm assuming you have a large sample size to back up a blanket statement like that?
 
Which blanket statement?

I’m assuming you’re referring to either the explicit statement, or the implicit statement I’m making.


The cross-section of scopes evaluated for field use that failed to hold zero would easily encompass "most scopes," but if not, and my redneck math is off, then "many" would do just fine. Yes, there are dozens of models not tested. But by quantity of scopes in the field, I think my math holds.

When the first scope of a model is dropped from 18" off the ground onto a mat on top of dirt and it loses zero, there's little point in testing a second one. This is counterintuitive but has been exhaustively discussed.
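For what it's worth, the sample-size point can be sketched with a toy calculation. This is just my own illustrative framing (a simple Beta-binomial model with a uniform prior), not anything from the evaluations themselves:

```python
# If we start fully agnostic about a scope model's true failure rate p
# (uniform prior on p), then after seeing f failures in n drops the
# posterior is Beta(1 + f, 1 + n - f), whose mean is (1 + f) / (2 + n).
def posterior_mean_failure_rate(failures: int, trials: int) -> float:
    return (1 + failures) / (2 + trials)

# One drop, one failure: the estimated failure rate jumps to 2/3.
print(posterior_mean_failure_rate(1, 1))   # 0.666...

# Even ten clean drops only pull the estimate down to about 8%.
print(posterior_mean_failure_rate(0, 10))  # 0.0833...
```

Under that toy model, a single first-unit failure is strong evidence the failure rate is high, while certifying reliability takes many clean trials, which is exactly why these tests are better at flagging bad scopes than proving good ones.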



I'm assuming you have a large sample size to back up a blanket statement like that?
 
I was referring specifically to the blanket statement you made that I quoted. When you say "most scopes," I would assume you have a sample size much larger than the RS drop test to make a blanket, ignorant statement like that. You should qualify it by saying "in my opinion" or something like that. "Most" scopes would be millions of them, and after guiding 160+ hunters, that's a slight exaggeration IME.
 
So people should test more, but you liken it to sticking your finger in an outlet? I always said people that did that or peed on an electric fence were stupid… I already learned what would happen from watching others.

People are so sure they won't get shocked (their setups hold zero) that they'll sit around all day and pontificate about it. When it comes right down to it, they won't stick their finger in. They are happy to encourage others that it's OK to stick your finger in, no consequences (buy crap scopes).

Someone else says "hey, that's dumb, and there are consequences," and they are demonstrable, but no… let's sit around and argue about how that can't be correct…

Test your own theories instead of telling people who have done it they are wrong for the way they did it.

That help?
 
I was referring specifically to the blanket statement you made that I quoted. When you say "most scopes," I would assume you have a sample size much larger than the RS drop test to make a blanket, ignorant statement like that. You should qualify it by saying "in my opinion" or something like that. "Most" scopes would be millions of them, and after guiding 160+ hunters, that's a slight exaggeration IME.

I’m not suggesting that scopes that don’t hold zero aren’t still capable of putting bullets generally where a guy aims, well enough to kill critters sometimes.

That's another fair assessment of all of this: the argument can be made that most people can't shoot well enough to ever put the difference of a reliable scope to use.

But just because you've seen a bunch of people kill animals with scopes doesn't mean those scopes reliably hold zero in field use. It might mean that they did well enough to equate to success across a small sample size (though an impressive one).

I am not trying to be confrontational either. I don't think someone should be surprised or freaked out by learning that companies that no longer optimize for innovation, but instead for margin, aren't churning out reliable, complex pieces of equipment. I don't make the statement as a call to any sort of action. I was just pointing out that if there is a desire for change, it would require the unreliability to be common knowledge.

But is it really controversial to look at most of the scopes failing field evaluations and extrapolate to the broader industry?
 
Having more repeatable impacts would be more accurate and scientific. I am not sure, and I don't think anyone could say, whether it would or would not change the results. Until it's done, it is truly an unknown, IMO. Form's redneck-science drop test seems to find a scope's weaknesses. Far from perfect, but it does show a trend.
 