Questioning the "gold Standard Drop Test" and the conclusions of "This scope brand does/doesn't hold zero"

Scopes are tools, nothing more.

As a carpenter, I use a skill saw a bunch. I need it to adjust to different depths and bevels for a wide range of cuts. It also needs to maintain a 90° blade angle for the vast majority of my cuts. The locking mechanism that holds the guide in place needs to be solid, durable, and accurate.

Let's say I can take different brand skill saws and roll them down a flight of wooden stairs. My Milwaukee saws can take this tumble and not flinch. No loss of guide angle, no slipping of depth. It locks solid. On the other hand, my tool shop saws always come loose or the angle slips. The locking mechanism is not as solid.

I don't need an exact stair height, number of steps, controlled tumble. The test, rudimentary as it is, reveals weakness in some designs and proves strength in others. I don't need anything more than that. It can be that simple. It's just people that want to make it complicated because they don't understand it.

And for those that like to say, "I don't drop my saws down stairs", you're missing the point. The stairs just show which saws are made with a durable, accurate locking mechanism that will last a long time and not cause issues. The stairs also show which brands or models don't have tough and dependable locking mechanisms which will slip on you, cause issues, or not last as long.

If you're someone who uses their saw twice a year and checks it before each use, the tool shop saw is probably fine. If you want durability and a long life of hard use with no issues, Milwaukee might be your jam.
 
OK. So let's say I've been a carpenter in the home building industry for 30 years. I happened to drop my Milwaukee saw as well, but mine broke and had to be repaired to be operable. Do you believe me and my experience, or your own in-the-field use of the product?
 
OK. So let's say I've been a carpenter in the home building industry for 30 years. I happened to drop my Milwaukee saw as well, but mine broke and had to be repaired to be operable. Do you believe me and my experience, or your own in-the-field use of the product?
Your experience is a lie. :ROFLMAO:
 
OK. So let's say I've been a carpenter in the home building industry for 30 years. I happened to drop my Milwaukee saw as well, but mine broke and had to be repaired to be operable. Do you believe me and my experience, or your own in-the-field use of the product?
Everything can break. Why wouldn't I believe you?

Now if you're going to flip it and say, "What about the Ryobi that I've used for 10 years without any issues, but that fails your test — will you believe my experience?" Sure, I'll believe that too.

But neither of those examples changes the fact that some saws have a much better locking mechanism that is more likely to be solid, consistent, and last longer.
 
Reductionism for the sake of itself is a disease.

It seems to me that scientifically quantifying 'zero retention' would be an effort in marketing - it will help a manufacturer sell scopes more than it will help a consumer buying scopes.

"We are buried beneath the weight of information, which is being confused with knowledge; quantity is being confused with abundance and wealth with happiness.
We are monkeys with money and guns."

-Tom Waits

When it comes to actually KNOWING things, for me the biggest takeaways from the evaluations in question were:

1. Use a minimum 10-round group to establish zero and estimate the cone of fire. Anything less is statistically insignificant.

2. Use bases that interface with the receiver via pins or lugs. Torque ring caps to a minimum of 20 in-lb, because even the best scopes 'fail' when they move in the rings.
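Point 1 is easy to sanity-check numerically. Below is a minimal Monte Carlo sketch (my own illustration, not part of the eval) assuming shots are independent and normally distributed around the true point of aim: it estimates how far the center of an n-shot group typically sits from the true zero.

```python
import math
import random

def group_center_error(n_shots, sigma=1.0, trials=10000, seed=42):
    """Average distance between the estimated zero (the n-shot group
    center) and the true point of aim. Assumes independent, circular
    normal shot dispersion with standard deviation `sigma` per axis."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        xs = [rng.gauss(0.0, sigma) for _ in range(n_shots)]
        ys = [rng.gauss(0.0, sigma) for _ in range(n_shots)]
        cx, cy = sum(xs) / n_shots, sum(ys) / n_shots
        total += math.hypot(cx, cy)
    return total / trials

for n in (3, 5, 10, 20):
    print(n, round(group_center_error(n), 3))
```

The estimated zero converges toward the true point of aim at roughly 1/sqrt(n), so a 3-shot "zero" typically sits almost twice as far off as a 10-shot one — the statistical reason small groups are a shaky basis for confirming zero.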

Where mechanical reliability is concerned in my real-world use case, bypassing these prerequisites renders scope choice irrelevant.

To me the real value in Form's eval is showing that the mechanical connections holding the rifle system together are just as important as the scope.

In closing, I'd like to point out that using the word 'test' is a misnomer. FORM calls it an evaluation, and I agree with that.

Calling it a test opens the door for critics to say 'it's not a controlled test', and they are right.

In my view (and similar to ACTUAL science) the EVAL is more about the process than the answer.

Academia has suppressed critical thinking for generations, which has resulted in individuals politely requesting or, depending on their manners, outright DEMANDING an appeal to authority.

It makes me wonder if they know the difference between knowledge and information 🤔
 
What always makes me shake my head about the folks who think the drop testing is pointless is this...

Hunting and shooting are full of what if's. And most people try to cover every what-if that comes to their mind, EXCEPT mechanical durability and reliability of their scope.

The last statistics I read on the subject said the average rifle shot for deer hunting east of the Mississippi was 80 yards. West of the Mississippi it was 180 yards. The majority of ALL deer are shot within the range of a 30-30 with a 2.5x scope.

So.... even though most people NEVER shoot a deer past 200 yards regardless of where they're located, everyone is carrying a 500 yard (or more) capable rifle/round/scope just because "what if" a deer offers a 500 yard shot... or longer.

But most of them don't have a scope that can survive the much more likely "what if" of slipping/tripping/falling/dropping their rifle and scope, not to mention the actual traveling and vibrations.

That just doesn't make sense to me.
 
What doesn't make sense to me is worrying about what other people use for their hunting gear. I couldn't possibly care less.
 
Out of the numerous proposals of "better" alternatives to the field drop test, how many have been implemented? Much less implemented in significant numbers, then evaluated, improved, etc.? Not one. Not by a single manufacturer, nor by any naysayers on the web. Even the scope manufacturers trying to make good scopes are either using collimators or consulting Form.
OP, some of what you're suggesting is OK. But when you realize that actually achieving what you're proposing requires being three kinds of engineer, a machinist, a statistician, and the financier of such a project all in one... you'll do like most of us and just drop your rifles. We do the sort of testing this would require in the industry/company I work in, and I understand how to do it. But simply knowing that my setup handles reasonable drops without issue is more valuable than having any amount of large-sample data. That kind of data is for manufacturers who actually want to make good optics. And even for that, a few whacks and checks on a collimator is probably about as good as anything.
 
If the testing methods to impact-test scopes were perfected scientifically, mechanically, and by whatever other means are necessary to be 100% satisfactory to the whole industry, I would put money down that Nightforce and Trijicon would still be two of the top performers.

😜
 
The tests are not scientific enough to stand up to a harsh peer review but that’s not what we are doing.

We’re trying to simulate field conditions and falling down a mountain.

What to do when you catch your breath? Check zero or keep hunting?
 
Something that becomes more evident every day on this once epic forum is that the vast majority of dudes need to spend more time in the field using their gear HARD.

All of this hypothesizing/pontificating goes away immediately once you become a heavy user of your gear. It becomes obvious what works/doesn't work in a couple seasons when you're getting after it.

It will, however, never not be hilarious watching an off-season thread go off the rails.

I wish Stick from 24 Dumpster Fire could chime in on this one and wax poetic for me 🤣🤣🤣
 
Just think of it as a real-world minimum spec:

If it can't handle this, why should I even consider buying the scope?
Well… is the intentional 36” drop (x9) +3 a minimum spec, or excessive? Idk. It’s certainly something. Holding zero through 3,000 rounds is certainly something also. The question it leaves me with is not whether the few scopes that passed the drop eval are good at holding zero, but whether we eliminated, in error, any good optics that just fared okay yet would otherwise hold zero through day-to-day use: normal truck bouncing and rattling, 3,000 rounds, and, say, a few less dramatic falls.
Yes, I am very aware that Form gives ‘signal’ to those that “sorta pass” and made his intention for the eval clear, and that is useful data. It makes me wonder, though: if so many fail, is it a reasonable test, or are optics just generally that unreliable?
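For a sense of scale on what a 36" drop actually subjects a scope to, here's a quick back-of-the-envelope calculation. The 8 lb rifle-and-scope mass is my own illustrative assumption; it isn't specified anywhere in the thread.

```python
# Impact velocity and energy for a free fall from 36 inches.
# The 8 lb (~3.6 kg) rifle-and-scope mass is an illustrative assumption.
g = 9.81           # gravitational acceleration, m/s^2
h = 36 * 0.0254    # 36 inches converted to meters
m = 8 * 0.4536     # 8 lb converted to kg

v = (2 * g * h) ** 0.5   # impact velocity in m/s, from v^2 = 2gh
e = m * g * h            # impact energy in joules, from E = mgh
print(round(v, 2), "m/s,", round(e, 1), "J per drop")
```

Roughly 4.2 m/s at impact and about 33 J per drop under these assumptions; whether absorbing that a dozen times is a "minimum spec" or "excessive" is exactly the judgment call being debated.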

There were a couple of arguments that if we had a more standardized test, manufacturers would game that test… which is ironic. Haven’t they directly gamed this test? I’ll agree with any argument that this likely just resulted in a more durable optic, but why do we assume similar results wouldn’t apply to another durability test? And why must we assume that standardizing and making a more precise test would make it so expensive? That argument continues to befuddle me… there are some rather expensive-sounding suggestions made, but it’s not necessarily so.
When I was thinking through my initial post and criticisms of the eval, I certainly hadn’t heard or deeply considered the opposing perspectives (or the insults to my character) and all of the arguments supporting the merits of the eval. But over the past 14 pages of this thread, my mind has been opened. I’ve read and reread the preamble of the eval, re-read perhaps a dozen more field evals, and listened to both reason and rhetoric about why we don’t need any other tests. I agree, and disagree. The three takeaway statements from my OP that I’ll stand behind are:
1. (In agreement with most here) the test accurately identifies those optics that pass the drop test, despite
2. inconsistencies in the evaluation (which I think matter less if you’re just looking for an optic that passes), and
3. I struggle(d) to understand how to interpret these results. It’s still difficult, but I’ve largely moved past the struggle and have at least made a decision on this optic purchase.

An expression of the above is: this is the best data we have from a consumer level that I can find, anywhere.



There are a number of critical factors anyone following (or even opening) this thread will likely agree on when evaluating and selecting a scope (especially for hunting): magnification, objective diameter, turret design/function, weight, reticle design, FFP vs. SFP, etc., and of course the holy grail here, without which none of the above matters… durability/reliability.

The test certainly eliminates a lot of options that don’t meet the standard. That can be difficult when the optic that seems designed to meet the needs of your system, checking every other box, fails the last and most important one, so you move on. You find another option, only to discover that it, too, ended up a little scatterbrained after being dropped twelve times on its head from three feet. So you move on to the next, until finally you just go to the cheat sheet that eliminates most of the market (if it hasn’t been evaluated, then it hasn’t passed, and this psychology is difficult to work around), which leaves few options.

It still makes me wonder about some of those others; well, less so now. My decision is made. I choose to trust the results as the best available data and go with the best choice, with suggested durability, that fits my needs.

I will be properly field-evaluating it, as well as experimenting with some ways to remove any bias or inconsistencies in the eval, and will publish my results. It is an optic with at least a sibling already evaluated, and I do hope it holds up as well.
 
this is the best data we have from a consumer level that I can find, anywhere.
Bingo. If/when something better comes along, I’d bet money Form will be first in line to use it instead. I would. Until then…

did we eliminate any good optics that just fared okay - but would otherwise hold zero through day-to-day, normal truck bouncing/rattling, 3000 rds and say, a few less dramatic falls - or in error?
Maybe. Maybe not. Lots of truly scientific tests use harsher-than-normal trials in order to show wear faster. After all, a test can’t last 5 or 10 years, and lots of us use scopes longer than that. But really, I just don’t think a knee-height fall onto soft padding is some unreasonably hard fall. I think reality is frequently MUCH harder than that.

There were a couple arguments about if we had a more standardized test, manufacturers would game that test… which is ironic. Haven’t they directly gamed this test? I’ll agree with any arguments that this likely just resulted in a more durable optic, but why do we assume similar results wouldn’t apply to another durability test
It’s not that “we” assume another test wouldn’t be better; it’s simply that another test, by virtue of being more quantifiable, doesn’t NECESSARILY equate to a better result for a consumer, for a whole slew of reasons. I.e., if you ask the wrong question, even if you ask it really precisely, you still get an answer that isn’t helpful. And no, as far as I can tell, no manufacturer has “gamed” this test yet. They would have to acknowledge the existence of this test and design around it in a way that DOESN’T actually result in a more reliable product, and that definitely hasn’t happened.

Take the mandatory EPA fuel economy test. It’s standardized. Everyone does it. In theory, it allows consumers to use it as a criterion for a purchasing decision. Making a car that is more fuel efficient in order to sell more is what happens when the test works as intended. But gearing a car specifically around the test speeds, knowing full well that in use people will get drastically different results, is “gaming” the test, to the point that there are crowd-sourced “real world” comparisons that many people rely on as MORE accurate (e.g., Fuelly.com), even though those aren’t “scientifically quantifiable” and include all the variables the EPA test intentionally excludes. That is the not-unlikely outcome of an industry-wide standardized test; not a “don’t do it,” but simply an “it’s not necessarily the unequivocal solution some people make it out to be.”

At the end of the day, that’s all people are saying: we have an “evaluation,” and it’s the ONLY thing we have right now. It’s all fine to postulate improvements. But too often those improvements come with an implied or explicit dismissal of what we have now, with the assumption that just because a hypothetical future “test” is “more scientific” (quantifiable, more repeatable, all the things that means to people), it will result in a better outcome. I’m saying that while that’s POSSIBLE, it is NOT safe to assume. And that future test doesn’t exist at the moment. So my suggestion is only this: don’t let the “perfect test” be the enemy of a “pretty decent evaluation.”
 
It makes me wonder though, if so many fail, is it a reasonable test, or are optics just generally that unreliable?

This is my opinion:

This is probably the animating principle behind the field evaluations being made public.

You SHOULD think that way, because you have an institution you can trust. Surely if this much product and money are exchanging hands, the minimum level of quality MUST be above a certain threshold. Surely MOST scopes would hold zero across a lifetime of general use by average people who have a few dozen clumsy moments during their practice and hunting time.

And if you trusted that, you’d either have problems with guns you’d have to explain away to yourself, or you’d be disappointed to find out that products that should be reliable, aren’t.




This is all incredibly obvious to me, because I deal with the exact scenario in my day job. Every day. The folks I work for fall in one of two buckets: They assume that all of the equipment surely meets a minimum threshold of quality and works generally as would be expected. Or, they’ve graduated to believing that all of the equipment is shit, and anyone related to that equipment in any way is shit. I have to spend an inordinate amount of time reprogramming their hard drives before I can get them what they need.
 
Question that it leaves me with, is, not if the few scopes that passed the drop eval are good at holding zero- but did we eliminate any good optics that just fared okay
That leaves you with a Schrödinger's cat of a scope. It will lose zero; you just don't know when. Not until you test it and find out the cat is dead on the hunt of a lifetime.

If you want to improve the test, eliminate more false negatives.
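The false-negative trade-off can be made concrete with a toy confusion-matrix calculation. All of the rates below are made-up illustrative assumptions, not data from the eval; the point is only that even an imperfect pass/fail screen can substantially shift the odds that a scope you buy actually holds zero.

```python
# Toy confusion-matrix sketch. Every number here is an illustrative
# assumption, not real data about scopes or the drop eval.
p_durable = 0.4     # assumed fraction of scope models that truly hold zero
sensitivity = 0.95  # assumed P(pass | durable) -> 5% false-negative rate
specificity = 0.90  # assumed P(fail | not durable)

# Total probability that a randomly chosen model passes the screen.
p_pass = sensitivity * p_durable + (1 - specificity) * (1 - p_durable)

# Bayes' rule: how trustworthy is a "pass", and how damning is a "fail"?
ppv = sensitivity * p_durable / p_pass             # P(durable | pass)
npv = specificity * (1 - p_durable) / (1 - p_pass) # P(not durable | fail)

print(f"P(pass)={p_pass:.2f}  P(durable|pass)={ppv:.2f}  P(fragile|fail)={npv:.2f}")
```

Under these assumed numbers, a scope that passes is durable about 86% of the time versus a 40% base rate. Raising sensitivity (i.e., eliminating false negatives) keeps good optics from being wrongly discarded without diluting that signal much.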
 
You guys saying "there needs to be a minimum expected, industry durability standard" are absolutely correct.

Unfortunately, we have companies that offer a fantastic warranty because they know it's cheaper to warranty the relatively few scopes that actually get used hard (as opposed to the hundreds of thousands of that brand's scopes that adorn safe-queen, minimal-use firearms) than it is to build scopes to a minimum expected durability standard.

Given that a scope's primary objective is to hold an accurate zero, and then to provide magnification and accurate reticle markings/dialling, it's beyond absurd that certain brands are still even in the scope manufacturing game.
 
14 pages of nothingness. The OP is a great example of overthinking. Even if all the data was published, it would only affect a small percentage of buyers. The people in the industry who actually shoot know which scopes work and which do not. The rest are clueless, and all the info in the world will not matter to them; they buy brand loyalty for the warranties.
If anyone has read the info Form supplied, the message is obvious: if you're going to spend money on a scope, why not buy a scope that does well in a drop test. Period.
The message was not to cast aspersions on any manufacturer or start a war, just to cut through the BS and give his personal observations; at least that's how I see it.
As for "the nuts and bolts of why one scope is better," who cares? And why? Is it a competition?
I would hazard a guess that 90% of shooters have never heard of the drop test and couldn't care less.
And I hope that wasn't snarky.
 