Questioning the "Gold Standard Drop Test" and the conclusions of "This scope brand does/doesn't hold zero"

Just an FNG here, but reading through this critique (and a multitude of other threads and posts critiquing the drop test) shows a massive lack of understanding of the intention, the process, and the results.

I don't want, nor care, to know which scope passes or fails when dropped from the exact same height, on the exact same surface, at the exact same angle, and how far my subsequent shot is going to be off. That's literally the opposite of what I want to know. Therein lies the beauty of this test.

When I'm hunting and my gun falls from the tree I leaned it on, I want to know that my next shot is going to hit where I want it to. As I'm driving from drainage to drainage, or across state lines to different destination hunts, I don't want to have to search for a place to verify my gun is still zeroed before loading up my pack in the morning. To gain that confidence beforehand, I use a very basic and repeatable test before that situation ever happens. I drop it on the ground and drive around the mountains with it uncased, and verify by shooting, well before that hunt. It's simple. I may be wrong, but is that not the entire point of the drop test?

I, for one, don't want a scientific, repeatable test. I want some variables, because I am using the scope in a variety of situations. And I can do the test myself.
 
Also, the title of this thread is just wrong and proves that the OP hasn't done his homework when it comes to the drop tests.

"This scope brand does/doesn't hold zero" has never been stated by Form in ANY of the evals I've read. That isnt the point of the test at all, in my opinion. Maven has some that pass, some that fail. ZeroTech (who S2H partnered with to build their new scope) has models that fail and now, at least one, that passes. Multiple models of brands are tested and prove that manufacturers CAN build scopes that pass but choose not to. If multiple scopes of a certain brand fail, that isn't anyone telling you not to buy that brand. FAFO.

Hopefully I'm not speaking out of turn, but it's all there to review if you have an open mind, instead of just trying to prove the tests inaccurate without actually testing anything.
 
Are you saying that doctors aren’t scientists?

If so, who has produced all the clinical data and randomized controlled trials, and compiled the evidence that shapes current medical practice, i.e., evidence-based medicine? If not science, by what process?
If you don’t think doctors are scientists, then you don’t at all understand medicine.
Again, while no one here will likely accept this, it isn't about my ego; rather, it's to frame the way I evaluate and critique tests and protocols, and to introduce this discussion in an (attempted) collegial and respectful format, such as a journal club. You guys are interpreting it as ego boosting.
I think the problem here is that you are trying to take a white lab coat in a sterile operating room approach...

When an "in the shit", bullets flying everywhere, mud, dust, screaming and yelling, combat triage type approach is likely better for our context.

No matter what perfect and repeatable tests you come up with, they may not adequately expose a failure mode. Maybe scope X really does need to be hit at Y angle and Z yaw with F force to find a failure mode, but if you don't test for that, then you'll never see it. A little bit of randomness in the testing might expose that.

For this context, I'd trust a redneck who knows how to shoot, treats it like a tool (above-average abuse and no babying), drives it all over kingdom come, and shoots thousands of rounds and hundreds of animals and competitions... over a strictly defined procedure/regimen of tests done in a lab-coat environment. I think the former is actually more likely to find issues than the latter. And if the latter does expose issues, then the scopes will be made to pass those tests, which again may not translate to real use.

AND without detailed prints of the components, material certs, and whatever other specifications, the lab tests have less merit. IF you were a scope company, and had all of those things, and had component subassemblies, then a regimented test on, say, an erector subassembly would make a ton of sense in the context of product improvement.

As a consumer do we care? We really just want to know if the thing works.
 
Again- for it to be a variable that needs correcting- it must cause inconsistent results. If it doesn’t cause inconsistent results, then it isn’t causing inconsistent results- and doesn’t need correcting. What results do you believe are inconsistent?
I think the Leupold Mark 5 could be an example of the test incorrectly passing a scope (or it could just be Leupold got lucky on the quality of that first one). Not aware of any examples of an incorrect failure.

So, talking from an industry-standard perspective, with standards being set to avoid cheating: this brings up the point everyone ignores, which is that an industry standard would be HARDER on the scope than the current drop test. All of the potential errors in the current test work in favor of the scope, with the exception of shooter error if present (which could go either way).

Specifying a completely flat surface (so part of the impact is not absorbed by the stock or barrel) and specifying the construction of that surface. For example, IEC 60068-2-32 specifies 17-19 mm of hardwood over a 3 mm steel backer, while MIL-STD-810 specifies a steel sheet on top of concrete. As a side note, it is bloody crazy that a laptop can be designed to survive 30 drops from 2 meters onto steel-reinforced hardwood, but dropping a scope on foam-covered dirt is considered too much for a scope.

Atmospheric standards: frozen and hot things react differently. I can see freezing a scope giving it an unfair advantage, as grease thickens and nothing wants to move. Of course, it could also make plastic brittle and increase failure, depending on the design.

A rifle that reliably goes into a fixture, with the reticle photographed before firing from the fixture. Obviously, for an industry standard you need a specific weight and design to avoid cheating.

Also, a fixture for the drop itself, so people are not cheating the orientation. On that note, for a standard, the orientations most likely to produce failure need to be established and included in the procedure. For example, a 30-degree cant that results in a corner strike on the turret might fail more scopes than a drop on the flat, so adding it to the drop orientations would be good, like corner drops on electronics.

It does boggle the mind that my cell phone has been held to a higher standard of abuse than the scopes put on service rifles.
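The laptop-versus-scope comparison can be put in rough numbers with free-fall physics. A back-of-envelope sketch follows; the masses and drop heights are my own illustrative assumptions, not figures taken from any standard or from the drop-test protocol:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def impact_energy(mass_kg: float, height_m: float) -> float:
    """Kinetic energy at impact from a free-fall drop, E = m*g*h (joules)."""
    return mass_kg * G * height_m

def impact_velocity(height_m: float) -> float:
    """Speed at impact from a free-fall drop, v = sqrt(2*g*h) (m/s)."""
    return math.sqrt(2 * G * height_m)

# Illustrative numbers only: assumed ~2 kg laptop from the 2 m spec height,
# and an assumed ~4.5 kg scoped rifle falling from roughly waist height.
laptop = impact_energy(2.0, 2.0)
rifle = impact_energy(4.5, 0.9)

print(f"laptop: {laptop:.0f} J at {impact_velocity(2.0):.1f} m/s")  # 39 J at 6.3 m/s
print(f"rifle:  {rifle:.0f} J at {impact_velocity(0.9):.1f} m/s")   # 40 J at 4.2 m/s
```

Under these assumed numbers the two impacts carry comparable energy, which is the point of the comparison: the consumer-electronics drop spec is not obviously gentler than a rifle tipping over.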

As for some of the suggestions in this thread:

A pendulum hammer is stupid for testing what a rifle scope is most likely to see. If you want to know what happens when an item falls, you drop it. MIL-STD-810 has a pendulum hammer test and a drop test because they don't test the same thing.

Trying to measure exact forces is pointless; it doesn't matter. It is pass/fail, and if a heavy scope fails because of the additional force, well, it still fails. If a scope with a giant objective and a small turret passes because the objective strikes first, it still passes. Those factors are at play in the real world too. The entire design influences it, and trying to test only the erector system is not seeing the forest for the trees.
 
Are you saying that doctors aren’t scientists?
I'm saying you, specifically, are not a scientist based on reading some papers over "libations". Just like I, as an actual scientist, can't take credit for every scientific discovery made by other scientists. Some doctors engage in scientific research (with varying degrees of rigor) but you have not said that you do. Let alone how that research is relevant to drop testing scopes.
If so, who has produced all the clinical data, randomized controlled trials, compiled the evidence that shapes current medical practices ie evidence based medicine? If not science, by what process?
Golly... you make me kind of confused as to why doctors ask me to analyze their data for them. Come to think of it, why do biomedical scientists like me exist at all if the doctors have all of this stuff handled? And why did a university give me a fancy piece of paper with Latin on it for authoring a bunch of papers with MDs who collected the data and had me do everything past that?
Again, while no one here will likely accept this, it isn’t about my ego
You sure about that?
 
What Mark 5 are you talking about?
Sorry, failed memory on my part. Mark 4HD 2.5-10x42. It also didn't pass the full test if I remember correctly, only the initial drop portion.

 
I provided an example in the OP of a type of impact/zero hold test that could be measured (newtons of force), precise (point of impact), and easily repeatable from system to system. These are my suggestions toward, perhaps a different test, and IMO, one that removes variability/confounding.
What does this add to the test though? The scopes that pass the current eval pass the 3000 round, many mile USFS road truck ride test. The ones that "almost pass" sometimes get these extended evaluations as well and as far as I can recall never survive without issues. The results from the drop eval as it currently exists are nearly perfectly correlated with the results for longer term eval, and that anecdotally for many of us here is nearly perfectly correlated with our own experiences.

The reality is, as was said very eloquently by @Marbles, the drop test is more of a bare minimum standard. I use it to rule out scopes that won't function correctly. Not as a "it passed so it's amazing" high bar.

If you wanted to explore the upper limits of durability, maybe the more precise measurements would be useful (is there abuse that a NXS will survive but a Trijicon will not? Maybe, but I'm not jumping out of a plane with my rifle so either one probably exceeds my durability needs). To weed out the garbage, a threshold that all the good stuff passes and the poorly designed scopes will not is obviously pretty effective.

Consider also the unintended consequences... Does that carefully calibrated impact actually replicate the stresses a mounted scope undergoes when it lands? How do we verify that we're actually measuring what we hope we're measuring? I would take a test that prioritizes accuracy over precision any day. I don't care if the impact is exactly the same every time, I do care if it is actually putting the stress on a scope that it receives from accidental impacts in the field.

Another one of the statements I made in the OP was that this test has been referred to in many threads by many people as the "gold standard", and if we're going to call anything a gold standard, we ought to improve the test and iron out variability.

Can you link to a few threads where people have called it that? I can't remember ever seeing it referred to that way. I certainly haven't seen what I'd consider "many" people in "many threads," and I would say I follow the topic fairly closely.

Also, can you describe the variability in results that you've seen? Do we need to measure height for the drop to the nearest cm? Nearest mm? Nearest μm? Can you describe the improvement in results we would see from reducing that input variability by those amounts?
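As a rough sketch of why that question matters: impact energy E = m*g*h is linear in height, so the relative change in energy equals the relative change in height, and mass cancels entirely. The 1 m drop height below is a hypothetical number for illustration, not the protocol's actual height:

```python
def energy_change_pct(height_m: float, delta_m: float) -> float:
    """Percent change in impact energy for a height error of delta_m.

    E = m * g * h is linear in h, so m and g cancel and the relative
    energy change equals the relative height change.
    """
    return 100.0 * delta_m / height_m

# Hypothetical 1 m drop height with cm, mm, and micrometer measurement error
for delta in (0.01, 0.001, 1e-6):
    print(f"±{delta:g} m -> ±{energy_change_pct(1.0, delta):.4f}% impact energy")
# ±0.01 m   -> ±1.0000% impact energy
# ±0.001 m  -> ±0.1000% impact energy
# ±1e-06 m  -> ±0.0001% impact energy
```

In other words, under this assumed geometry, measuring the drop height any finer than the nearest centimeter changes the delivered energy by well under a percent.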
 
Rifles fall over or are dropped all of the time. I want my rifle to be able to handle that. Mine fell over pretty hard at an NRL match; maybe I shouldn't have leaned it against that side-by-side like that, but it happened, partway through day 1. I really wasn't worried about it (NX8 in UM Tikka rings). Placed 5th 🤷‍♂️. Zero was fine. A scope handling recoil is like the bare minimum; pretty sure the drop tests are there to replicate drops too.
Of course they are. Evidently everyone except him knows that.
 
I, for one, don't want a scientific, repeatable test. I want some variables because I am using the scope in a variety of situations. And I can do the test myself
That's not how it works. Unless you're OK with saying the scope that got a MUCH harder impact and failed is "worse" than a scope that accidentally landed barrel-first, absorbing much of the impact, and therefore "accidentally" managed to pass, you ABSOLUTELY want an eval that is repeatable and consistent. Without consistency and repeatability there can be no comparison. I suspect what you don't want is a test that is so specific that it either doesn't address some modes of failure at all, or can be "gamed" by some cheap construction that allows it to pass a test without actually making it more durable in use. Those are very different things, though.
Which brings up this:

So, talking from an industry-standard perspective, with standards being set to avoid cheating: this brings up the point everyone ignores, which is that an industry standard would be HARDER on the scope than the current drop test. All of the potential errors in the current test work in favor of the scope, with the exception of shooter error if present (which could go either way).

Making a standard that doesn't allow cheating is HARD. For example, the UIAA test for climbing helmets specifies that a helmet must pass a penetration test, to ensure that a fall onto a sharp object doesn't penetrate the helmet and enter your skull. They designed a contraption with a pointy end and specify the test apparatus, mass of the object, etc. So what's the first thing manufacturers do? They put a tiny square of impact-resistant material right where the point lands in the test... WITHOUT doing anything at all to protect you from an impact anywhere else. But it's "penetration resistant" now. That is "gaming" a test without providing any benefit to the user. The exact same thing could EASILY happen with a standardized scope test. I.e., the "impact arm" that hits the turret, or the contraption that only allows the impact to always hit one spot, allows (just for example) a rubber-armored turret cap to be used, with NO other modification. It might not change the reliability of the scope at all, but it could reliably pass the test. A standardized test might or might not be easier or harder, but what it would definitely be is more predictable. And that certainly allows it to miss some things, as well as allows it to be "gamed".

I say, be careful what you wish for if you're looking for a perfect test. IMO the perfect test doesn't exist.

(ETA: I'm not suggesting the evals are totally consistent, or that they are inconsistent. They DO have opportunities for inconsistency, which certainly invites criticism. But it's not clear to me that the results would be different if those inconsistencies were addressed, and it's entirely possible that in doing so, other problems would be created within the test that allow for a higher % of false passes.)
 
To be fair...

Any RF device has a zero: the relationship of the reticle to the laser. So yes, an RF device can lose zero, and in fact, most of them seem to come improperly zeroed... like my $3650 Leica Geovid Pro AB+'s 🤦
Yep, once again, everyone except him knows that.
 