Questioning the "Gold Standard Drop Test" and the conclusions of "this scope brand does/doesn't hold zero"

That's about all I've got for this month's Journal Club.
I didn't know surgeons could actually interpret the literature...😉

Sorry man, but the emergency medicine physician in me just had to throw a jab. I've had the same thought, but this kind of third party testing would be exceedingly hard to come by. It'd be interesting to see the manufacturers post their own results and have an industry standard test across the board.
 
I’m not suggesting that scopes that don’t hold zero aren’t still capable of putting bullets generally where a guy aims, well enough to kill critters sometimes.

That’s another fair assessment of all of this - the argument can be made that most people can’t shoot well enough to ever put the difference of a reliable scope to use.

Just because you’ve seen a bunch of people kill animals with scopes doesn’t mean those scopes reliably hold zero in field use. It might mean that they did well enough to equate to success across a small sample size (though an impressive one).

But is it really controversial to look at most of the scopes failing field evaluations and extrapolate to the broader industry?
With regard to the bolded text above, I agree most don't shoot with enough precision to know whether it's a scope problem or a shooter problem. Blaming the gear is the easy button.

And it doesn't mean that their scope doesn't hold zero either. Your point of view is the typical RS mentality on the subject that I wholeheartedly disagree with.

Nope, doesn't work that way. Small sample sizes may/may not be indicative of a larger sample no matter the product.
 
I only read the first couple of pages but holy cow...attacking the poster's profession and integrity over a legitimate question!? Some of you need to get a life and put down the Kool-Aid. Anyone testing something should be prepared to defend their methods. Form does, and appears to handle it well. I think he understands that his methods are being questioned, not the man. I agree the tests have some significance but are not the definitive conclusion, which I believe is virtually impossible without Elon Musk money. His rabid disciples are a bit over the top though. There is a group that believes these tests are gospel and goes on every website and YT comments section proclaiming their allegiance to NF, SWFA, or whatever brand.

I do want to thank Form for doing the tests and those who donate a scope to test. His work is appreciated.
 

I think we agree on far more than we disagree on.

I don’t think my perspective is typical of folks here, but that’s my opinion.

One nit I'll pick is with regard to small sample sizes always having the same statistical significance.

Do you assume that scopes holding zero would fall along a normal distribution (a bell curve)? If you do, your point about small sample sizes makes sense.

I don’t think there’s a normal distribution of scopes holding zero in field use. Because of that, when the first scope of a brand and model fails a field evaluation of this sort, it is really useful information with a lot of predictive validity.
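To make that concrete with a toy example (all numbers below are hypothetical, and the bimodal "robust vs. fragile" split is my assumption, not an established fact): if scope models cluster into reliable and unreliable designs rather than spreading along a bell curve, a single observed failure shifts the odds sharply.

```python
# Toy Bayesian update: how informative is one drop-test failure if scope
# models are roughly bimodal ("robust" vs "fragile") rather than normally
# distributed? Every number here is a made-up illustration.

P_FRAGILE = 0.5          # prior: half of models are fragile (assumption)
P_FAIL_IF_FRAGILE = 0.8  # a fragile model fails a given drop (assumption)
P_FAIL_IF_ROBUST = 0.05  # a robust model rarely fails (assumption)

def posterior_fragile_after_failure(p_fragile, p_f_frag, p_f_rob):
    """P(model is fragile | one observed drop-test failure), via Bayes' rule."""
    num = p_f_frag * p_fragile
    den = num + p_f_rob * (1.0 - p_fragile)
    return num / den

p = posterior_fragile_after_failure(P_FRAGILE, P_FAIL_IF_FRAGILE, P_FAIL_IF_ROBUST)
print(f"P(fragile | 1 failure) = {p:.2f}")  # ~0.94 under these assumed numbers
```

Under a normal-distribution assumption one failure tells you little; under a bimodal assumption, even a single data point moves the estimate a long way.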
 
Some pretty rough responses here for a legitimate question. Ultimately there should be a defensible reason or reasons for testing performed, gear or otherwise. Instead of getting offended, offer up a good counter argument.

We can’t drop every scope ever made to build a complete data picture. So the N question is a good one. We are accepting that there will be data outliers.

But, enough scopes have been dropped in a fairly consistent manner to make reasonable conclusions about zero shift. Based on the current data, I would decide to buy a SWFA, NF, Trijicon, or Maven if zero retention was important to me. For the same reason, I would not choose to buy other commercial optics.

Will there be a SWFA, NF, Trijicon, or Maven that has zero shift from a drop/bump/impact someday? Probably…but most scopes from those manufacturers have demonstrated good zero retention across an extensive data set, which works for me.
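As a rough sanity check on what a clean track record can and can't tell you, the statistical "rule of three" gives an approximate 95% upper bound of 3/n on the true failure rate after n independent trials with zero failures. The drop counts below are placeholders, not actual Rokslide tallies.

```python
# "Rule of three": after n trials with zero failures, an approximate
# 95% upper confidence bound on the true failure probability is 3/n.

def rule_of_three_upper_bound(n_trials: int) -> float:
    """Approximate 95% upper bound on failure rate given 0 failures in n trials."""
    if n_trials <= 0:
        raise ValueError("need at least one trial")
    return 3.0 / n_trials

# Illustrative trial counts only:
for n in (10, 30, 100):
    bound = rule_of_three_upper_bound(n)
    print(f"{n:>3} clean drops -> true failure rate likely below {bound:.0%}")
```

The takeaway cuts both ways: a dozen clean drops doesn't prove a model is bulletproof, but it does bound how bad it can plausibly be, which is often enough for a purchase decision.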

And for the one scope I’m aware of that was specifically designed around the drop test, it’s been absolutely solid for zero retention. Weird.
 
People are so sure they won’t get shocked (their setups hold zero) that they’ll sit around all day and pontificate about it. When it comes right down to it they won’t stick their finger in. They are happy to encourage others that it’s ok to stick your finger in, no consequences (buy crap scopes).

Someone else says “hey, that’s dumb and there are consequences,” and the consequences are demonstrable, but no… let’s sit around and argue about how that can’t be correct.

Test your own theories instead of telling people who have done it they are wrong for the way they did it.

That help?
Not really.

What is the purpose of this forum if not for us to learn from others without having to repeat everything?
 

Agreed.

However, you can easily do it yourself. The root origin of all this was a guy intentionally dropping his rifle and seeing what happened, then attempting to explain to the world why fudd logic was flawed.

That whole "intentionally drop your rifle" part is where the major fundamental shift in mindset comes into play. Get in that mindset and all of a sudden you don't really need Form for anything, which IMO is his greatest contribution around here. The data is great, but the shift in mindset to actually testing my own stuff is where the real value is, at least for me.

At work, we always try to break things before we release them out into the wild. Why was I basically doing the complete opposite with my rifles?
 

I think we are saying the same thing? You can either trust the drop test and choose products that are less likely to fail. Or if you don’t trust them try it yourself and see. 🤷
 
I see what you are saying.

I do think bitching and moaning should be called out for sure.

Constructive criticism can be beneficial. Test data posted online deserves to be looked at with a critical eye imo. No one is above learning.
 
I didn't care about the drop tests but did like the bouncing-around-in-the-back-of-a-truck test. I've had some wandering scopes before and never could pinpoint the cause of the wandering crosshairs.

I've been taking a rifle with me on trips to our place and it has fallen twice and been hit by a tractor bucket. So much for “nothing happens to my gun.” ;) One fall was about 30 inches off a hotel luggage cart when the strap on the case failed.
 
It would be nice if we could have this discussion without loaded terms like “gospel” and “gold standard”. Such things do not exist. Having a way to evaluate optics is a good thing. Expecting it to be perfect is just flawed logic. Large sample sizes are great if you can get them, but for most things that is just not an option. The real benefit of this drop testing experiment is that it’s getting some of the manufacturers to take notice and value durability and zero retention.
 
The last page or two has accelerated the discussion to thoughtful responses from several angles that I imagined/hoped for when starting the thread.

With a repeatable, controlled test, one can compare that controlled data to the random real-world drops. You can still use the drop test, and perhaps even validate it, or find differences in outcomes and explore those as weak points for improvement.

I have in my mind a drop jig that I hang from my garage rafters over my rifle bench vise. It would cost me $0 to build, deliver measurable, relatively precise blows, and anyone could take the same recipe and reproduce the same test to exacting specifications, down to the rifle system. I know the force factors would vary from a drop test; hence, they are entirely different tests. We could examine/discover trends, i.e.: the weak point tends to be the windage turret, or this optic tends to lose zero at X joules of impact energy (equivalent to a fall onto a rock from X height…).
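For a rough back-of-the-envelope equivalence between a pendulum blow and a free fall (a sketch under simplifying assumptions only: energy conservation, no air resistance, and ignoring rifle orientation, padding, and impact surface, which matter a lot in practice; the function names are mine, not from any existing jig):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def drop_impact_speed(height_m: float) -> float:
    """Speed at impact after a free fall from height_m: v = sqrt(2*g*h)."""
    return math.sqrt(2 * G * height_m)

def pendulum_release_height(target_speed: float) -> float:
    """Vertical rise of a pendulum bob whose swing matches target impact speed.
    Energy conservation: m*g*h = 0.5*m*v^2, so h = v^2 / (2g)."""
    return target_speed ** 2 / (2 * G)

# A 36-inch (~0.9 m) drop, the height used in the field evals:
v = drop_impact_speed(0.9)
print(f"0.9 m fall hits at {v:.2f} m/s; "
      f"pendulum needs {pendulum_release_height(v):.2f} m of vertical rise")
```

As expected from the symmetry of the energy equation, matching a 0.9 m fall requires 0.9 m of pendulum rise; the practical difference between the two tests is in how the energy is delivered, not how much of it there is.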

The “millions of dollars” claim is pretty outlandish. The biggest thing is the access to optics and time. There is an opportunity here as well. There is value in this data.

The other opportunity here is, to learn, improve processes, and obtain more, not necessarily better, data. Data I am currently missing when trying to navigate critical gear decisions. That is the intent of the thread, to think/discuss the nitty gritty nuances of our hobbies and find enjoyment in the process.
 
Although the drop test is perhaps not up to statistical standards, I think it is a representation of a scope's durability.
Leupold scopes as a group have mostly failed the drop test.
Nightforce and Trijicon as a group have mostly passed the test.

I personally have owned perhaps 30 Leupold scopes in my life. I have noticed a drop in quality in the last 5 to 10 years. The CDS system is just not that reliable and repeatable.

After reading the Rokslide reviews on scope durability I am converting my scope selection to the Nightforce and Trijicon lines.

After 50 plus years of mountain hunting, I have experienced falls and horse wrecks…. I simply do not want to waste a precious big game tag on any piece of equipment that is not durable when I know better options are available.

Until we have manufacturers developing real-life repeatable durability tests, I am going with the not-perfect Rokslide scope tests.
 

Not saying you haven’t, but if you haven’t, it may help to read the first four posts of the field eval explanation and standards thread. There’s more to the eval than the drop test (namely tracking, RTZ, and zero retention over many rounds fired and miles rattled around in various vehicles, all while mounted on proven rifles and firing proven ammo), not to mention the “why,” the “history,” and the “how to take the results” explanations.

Are you gonna repeat all that and/or improve upon it, too? Great if so, but I wouldn’t have time to. Or is your goal just to make the impact portion of the drop test more to your liking? That’s fine too, but keep in mind that drop/zero retention failures don’t necessarily mean external (or internal) damage caused by a blow to a certain spot on the scope. Those failures usually just reveal that the guts are weak and shifted internally when the rifle system suddenly stopped moving after falling 36”, leaving no permanent damage. Just more food for thought.

Lastly I’ll say that if you are gonna create a new standardized pendulum test, it might make more sense to hang your apparatus from the joists instead of the rafters.
 
FYI:

Drop jigs and fixtures, including pendulums, have already been done with limited value. You should be able to find examples that are public information. Correlation to actual use can be extremely difficult.

Even electrodynamic shakers can be limited in usefulness without the right resources. This may ring a bell with anyone who has followed some manufacturers' infant mortality issues in the field.

If you really want to make a science project out of it, you'll need to fund competent people and extremely expensive equipment. You can't simply buy stuff and expect to get anything meaningful without expertise in test design, test execution, data collection, data processing, and reporting.

I have clients in various industries with large budgets, and they struggle with this sort of validation testing.

Most people would be better off doing their own evaluation, with their own rifles. There seems to be a lot of reliance on what does well with the Rokslide procedure, but that doesn't mean a scope that passes will actually function with your mounts or rifle.
 