Leupold catching on? Impact testing? Huh?

Dobermann

WKR
Joined
Sep 17, 2016
Messages
1,942
Location
EnZed
Does anyone have a link to what Koshkin said about these drop tests, etc.? I think he's an optical engineer, if I'm not mistaken. I find the tests interesting personally, but not something I'd "test" myself.
Not sure there's much to be gained going down that path, to be honest ... I've read and listened to some of his critiques of the process (some of it is only a sentence or two, buried in one-to-two-hour rambling videos, so it would take dozens, if not hundreds, of hours to track it all down), and he made a bunch of unfounded and unhelpful claims ... it didn't really advance anything.

There have also been some threads on the Hide where the process has been misrepresented and/or poorly understood, as well as a couple of people trying to cast aspersions on Form even though they'd never interacted with him; they seemed to just be annoyed by the tests and wanted to make it personal somehow. Yawn.

Whenever 'the new hotness' scope is announced on the Hide, I genuinely ask if anyone has drop-tested it yet. I don't think anyone has ever said they have ... responses usually amount to 'Well, the factory impact-tests them' or 'If the military selected them, they must be good'.

The more daring souls list the various scopes they've seen go down in the field; Burris, Leupold, and Vortex seem to feature when this is actually mentioned, but this sometimes results in a dogpile of others saying how good those brands are.

Heck, even Frank and Mark published some stats on simple zero retention they'd seen at classes, and then had to remove some of that due to backlash. (For what it's worth, their data on Bushnell's various Elite Tactical lines and Nightforce showed good results, and they have mentioned using SWFAs as backup/loaner scopes because they're reliable; Leupold recorded at least one catastrophic failure.)
 
Last edited:
Joined
Jun 27, 2022
Messages
1,264
You mean optics engineers who shoot fewer rounds in a year than some will shoot this week? Or ones who make statements about how great scopes are after a couple hundred rounds from a bench?


I will bet you $5,000 that we can go to any outdoor store and you can buy a NF NXS, I’ll mount it on my rifle, zero it, you can drop it per the stated eval, I’ll shoot it, and it will hold zero.

On your flawed test that a SWFA fixed power somehow passed? I knocked one of those things over one time and the elevation knob took the brunt of it and bent the turret enough that it was binding. It wasn't anywhere near the height of your drop test.
 

Helislacker

Lil-Rokslider
Joined
Jan 14, 2022
Messages
109
Manufacturers use collimation tables: scientific equipment that completely eliminates shooter error.
Except they don't replicate actually dropping the rifle, or induce the same zero shifts that are possible from even small drops, which are bound to happen from time to time if you carry a rifle long enough. I see you're always on here pushing back against the drop tests and their results, despite Form offering you the chance to come out and test it yourself, with the opportunity to get a brand-new optic out of the deal. Why wouldn't you take him up on that? Because deep down you know he's probably right.

I also assume it’s because you have an emotional attachment to some optics that don’t typically fare so well, and aren’t willing to sell them to start fresh. All I can say to that is, don’t develop an emotional attachment to tools.
 

ELKhunter60

Lil-Rokslider
Joined
Aug 26, 2018
Messages
230
Location
Sparta, Michigan
You mean optics engineers who shoot fewer rounds in a year than some will shoot this week? Or ones who make statements about how great scopes are after a couple hundred rounds from a bench?


I will bet you $5,000 that we can go to any outdoor store and you can buy a NF NXS, I’ll mount it on my rifle, zero it, you can drop it per the stated eval, I’ll shoot it, and it will hold zero.
Don't let the guys get to you. I really liked your test and appreciate what you put into it. It's a real-world application, with results that show trends worth considering when purchasing a riflescope (my opinion).

I'm an engineer for a major car manufacturer; I manage part quality concerns and warranty issues. We have drop-test requirements on some parts, and I have found cases where the "engineering specification" for the drop test can be met, yet when technicians at the plant accidentally drop the same part on the floor before installing it in the vehicle, it breaks. One case I can think of: we require the part to still work after being dropped from 3 feet (I'll give you the English conversion, because everything in automotive is metric). The orientation of the part before it's dropped is not specified. The supplier of the part has data showing the part passes. I did the test and it failed. When I approached our engineer and the supplier about this, they told me I wasn't orienting the part correctly. Where was the part orientation specified? Nowhere. They had simply figured out how to make the part pass our specification by holding it a certain way.
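To make that concrete, here's a minimal sketch (in Python, with completely made-up part behavior and numbers, not our actual spec) of how randomizing the orientation closes exactly the loophole that supplier exploited:

```python
import random

# Hypothetical numbers for illustration only -- not any manufacturer's real spec.
DROP_HEIGHT_M = 0.914  # ~3 ft
N_DROPS = 20

class ToyPart:
    """Toy stand-in: survives the drop unless it lands near 'turret-up' (90 +/- 30 deg)."""
    def __init__(self):
        self.broken = False

    def drop(self, height_m, pitch_deg):
        if height_m >= DROP_HEIGHT_M and abs(pitch_deg - 90) < 30:
            self.broken = True

    def functions_ok(self):
        return not self.broken

def pass_rate(n=N_DROPS, seed=1):
    rng = random.Random(seed)  # seeded so the trial is repeatable
    ok = 0
    for _ in range(n):
        part = ToyPart()  # fresh unit each drop, like on the plant floor
        part.drop(DROP_HEIGHT_M, pitch_deg=rng.uniform(0, 360))  # random attitude
        ok += part.functions_ok()
    return ok / n

print(pass_rate())  # < 1.0: random orientations expose failures a fixed-attitude test hides
```

A supplier holding the part "a certain way" is equivalent to always choosing an attitude outside the weak zone; sampling the orientation takes that choice away.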

If folks don't want to use your test method when evaluating which scope they buy, that's OK. If they want to tell you your method was stupid, that's OK too. I looked at your report yesterday for the first time, and I've been rethinking my next purchase because of it.
 

prm

WKR
Joined
Mar 31, 2017
Messages
2,253
Location
No. VA
In the military we do two basic types of testing:

(1) developmental testing where engineers see if the weapon system meets all the specs

(2) operational testing where the actual user in an operational environment sees if it’s suitable (doesn’t break where he uses it), effective (gets the job done), sustainable (can be kept working). This also includes live fire test and evaluation to confirm lethality (disables/disrupts enemy capability) and survivability (doesn’t get disabled/disrupted)

It's totally conceivable that we develop something that meets all specs but that the operator rejects as useless.

Perhaps the optics companies are doing DT&E but not OT&E
I would modify that to state that optics companies have no requirement for side impacts, so side impact is never tested in DT or OT. I actually think many optics companies do not do any sort of OT.
Trijicon does list DT considerations that include impacts and zero retention, though it is not clear that zero retention after drops is part of their testing.
 

Sled

WKR
Joined
Jun 11, 2018
Messages
2,265
Location
Utah
The Leupold statement tells us that they are testing zero retention: 5,000 impacts at three times the g-force of .308 Winchester recoil, with full quality control and quality assurance before and after.

Leupold is telling us that they are using industry-standard quality processes to provide a high-quality product.

This is simply how things are done in very professional engineering organizations.

That's the same test I do with my buddy's beer from time to time.


FWIW, Leupold is giving their scopes the Sherwin-Williams treatment: just making sure each one still has a reticle and still looks like a scope on the outside.
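For what it's worth, a "g-force" spec like that is underdefined without an impulse duration. Here's a rough back-of-envelope sketch in Python; the load data, rifle weight, and especially the 2 ms impulse are my own assumptions, not Leupold's figures:

```python
# Back-of-envelope free-recoil numbers for .308 Win, to put "three times the
# g-force of .308 recoil" in context. All inputs here are assumptions.
GRAIN_TO_KG = 6.479891e-5
FPS_TO_MPS = 0.3048

bullet_kg = 168 * GRAIN_TO_KG   # 168 gr bullet (assumed load)
powder_kg = 44 * GRAIN_TO_KG    # 44 gr charge (assumed)
muzzle_v = 2650 * FPS_TO_MPS    # assumed muzzle velocity, converted to m/s
gas_v = 4000 * FPS_TO_MPS       # common rule-of-thumb propellant-gas velocity
rifle_kg = 8 * 0.4536           # 8 lb rifle-plus-scope (assumed)

# Conservation of momentum gives the free-recoil velocity of the rifle:
recoil_v = (bullet_kg * muzzle_v + powder_kg * gas_v) / rifle_kg
recoil_energy = 0.5 * rifle_kg * recoil_v**2
print(f"recoil velocity ~{recoil_v:.1f} m/s, energy ~{recoil_energy:.0f} J")

# Peak g on the scope depends entirely on how fast that velocity change happens.
# Assuming a ~2 ms impulse, the *average* acceleration works out to:
dt_s = 0.002
g_avg = recoil_v / dt_s / 9.81
print(f"average acceleration over {dt_s*1000:.0f} ms: ~{g_avg:.0f} g")
```

The takeaway: "3x the g of .308" describes the input, not whether the erector held zero afterward, and the video doesn't show that part.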
 

Deywalker

FNG
Joined
Sep 18, 2021
Messages
84
On your flawed test that a SWFA fixed power somehow passed? I knocked one of those things over one time and the elevation knob took the brunt of it and bent the turret enough that it was binding. It wasn't anywhere near the height of your drop test.
You must be gifted at breaking scopes. I dropped mine onto a very thin Midway mat on top of rocks from waist height about six times. The worst physical damage it took was a few nicks from breaking through the mat.
 

Attachments

  • PXL_20221116_012205094.jpg (193.6 KB)

Reburn

Mayhem Contributor
Joined
Feb 10, 2019
Messages
3,439
Location
Central Texas
On your flawed test that a SWFA fixed power somehow passed? I knocked one of those things over one time and the elevation knob took the brunt of it and bent the turret enough that it was binding. It wasn't anywhere near the height of your drop test.

You keep calling the test flawed.
What are the optics tests that you are doing for hunters?
You keep referencing manufacturers. Is that you? Are you in that video? Or are you just drinking their Kool-Aid?
What does it matter if a scope passes the "manufacturer tests" but fails for me in the field?
Are you saying anyone who accidentally drops their scope from waist height has damaged it and must stop their hunt because the scope will no longer function?
If that is the case, why do some scopes pass?
 

Macintosh

WKR
Joined
Feb 17, 2018
Messages
2,774
I have no doubt Leupold and many other manufacturers do scientific testing, but if they aren't testing for the specific things I care about (i.e., zero retention under field use), that testing has zero functional use to me. Can someone point me to where in that 49-minute video they talk about how they test zero retention after those impacts (or shaking, or whatever they do to simulate abuse), tracking in combination with impact, etc.? The machine shown at the beginning wasn't even on a rifle, so how is it testing zero retention? Is there another section where they show this?

Because 100% of my Leupold scopes have failed to hold zero between range trips, across three examples (and I didn't even drop any of them), I am certainly not giving Leupold the benefit of the doubt here. If they take that scope and mount it on a rifle AFTER abusing it, then they are testing whether it CAN be re-zeroed, not whether it retains zero. Honestly, I wouldn't give Nightforce or Trijicon or SWFA the benefit of the doubt either, because all of them are trying to sell me something, and I DON'T NEED to take their word for it; I have other options.

The criticism of Form's tests as not being scientific is perfectly valid. It's not a scientific test with only one scope tested, and without some apparatus to make the drops quantifiably repeatable. So what? As a quick-and-dirty way to approximate a scientific test, one that compresses a couple years of use into an hour and actually tests the zero retention and functionality of scopes under abusive conditions, what other option do people have to compare reliability between scopes? Is there ANY other option? Frankly, without another third-party test that actually focuses on zero retention under abuse, there is NO OTHER OPTION for getting a relative sense of reliability between scopes. If there is, please share; I haven't seen it.

Without any other option, the burden of showing a better solution is not on Form, it’s on those who are dismissing what he’s doing.

My other rant is that it seems to me many people are taking the wrong info from these tests. If people think these drop tests are too harsh and are failing scopes that would otherwise be good, that's a perfectly valid hypothesis. Without some big scientific study to correlate the drop test to real-world use and real-world failures, it's all conjecture to say where the line SHOULD be between pass and fail, and what level of abuse a test needs in order to identify that. Currently we are relying on Form's experience (which none of us really knows anything about, other than that it seems extensive) and on the people who have had personal failures that correlate with the testing (i.e., me and others).

However, I don't think that's what's being shown. If you want to differentiate between products, the test has to be able to tell the difference between those products. If 80% of the scopes pass, the test does not tell you which scopes are the most reliable. If you want to know which scopes are the most reliable, you have to fail most of the scopes. Regardless of whether this test is realistic, at least from the point of view of impact, it is differentiating a few top performers from the rest of the pack. So as I see it, it is irrelevant whether this test is harsher than it needs to be to simulate any one person's real use; the point is to identify the "cream of the crop," the most reliable products, and to do that the test has to be harsh enough to see that difference and ID the top few percent that are the most durable.

So if someone says their Leupold or Vortex or whatever scope has been durable for them, even though it failed Form's drop tests, that's perfectly reasonable. It does not stop the test from showing that the scopes that passed are the MOST durable (against impacts) and are MORE durable (against impacts) than the ones that failed. But it is reasonable, absent more info, to question to what degree a failure should cause someone NOT to buy a certain scope. And for a company that manufactures riflescopes, many of which are designed for very average use, it's perfectly reasonable to say that a test like this doesn't accomplish what they need. It's just as reasonable for people who have had problems with those scopes to look for a way to identify products that will meet their more rigorous needs, in a way that is not currently being provided.
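Back to the 80% point above: a toy simulation (Python, with made-up durability numbers) shows why an easy test can't rank anything:

```python
import random

# Toy illustration: a test only ranks products if it is harsh enough to fail
# most of them. Ten scopes with hidden, made-up durability scores in [0, 1].
random.seed(0)
scopes = {f"scope_{i}": random.uniform(0, 1) for i in range(10)}

def survivors(test_severity):
    """A scope passes if its (hidden) durability exceeds the test severity."""
    return sorted(name for name, d in scopes.items() if d > test_severity)

print(len(survivors(0.2)), "pass the easy test")  # nearly all pass -> no ranking info
print(survivors(0.8), "pass the harsh test")      # only the most durable remain
```

When nearly everything survives, the result is one big undifferentiated "pass" bucket; only a harsh test separates out the top few.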
I'd love to sit down with an engineering class or something and develop truly scientific testing that would differentiate between average and above average, but also identify the top, say, 10%, and possibly incorporate both impact and vibration or some other mode of abuse that might(?) produce a very different outcome, and attempt to correlate it all with actual field failures. It seems to me that would be the most useful thing for consumers: being able to say, "OK, this scope is more reliable than about half the scopes on the market, and these other couple of scopes are the ones I want when I REALLY need to rely on it in any conditions." It would take a lot of work and testing to arrive at a good test procedure like this. So who teaches a college engineering course that could use their students' work to put this together for us?
 
Last edited:

Reburn

Mayhem Contributor
Joined
Feb 10, 2019
Messages
3,439
Location
Central Texas
without some big scientific test to correlate the drop test to real world use and real world failures, it’s all conjecture to say where the line SHOULD be between pass and fail and what level of abuse is required during a test to identify that.

Your post was great. Thank you.

One "but," though: the line is what the test is. Three-foot drops while holding zero and continuing to function normally.

The drop heights and the randomness of the drops, while less scientific, are extremely valid in my opinion.
It does appear that efforts have been made to "standardize" how the rifle is dropped on the mat.
As it happens, my knuckles curl at 32", so a waist drop being 3 feet is exactly where it would be on me.
The 18" drop is the rifle being dumped over on its bipod.
The only drop he doesn't do is standing the rifle up on its butt and letting it fall over, simulating a rifle tipping over while leaned against a tree or wall.
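If it helps, the sequence as I read it boils down to something like this (Python sketch; the rep counts and the 1 MOA pass threshold are my guesses, not Form's official write-up):

```python
# 18" bipod dumps, then 36" waist-height drops onto a mat, zero checked after each.
DROPS_IN = [18, 18, 18, 36, 36, 36]  # assumed reps, for illustration only
THRESHOLD_MOA = 1.0                  # assumed allowable zero shift

def held_zero(shifts_moa):
    """Pass = measured zero shift stays under the threshold after every drop."""
    return all(abs(s) <= THRESHOLD_MOA for s in shifts_moa)

# One measured shift per drop in DROPS_IN, in MOA:
print(held_zero([0.2, 0.1, 0.3, 0.4, 0.2, 0.3]))  # True: held zero throughout
print(held_zero([0.2, 0.1, 2.6, 0.3, 0.2, 0.3]))  # False: one drop walked the zero
```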
 
Last edited:

Macintosh

WKR
Joined
Feb 17, 2018
Messages
2,774
@Reburn I don't disagree with that ^^. My main point was to address the many people who are dismissing the testing because they have not personally seen failures with scopes that failed here. They are saying the test is irrelevant because they don't personally drop their stuff. I am saying that if you want to figure out what the most durable products are, you have to make the test harsh enough to see which ones rise to the top. And that leaves everyone in the middle wondering how to apply it to their scope purchase. With a truly scientific version of this test, it would be perfectly reasonable to use an "A" rating and a "B" rating, where the B rating applies to scopes that are above average (maybe meaning they pass the 18-inch drops but not the 36-inch drops), and the A rating applies to the very few products that pass the most rigorous part of the test. As an industry-wide test, something like this might make sense and allow better transparency for most people, while for someone's personally developed test, with clearly very high standards for durability, it makes much more sense to focus only on the products that will meet their particular needs. Ultimately, what I'm saying is that there are valid criticisms of this testing, but those criticisms don't invalidate what it DOES tell us, and people really have no other choice; they just have to use the testing in context, understand what it can and cannot tell them, or come up with a better test.
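That A/B idea is simple enough to state precisely; a minimal sketch (tier names and logic are just my illustration of the scheme above, not an existing rating system):

```python
def rate_scope(held_zero_18in: bool, held_zero_36in: bool) -> str:
    """Tiered rating per the scheme above: B = above average, A = top tier."""
    if held_zero_18in and held_zero_36in:
        return "A"     # passed the full, most rigorous sequence
    if held_zero_18in:
        return "B"     # above average: fine at 18", shifted at 36"
    return "fail"      # shifted zero even at 18"

print(rate_scope(True, True))   # "A"
print(rate_scope(True, False))  # "B"
```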
 
Last edited:

mtnwrunner

Super Moderator
Staff member
Shoot2HuntU
Joined
Oct 2, 2012
Messages
4,117
Location
Lowman, Idaho
Don't let the guys get to you. I really liked your test and appreciate what you put into it. It's a real-world application, with results that show trends worth considering when purchasing a riflescope (my opinion).

I'm an engineer for a major car manufacturer; I manage part quality concerns and warranty issues. We have drop-test requirements on some parts, and I have found cases where the "engineering specification" for the drop test can be met, yet when technicians at the plant accidentally drop the same part on the floor before installing it in the vehicle, it breaks. One case I can think of: we require the part to still work after being dropped from 3 feet (I'll give you the English conversion, because everything in automotive is metric). The orientation of the part before it's dropped is not specified. The supplier of the part has data showing the part passes. I did the test and it failed. When I approached our engineer and the supplier about this, they told me I wasn't orienting the part correctly. Where was the part orientation specified? Nowhere. They had simply figured out how to make the part pass our specification by holding it a certain way.

If folks don't want to use your test method when evaluating which scope they buy, that's OK. If they want to tell you your method was stupid, that's OK too. I looked at your report yesterday for the first time, and I've been rethinking my next purchase because of it.
I think you should quit your day job and design riflescopes!
I know a "consultant" I would highly recommend. He's kinda backwoods, though........

Randy
 

Reburn

Mayhem Contributor
Joined
Feb 10, 2019
Messages
3,439
Location
Central Texas
@Reburn I don't disagree with that ^^. My main point was to address the many people who are dismissing the testing because they have not personally seen failures with scopes that failed here. They are saying the test is irrelevant because they don't personally drop their stuff. I am saying that if you want to figure out what the most durable products are, you have to make the test harsh enough to see which ones rise to the top. And that leaves everyone in the middle wondering how to apply it to their scope purchase. With a truly scientific version of this test, it would be perfectly reasonable to use an "A" rating and a "B" rating, where the B rating applies to scopes that are above average (maybe meaning they pass the 18-inch drops but not the 36-inch drops), and the A rating applies to the very few products that pass the most rigorous part of the test. As an industry-wide test, something like this might make sense and allow better transparency for most people, while for someone's personally developed test, with clearly very high standards for durability, it makes much more sense to focus only on the products that will meet their particular needs. Ultimately, what I'm saying is that there are valid criticisms of this testing, but those criticisms don't invalidate what it DOES tell us, and people really have no other choice; they just have to use the testing in context, understand what it can and cannot tell them, or come up with a better test.

post #5

I had to go back and reread what Form had written. He and Ryan basically addressed this when stating that the 18" drops represent normal use vs. the 36" drops.
 
Joined
Jan 5, 2022
Messages
746
FWIW, Leupold has been using their impact-testing machine, or "punisher," as a marketing tool for better than a decade. At least 10 years ago, I had a talk with them about a 3.5-10 that went wonky, and they brought the machine up several times, as if to placate me after the failures I'd experienced. I've read several print and internet articles wherein the author was a thinly veiled Leupold shill, describing the virtues of Leupold scopes and their scope-shaking/impact machine. I have yet to read about the functionality-testing component of the process; my understanding is that anything less than catastrophic failure is a pass.

Maybe we can get those two dandies Leupold had do a Q&A over on 24 HCF to do something similar on RS. It may sound a bit hyperbolic, but it truly was one of the most stupefying things I've seen on the internet. "Tap the turrets after adjustment" and "turn past and come back when dialing" are just a couple of the quotes I recall that made sure my money will be going elsewhere until they acknowledge they have issues and make substantive changes.
 