Rifle scopes you'd love to see Form test

Don't see the need to. I kill 30 deer, about 75 hogs, and a few other things every year. My gear bounces around in UTVs, ATVs, and trucks 150 days a year. I've got a pretty good idea what works.
What's good enough for you is good enough for you.
 
Then it sounds like you have nothing to gain by reading any more on drop tests. If you have it all figured out, you can leave the rest of us to our griping about products that don't do one of the only things they are supposed to do.
 
I don't read them now. I asked Ryan Avery about them based on something he posted about NF and Trijicon; that's it. It got blown way out of proportion, obviously.
 
The biggest problem I see with your argument is that large numbers of hunters successfully hunt every year with scopes that failed the RS drop test; they've held zero and killed loads of game for years, hence the skepticism. 99% of hunters anywhere have never heard of RS or the RS drop test anyway.
When it comes to logical arguments, there's a difference between the evidence supporting the possibility of an outcome and the high probability and reliability of that outcome. When users either have light-duty demands of the scope, or make excuses for it and continue to make small adjustments every year while claiming that the scope works as intended, it does not say anything about the reliability of the scope working correctly in difficult conditions and with no excuses.

A good analogy would be vehicle reliability. As a hypothetical, suppose specific testing showed that a Toyota truck was more reliable and could take more abuse without breaking than a Ford truck, suggesting that Toyota trucks may be more reliable and durable in general. That result would not be invalidated by the fact that millions of people drive Ford trucks to work and move furniture every day, making repairs more often but accepting that as normal. Beyond fewer repairs, it's in harder use, like off-roading, that the durability advantage of the Toyota would become evident. For daily grocery trips, differentiating between the durability of Ford and Toyota trucks may be more difficult, and that's how most people use their trucks, but that doesn't mean the Toyota isn't generally more reliable and durable (again, hypothetically).
 
Can you provide links for both?

Another scope manufacturer was throwing scopes off the roof of their HQ or an adjacent building, but that petered out for reasons I'm not privy to.

That said, throwing a scope on the ground doesn't tell us much unless it's an obvious fail.

When I was doing product dev, we didn't have a drop requirement, but we did it anyway with production-representative models. It was considered severe abuse, but we used it for advertising. Customers expected it, too, but only because over-engineered products had set an unrealistic expectation in the past.

I left that industry, but now have clients that do drop tests for Hi-Rel and MIL-STD. I make sure that they don't make unfounded claims. You'd be surprised how many don't understand the complexity of a drop test.
While it's not drop-testing per se, in addition to the video above, there are multiple videos of NF staff banging scopes with significant lateral shock and then testing them on a collimator ... I think they've also done this live at various outdoor shows.
 
I went and picked a fight on SH, and it's going as expected. I asked for actual reasoned, evidence-based critique of the drop tests, and the straw men and ad hominems came out of the woodwork. I was actually hoping for real discussion, but I'm apparently the most naive person on the internet. It's turning into a dumpster fire, but if anyone is interested in further exploring the "expert" opinions refuting the evals, it's all there 🤣
If you hunt hard enough, you can find the Everyday Sniper podcast where Frank discusses how he and Marc used to test scopes and post the results ... and why he stopped doing that. He was pretty up-front about how it annoyed scope companies who paid for advertising on the Hide.

And to be clear: not throwing shade at Frank here, but the opposite - he was quite transparent that some scopes struggled (from memory of the spreadsheet, some Vortex had issues and Leupold had catastrophic failures). Not surprisingly, the NF and Bushnell Elites also did well. I also quoted him here years ago saying that he kept SWFA 5-25s as back-up scopes for classes for when other, more expensive scopes failed.

In other words: back when he was making his data public, there were a lot of similarities with Form's data. Main difference was that Frank and Marc were testing scopes out of the box and during class use, and not doing drop tests ... so there was some variance in results, but the headline outcomes were very similar.

And a lot of Frank's hard data back then did not square with the fanboyism (of Leupold and Vortex especially) that some on the Hide were very vocal about ... if you hunt hard on the Hide, you'll see him even call some people out about it ...
 
I've heard that argument ad nauseam on a couple of different forums, including yours on 24HCF. Fact is, there are probably many, many hunters out there whose gear works exactly like they say it does. Who's to say any different?

This isn't hypothetical... we had fleets of half-ton pickups, sometimes 75 at a time, at work in the oil fields of the Permian Basin. We had many samples of Z71s and F-150s that got well over 250k miles with proper maintenance. The Tundras of the early 2000s were the best ones they've ever made, and we had several of those go 300k+. The generations after that were no better than the Fords or Chevys.
 
If people could be honest and objective about it: I've never once doubted the RS drop test results, in the sense that the scopes that "pass" are very tough and durable. Never said otherwise.

In fact, there's a fellow up in the TX Panhandle next to where I've hunted for 25 years, last name Hodnett (often regarded as the best trainer in the world). I've had occasion to watch him and his "students" from all over the world many, many times, and even had some come out to see how far they could whack prairie dogs. The vast, vast majority of scopes used were NF of some sort, a scattering of S&Bs, and a scattering of Bushy Tacticals. Very few other brands were ever present. Hodnett is not as active today as he was, so I haven't been over there the past couple of years, as he's turned it over to his son. I doubt much has changed, though.

Another fellow down the road who knows a thing or two about shooting is the most decorated LR competition shooter in history, last name Tubb. He was using some sort of slightly modified Leupy last time I was over there. Go figure.
 
JG,

I have also heard the same argument from you multiple times, but it doesn't change the fact that the limitations of one use profile (either in durability requirements or simply in awareness of performance) don't invalidate test results that may apply to a different use profile. These drop tests also aren't being claimed to be universally applicable to a given brand or model of scope, as they are simply samples of one or two.

It was a hypothetical example, as I wasn't making a claim about trucks, just using them to illustrate the point about use profiles and testing results.
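
To put rough numbers on the "samples of one or two" point, here's a minimal back-of-the-envelope sketch. It assumes each tested scope is an independent pass/fail trial against some true failure rate for that model, which is my simplification and not anything claimed in the evals; it only shows how little a clean pass on one or two samples can rule out:

# Minimal sketch (my simplifying assumption, not the eval's stated model):
# treat each drop-tested scope as an independent pass/fail trial and ask what
# "zero failures in n samples" can still be consistent with.

def max_failure_rate(n_samples: int, confidence: float = 0.95) -> float:
    """Largest true per-sample failure rate still consistent, at the given
    confidence, with seeing zero failures in n_samples independent trials."""
    # Solve (1 - p) ** n_samples = 1 - confidence for p.
    return 1.0 - (1.0 - confidence) ** (1.0 / n_samples)

for n in (1, 2, 5, 10, 30):
    print(f"{n:>2} samples, 0 failures -> failure rate could still be up to "
          f"{max_failure_rate(n):.0%}")

Run it and one clean sample is still consistent with a model that fails roughly 95% of the time, and two clean samples with one that fails about 78% of the time, which is why a pass is best read as "this unit held up," not "this model rarely fails."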
 
It was a question.

I haven't seen either company publicly drop a scope. But take Trijicon, for instance. "Drop Tested" is in their marketing... an educated guess would be that they have dropped a few.

It's like baking a cake: you pick your ingredients. Some optics companies prioritize durability as their first ingredient, while others don't.


Understood, Ryan. I thought you had vids from those manufacturers. That was just my misunderstanding.

IIRC, Trijicon doesn't release specific details on their V/V testing.
 

Thanks - I've seen those NF whack-a-mole videos!

The problem with whack-a-mole is that it appears impressive to the casual observer, but it doesn't necessarily correlate with real-world use.

There used to be videos on YouTube, presumably from some SE Asian factory, showing the same whack-a-mole test with a collimator, done by a tech outdoors, in flip-flops, on a dirt floor. The scopes held "zero" just fine.
 

Jordan, JGR,

I think that you guys are actually on the same page, IF you think back to some of the old drop testing discussions at 24HrCF from 10+ years ago. Those drop "tests" were obviously well before it was a thing here at Rokslide, and the criteria here have changed since then. Do you recall those old threads at 24HrCF?

As I recall, the premise back then was that scopes that passed "seemed" to correlate with makes/models that anecdotally had fewer field failures under consistent/regular use, as detected by an astute shooter. And some makes, like SWFA, had corroborating warranty/service data. But the severity was an issue then and may still be. I recall Carl Ross breaking some scopes. I had pieces detach from scopes, and some made godawful noises at impact!

However, I don't recall anyone making claims that scopes that "failed" a drop eval were automatically doomed to lose zero or track erratically. In other words, the "evals" could be considered an overload test, a gut check on robustness. One could then speculate that more robustness would mean a longer service life, but a less robust scope might still be fine for many users. I think that approach jibes with what both of you are saying.

However, the Rokslide criteria have morphed. From the Scope Field Eval Explanation and Standards thread:

"It’s also noticed that scopes that don’t hold zero from 18” drops fail at an alarming rate from just normal, non abusive use. To the point that if it won’t hold zero from multiple 18” drops it’s known that it will lose zero or fail eventually, and usually quite quickly. It’s also noticed that if a scope model will consistently hold zero through the single 18” and 36” drops, but lose zero through the 3x3 36” drops, it will survive most general use without many issues, and can be a workable option, though eventually they have problems. And when a scope makes it through the whole eval, issues of any kind even with seriously horrible use are so rare as to be almost unknown."

The statements quoted above are much more definitive and are used to support the performance criteria now being applied at Rokslide. I'm not sure what Ryan and Formi have for sample sizes, or whether any extrapolation has been done, but I can see where some people get skeptical, especially if they have a scope that functions fine under various conditions, without failing at an alarming rate, even if it technically fails the evals here.
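
As a companion sketch, under the same simplifying assumption of independent pass/fail samples (my framing, not Ryan's or Form's), this is roughly how likely a one- or two-sample eval is to catch a model that really does lose zero at some underlying rate:

# Companion sketch, same simplifying assumption as above (independent
# pass/fail samples; my framing, not the eval's): how likely is a small
# eval to show at least one failure if the model's true failure rate is p?

def p_at_least_one_failure(true_rate: float, n_samples: int) -> float:
    """Chance that n_samples independent samples show one or more failures,
    given the model's true per-sample failure rate."""
    return 1.0 - (1.0 - true_rate) ** n_samples

for rate in (0.10, 0.50, 0.90):
    for n in (1, 2):
        print(f"true rate {rate:.0%}, {n} sample(s): "
              f"{p_at_least_one_failure(rate, n):.0%} chance of seeing a failure")

A one-sample eval is very likely to flag a model that fails most of the time, but it can easily miss one with a more moderate failure rate, which is one way to square the eval results with the "my scope has been fine" reports.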
 