Is it all Leopolds

Joined
Sep 11, 2017
Messages
1,510
Location
Bozeman, MT
That's true if your goal is simply to find out whether there is or was a failure. However, the rate of failure is vastly more important than whether something has failed, as we already know all scopes can fail.

Some good info on this page, but you have to understand what you are looking at. Even the drop tests that people are taking as gold have a lot to be desired. For one, the test would really need to be repeated the same way for each scope X number of times to make the test relevant.

The info WindGypsy dropped is about as good as you're going to get for rate of failure: an instructor at a sniper school that has mostly Leupolds on issued weapon systems, seeing between a 5:1 and 10:1 failure rate. I'd love to see more numbers. How many scopes has he seen come through the school? Hundreds? Thousands? Regardless, a 5:1 to 10:1 failure rate matches up with reports from serious users across every hunting/shooting forum on the internet for years.

There are a bunch of people suggesting that the type of tracking failure some of us have seen and repeated in our Leupolds doesn't really matter for hunters. You posted in the Long Range forum, so I expect you intend to use this as a long-range tool. IMO, no amount of tracking failure is acceptable. The whole reason we're anal about our gear is to take variables off the table. Why take the chance?


Sent from my iPhone using Tapatalk
 

Nards444

FNG
Joined
Aug 30, 2023
Messages
69
The info WindGypsy dropped is about as good as you're going to get for rate of failure: an instructor at a sniper school that has mostly Leupolds on issued weapon systems, seeing between a 5:1 and 10:1 failure rate. I'd love to see more numbers. How many scopes has he seen come through the school? Hundreds? Thousands? Regardless, a 5:1 to 10:1 failure rate matches up with reports from serious users across every hunting/shooting forum on the internet for years.

There are a bunch of people suggesting that the type of tracking failure some of us have seen and repeated in our Leupolds doesn't really matter for hunters. You posted in the Long Range forum, so I expect you intend to use this as a long-range tool. IMO, no amount of tracking failure is acceptable. The whole reason we're anal about our gear is to take variables off the table. Why take the chance?


Sent from my iPhone using Tapatalk

Yeah, I agree with most of that. And no, I'm no pro, but we hunt mid to long range and our distances keep growing.

And all this stuff is a great indicator. However, we already know that any scope ever made can fail one way or another. What we really need to know is the rate of failure, and by how much, for each of the brands we're comparing. Which we'll never get.

The drop tests here are interesting and have piqued my interest. But as a skeptic of everything I read, I had a lot of questions after reading through a lot of them. Were the exact same conditions used in each test, including exact height and angle, location, substrate, atmospheric conditions, elevation, etc.? Was shooter error factored in? Ammo reliability? Now, the guy did a pretty good job with all that, and I trust a lot of the conditions were the same even if not exact. The test would be a lot more valid in my mind if each test were done, say, 10 times.
 

fwafwow

WKR
Joined
Apr 8, 2018
Messages
5,651
The drop tests here are interesting and have piqued my interest. But as a skeptic of everything I read, I had a lot of questions after reading through a lot of them. Were the exact same conditions used in each test, including exact height and angle, location, substrate, atmospheric conditions, elevation, etc.? Was shooter error factored in? Ammo reliability? Now, the guy did a pretty good job with all that, and I trust a lot of the conditions were the same even if not exact. The test would be a lot more valid in my mind if each test were done, say, 10 times.
This has been covered in other threads quite a few times. Others can probably speak to these points better than I can, but FWIW: the slight variance in conditions mimics field situations, and apart from temperature and the exact substrate, they are pretty similar. Repeating the test 10x is impractical; it's done on a volunteer basis with ammo costs donated by some RS members. Are you familiar with the testing procedures?
 
Joined
Sep 11, 2017
Messages
1,510
Location
Bozeman, MT
Yeah, I agree with most of that. And no, I'm no pro, but we hunt mid to long range and our distances keep growing.

And all this stuff is a great indicator. However, we already know that any scope ever made can fail one way or another. What we really need to know is the rate of failure, and by how much, for each of the brands we're comparing. Which we'll never get.

The drop tests here are interesting and have piqued my interest. But as a skeptic of everything I read, I had a lot of questions after reading through a lot of them. Were the exact same conditions used in each test, including exact height and angle, location, substrate, atmospheric conditions, elevation, etc.? Was shooter error factored in? Ammo reliability? Now, the guy did a pretty good job with all that, and I trust a lot of the conditions were the same even if not exact. The test would be a lot more valid in my mind if each test were done, say, 10 times.

10 repetitions on the drop tests would be awesome, no question. Form (the guy behind them) isn't a Joe Blow hunter…if anyone could come up with a test and run it properly, I'd trust him to do it.

That 5:1 to 10:1 failure figure is relative to Nightforce; they're the other company with a contract. So it's not a failure rate but a gnarly failure ratio. Imagine if a vehicle manufacturer were failing at a ratio of 5:1 or 10:1…

We don't have an exact failure rate. But as others have pointed out, most Leupold hunting scopes go onto guns that get shot a handful of times a year, at distances where a failure wouldn't even be noticed. Rokslide and other hunting/shooting forums collect the most avid members of our respective sports and disciplines, and I think that's why we see these issues crop up on forums like Rokslide. We're a large group of people who are actually testing and paying enough attention to our gear to notice something like this. And it seems like as soon as a person who knows what they're doing starts paying attention, they see issues. That Rokslide poll certainly points in that direction.


Sent from my iPhone using Tapatalk
 
Joined
May 16, 2021
Messages
1,478
Location
North Texas
My wife’s Nightforce NX8 is currently back at Nightforce for wandering zero.

Guess NF can't be trusted anymore either.


Sent from my iPad using Tapatalk Pro
 

Marbles

WKR
Classified Approved
Joined
May 16, 2020
Messages
4,582
Location
AK
That's true if your goal is simply to find out whether there is or was a failure. However, the rate of failure is vastly more important than whether something has failed, as we already know all scopes can fail.
You are conflating two different things: dismissing a single failure as a sample of one (what I am discussing), and wishing for better data.
Some good info on this page, but you have to understand what you are looking at. Even the drop tests that people are taking as gold have a lot to be desired. For one, the test would really need to be repeated the same way for each scope X number of times to make the test relevant.
No, you are conflating relevance and statistical significance. In the absence of better data, even poor data is relevant, but we must acknowledge its weakness. Statistically significant data is what you get with a large enough sample.
Yeah I agree with most of that . And yes no pro but we hunt mid long range and keep growing.

And all this stuff is a great indicator. However we already know that any scope ever made can fail one way or another. What we really need to know is rate of failure by how much and we need to know that by which brands we are comparing. Which will never get.

The drop test here are interesting and have peeked my interest. But as skeptic in everything I read, I had a lot of questions after reading through a lot them. Were the exact same conditions used on each test, to include exact height and angle, location, same substrate, atmospheric conditions, elevation etc. was shooter error factored in, ammo reliability etc. now the guy did a pretty good job with all that, and I trust we had a lot of the same conditions even if not exact. The test would be a lot more valid in my mind of each test is done say 10 times
10? Unless the failure rate is very high, 10 repetitions would not get one to statistical significance, so it would just be putting a prettier veneer on statistically invalid data.

Cost and time are limiting factors; the people with the money and the vested interest are the manufacturers. Their refusal to test gives a pretty strong data point against them when they could settle the question, particularly the large manufacturers like Leupold, Vortex, etc.

Ignoring the available data because it doesn't meet a certain standard is simply arguing with reality, which, unless one is willing to invest the effort to change it, is a waste of time.

But I've got to tell you, short of a double-blind RCT powered to find equivalence at p = 0.01, using a robot in a tunnel to shoot, one can find issues in whatever data is available. The goalposts will just be moved to let people hold onto what they want to believe.

What needs to happen is for scope makers to establish a test standard and pay to have products tested and certified. Until they do, I have no problem trashing any manufacturer when the available data suggests high failure rates.
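To put a number on why 10 repetitions wouldn't settle it, here is a minimal binomial sketch; the failure rates are illustrative assumptions, not measured data for any scope.

```python
def p_zero_failures(p_fail: float, n: int) -> float:
    """Probability that a scope with true per-test failure rate p_fail
    passes all n repetitions (binomial probability of k = 0 failures)."""
    return (1 - p_fail) ** n

# Illustrative (assumed) failure rates, checked against a 10-repetition test
for p in (0.01, 0.05, 0.20):
    print(f"true rate {p:.0%}: P(0 failures in 10 tests) = {p_zero_failures(p, 10):.1%}")
```

Even a scope that fails 5% of the time would sail through all ten repetitions about 60% of the time, so a clean 10-drop run can't distinguish a good scope from a mediocre one; that is the statistical-significance point being made above.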
 
Joined
May 16, 2021
Messages
1,478
Location
North Texas
This was pretty well covered by someone else.
But yeah: remove sponsorships, remove prize-table scopes, and then the "data" starts to look different.
Then you are showing you have not been to many (or any) PRS matches, as those guys re-zero before every match. Isn't the definition of having to re-zero the fact that the scope lost zero at some point?

So if competition shooters make their living by winning (per a previous poster a few posts back), and the scopes are such crap, then why would they continue to use them, sponsored or not?

Marketing loves to brag about winning, and if you aren't winning, you'll be dumped like a hot potato by the company sponsoring you, so I don't see how this argument holds water.


Sent from my iPad using Tapatalk Pro
 

mxgsfmdpx

WKR
Joined
Oct 22, 2019
Messages
6,259
Location
Outside
@Nards444 at the end of the day, this is a “take it out into the field and use it” kind of forum.

If you’re wanting to buy a Leupold, have at it and let us know how it goes. Include detailed use results if you could. That’s what this forum is all about.
 

mxgsfmdpx

WKR
Joined
Oct 22, 2019
Messages
6,259
Location
Outside
So if competition shooters make their living by winning (per a previous poster a few posts back), and the scopes are such crap, then why would they continue to use them, sponsored or not?

Marketing loves to brag about winning, and if you aren't winning, you'll be dumped like a hot potato by the company sponsoring you, so I don't see how this argument holds water.


Sent from my iPad using Tapatalk Pro
If you can't see that there is a large difference in use case between a PRS shooter and a backcountry, backpack-style hunter, then you don't have much of a grasp on how either of those things works.
 
Joined
Jun 12, 2019
Messages
1,726
So if competition shooters make their living by winning (per a previous poster a few posts back), and the scopes are such crap, then why would they continue to use them, sponsored or not?
Because the way the scopes are "crap" is irrelevant to PRS shooters. If their zero gets bumped during travel or whatever, it doesn't matter, since they can always re-zero before the match starts. And once the match starts, unless something goes fairly wrong, their scope won't get bumped again.
 
Joined
Sep 11, 2017
Messages
1,510
Location
Bozeman, MT
So if competition shooters make their living by winning (per a previous poster a few posts back), and the scopes are such crap, then why would they continue to use them, sponsored or not?

Marketing loves to brag about winning, and if you aren't winning, you'll be dumped like a hot potato by the company sponsoring you, so I don't see how this argument holds water.


Sent from my iPad using Tapatalk Pro

Competition shooters don't make their living by winning; they make their living from sponsors. One guy makes his living by winning.


Sent from my iPhone using Tapatalk
 

Weldor

WKR
Joined
Apr 20, 2022
Messages
1,937
Location
z
Excuse my ignorance, but don't PRS shooters hit a steel plate of a certain size per yardage, not a 1" bullseye? A wandering zero of a certain amount would not seem as critical? Please educate me.
 

Reburn

Mayhem Contributor
Joined
Feb 10, 2019
Messages
3,494
Location
Central Texas
if you aren't winning, you'll be dumped like a hot potato by the company sponsoring you, so I don't see how this argument holds water.

Or you just sponsor enough of the top 20 that you're almost guaranteed someone will podium with one of your scopes.

Plus the contingency money if you win.

Did you miss the point that they re-zero every day?

You think those guys make a living shooting PRS?
 

Macintosh

WKR
Joined
Feb 17, 2018
Messages
2,887
People are missing a critical element of the statistics in their insistence on a large sample size. If a scope has a 0.5% failure rate (so in "truly scientific" testing it would fail 0.5% of the time, or 5 times out of 1,000), the odds of two tested scopes in a row PASSING are extremely high (0.995² ≈ 99%), and you would expect to need a huge number of tests to find even one failure. But the odds of two consecutive FAILURES are infinitesimally tiny: 0.005² = 0.0025%, or 1 in 40,000. So if you test two scopes and both fail, that is much, much, much, MUCH less likely than passing twice in a row. And three failures in a row? With a truly low failure rate that would be a more-than-one-in-a-gazillion fluke (about 1 in 8 million). So if you test a couple of scopes and see multiple failures, you cannot quantify the failure rate, but you can say pretty confidently there is a problem. Statistically speaking, a low sample size with a very high failure rate is far more relevant than people are giving it credit for.

You know the saying “a good plan now is better than a perfect plan tomorrow”? The corollary to that is “some data now is better than perfect data tomorrow”.
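The asymmetry described above is easy to check directly. This sketch uses the post's hypothetical 0.5% failure rate; nothing here is a measured number for any real scope.

```python
# Hypothetical true per-scope failure rate from the post (0.5%)
P_FAIL = 0.005

def p_consecutive(outcome_prob: float, n: int) -> float:
    """Probability of n independent tests all producing the same outcome."""
    return outcome_prob ** n

p_two_passes  = p_consecutive(1 - P_FAIL, 2)  # two passes in a row
p_two_fails   = p_consecutive(P_FAIL, 2)      # two failures in a row
p_three_fails = p_consecutive(P_FAIL, 3)      # three failures in a row

print(f"P(2 passes)   = {p_two_passes:.4f}")
print(f"P(2 failures) = {p_two_fails:.6f} (1 in {1 / p_two_fails:,.0f})")
print(f"P(3 failures) = {p_three_fails:.9f} (1 in {1 / p_three_fails:,.0f})")
```

Two consecutive failures come out to 1 in 40,000 and three to 1 in 8 million, so observing repeated failures in a tiny sample is strong evidence the true failure rate is nowhere near 0.5%.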
 

Ucsdryder

WKR
Joined
Jan 24, 2015
Messages
6,775
@Nards444 at the end of the day, this is a “take it out into the field and use it” kind of forum.

If you’re wanting to buy a Leupold, have at it and let us know how it goes. Include detailed use results if you could. That’s what this forum is all about.
In theory I think you're spot on. But I also think there are a lot of people on here who parrot others. The number of people who kill 0-1 animals a year but speak as experts is way too high.

I won't buy a Leupold, but that's because they effed me on a rangefinder warranty. I will buy another LHT tho! 😜

Edit to add…not directing that toward you!
 

ianpadron

WKR
Joined
Feb 3, 2016
Messages
2,012
Location
Montana
I used to get Leupold scopes for free. I used them for comps, hunts, etc. I just "accepted" that I had to re-zero my gun often; it was just part of shooting. I even recommended them to shooters as a viable option up until a few years ago...

Two separate VX-series scopes failed me on a couple of really hard hunts; I worked my ass off and missed two good animals in an area where it isn't easy to kill higher-class animals. I went down the mountain and checked zero, and it was off, on both occasions. Both were sent back to Leupold, "repaired," and returned. I thought this just can't be the right way to go about it.

Once I started mounting my scopes per Form's method posted here, and started using models that have gone through the full evaluation, like magic, the constant "re-zeroing" disappeared. I have some guns with multiple thousands of rounds on them each, with Maven RS1.2s, SWFA 3-9s, and SWFA Fixed 6s, that haven't had to be re-zeroed even once.

Nobody is "attacking" Leupold or "following a cult"; they are trying to keep folks from living like I did for decades.
Mirrors my experience. If that makes us cult members, whatever 🤣

It's hard to describe the peace of mind that has come from knowing I can rely on my shooting systems no matter what.

Rifles and scopes are known quantities and predictable, but most guys aren't willing to put in the front end work to learn what they're actually trying to achieve with their setups.

I had a VX3i that I thought I'd broken in half during a packout a few years back. It actually held zero and was no worse for the wear. I've also owned an SWFA that didn't track right out of the box. But that doesn't mean the mountain of data indicating the opposite is untrue...the cognitive dissonance in these threads is always mind-numbing to me.

Put me down for one of those 8x50s if they ever get made 😆
 

ianpadron

WKR
Joined
Feb 3, 2016
Messages
2,012
Location
Montana
People are missing a critical element of the statistics in their insistence on a large sample size. If a scope has a 0.5% failure rate (so in "truly scientific" testing it would fail 0.5% of the time, or 5 times out of 1,000), the odds of two tested scopes in a row PASSING are extremely high (0.995² ≈ 99%), and you would expect to need a huge number of tests to find even one failure. But the odds of two consecutive FAILURES are infinitesimally tiny: 0.005² = 0.0025%, or 1 in 40,000. So if you test two scopes and both fail, that is much, much, much, MUCH less likely than passing twice in a row. And three failures in a row? With a truly low failure rate that would be a more-than-one-in-a-gazillion fluke (about 1 in 8 million). So if you test a couple of scopes and see multiple failures, you cannot quantify the failure rate, but you can say pretty confidently there is a problem. Statistically speaking, a low sample size with a very high failure rate is far more relevant than people are giving it credit for.

You know the saying “a good plan now is better than a perfect plan tomorrow”? The corollary to that is “some data now is better than perfect data tomorrow”.
Dang, someone who actually understands why the drop tests are meaningful and statistically relevant...refreshing!
 

freddyG

WKR
Joined
Jan 25, 2020
Messages
382
It's less than perfect, but much better than "some random internet guys say Leupold scopes won't hold zero," with no empirical evidence of their setups or the myriad other variables that could be at play.

They shoot long range and are tough on equipment. I'd trust SOCOM using Leupolds for 25 years more than I'd trust some guy who obviously doesn't like Leupolds.
This post reads as if it's from someone in denial. It's not that people don't like Leupold; it's that serious shooters don't trust them.

There is a reason the early BR crowd froze scopes and used external adjustments. It wasn't because scopes were dependable.

All it takes for the non-believers to be awakened is to purchase a scope checker and test their own stuff. No drop tests needed. The reason very few do this is that they already know the answer but are in denial. They are financially or emotionally invested in junk gear.
 