Is it all Leopolds

Joined
Mar 22, 2024
Messages
70
This post reads as if it’s from someone who is in denial. It’s not that people don’t like Leupold; it’s that serious shooters don’t trust them.

There is a reason the early BR crowd froze scopes and used external adjustments. It wasn’t because scopes were dependable.

All it takes for the non-believers to be awakened is to purchase a scope checker and test their own stuff. No drop tests needed. The reason so few do this is that they already know the answer but are in denial. They are financially or emotionally invested in junk gear.
The issue people take is that last part. Junk gear. You don’t like it/had a bad experience. That’s fine. But Leupold has more match wins across every type of long range shooting over the years than any other brand. That doesn’t happen with junk. I like Leupold. They’ve performed great for me from Alaska to Arizona out to 1,200 yards, and I prefer them to the Nightforce and Vortex stuff I’ve owned. The difference, though, is that I don’t call Vortex and NF junk gear. I liked the Leupold glass better than the NF I had. I had a Vortex fail on me. But those companies overall make incredibly good, top notch gear. My limited experience with them is anecdotal.
 
OP

Nards444

FNG
Joined
Aug 30, 2023
Messages
58
This has been covered in other threads, quite a few times. Others can probably speak to these points better than me, but fwiw. The slight variance in conditions mimics field situations, but apart from temperature and the exact substrate, they are pretty similar. Repeating the test 10x is impractical - it’s done on a volunteer basis with ammo costs donated by some RS members. Are you familiar with the testing procedures?
As I said above, I have read them. And yes, the variance mimics field conditions. The error here, though, is that if the ground is, say, X amount harder, the temp is 30 degrees colder, and the drop is done even half an inch higher, it could trigger a fault in one scope and maybe not another. While variance is good when testing how one scope performs in different conditions, variance isn’t good when trying to compare it to others.

Actually, testing this thousands of times is practical; that’s what manufacturers do, and even 10 isn’t good enough. But yes, this is a guy doing it on the side and devoting his own time to it, so it wouldn’t be practical for him. However, one test in uncontrolled conditions makes the results interesting but impractical to draw a conclusion from.
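For what it’s worth, here is a rough sketch of why sample size matters so much here. The test counts are purely illustrative, not anyone’s actual numbers: if you run n drop tests and see zero failures, the classic “rule of three” puts a ~95% upper bound on the true failure rate at roughly 3/n.

```python
# Rough sketch (illustrative numbers only): if you run n drop tests and see
# ZERO failures, how high could the true failure rate still plausibly be?
# With zero failures in n independent tests, the exact 95% upper bound on the
# failure rate p solves (1 - p)**n = 0.05; the "rule of three" approximates it as 3/n.
for n in (1, 10, 100, 3000):
    upper = 1 - 0.05 ** (1 / n)  # exact 95% upper bound on the failure rate
    print(f"{n:>5} clean tests -> true failure rate could still be as high as ~{upper:.1%}")
# 1 test -> ~95.0%, 10 tests -> ~25.9%, 100 tests -> ~3.0%, 3000 tests -> ~0.1%
```

Which is why a handful of passes says very little, while manufacturer-scale test counts can actually pin the rate down.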
 
Last edited:

Reburn

Mayhem Contributor
Joined
Feb 10, 2019
Messages
3,430
Location
Central Texas
The issue people take is that last part. Junk gear. You don’t like it/had a bad experience. That’s fine. But Leupold has more match wins across every type of long range shooting over the years than any other brand. That doesn’t happen with junk.

Ford made 3.1 million Pintos. People use junk for all kinds of reasons. Doesn’t mean it’s not junk.

Keep using junk scopes. Everyone here is fine with it. Sooner or later it will probably cost you.
 
OP

Nards444

FNG
Joined
Aug 30, 2023
Messages
58
People are missing a critical element of the statistics in their insistence on a large sample size. If a scope has a 0.5% failure rate (so in “truly scientific” testing it would fail 0.5% of the time, or 5 times out of 1,000), do you know what the calculated odds are of having TWO tested scopes in a row PASS? They’re extremely high, and you would expect to need a huge number of tests to find even ONE failure. BUT, now calculate the odds of getting TWO consecutive failures in a row: infinitesimally tiny, ridiculously low. So if you test 2 scopes and BOTH of them fail, statistically that is much, much, much, MUCH less likely than passing twice in a row. And if you did it three times in a row, well, with a truly low failure rate that would simply be a “more than one in a gazillion” fluke. So if you test a couple of scopes and have multiple failures, you cannot quantify the failure rate, but you can say pretty confidently there is a problem. Statistically speaking, a low sample size with a very high failure rate is far more relevant than people are giving it credit for.

You know the saying “a good plan now is better than a perfect plan tomorrow”? The corollary to that is “some data now is better than perfect data tomorrow”.
I think, like many have said, we agree there is a problem. The real question is how big of a problem it is and whether it matters. We will never know, because these tests won’t give us that, and neither will the manufacturers.

What you’re saying is true, but it’s also like saying I won the lotto. While true, it doesn’t mean I can statistically repeat it, nor does it correlate to anybody else winning the lotto.

Again, great stuff and a great test. It’s just far from conclusive.
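For what it’s worth, here is a quick sketch of the arithmetic in the quote above. The 0.5% failure rate is just the illustrative number from that post, not a measured figure for any brand.

```python
# Quick sketch of the probabilities described above, using the quote's
# illustrative per-scope failure rate of 0.5% (not a measured number).
p_fail = 0.005
p_pass = 1 - p_fail

print(f"Two scopes in a row PASS: {p_pass ** 2:.4f}")   # ~0.9900
print(f"Two scopes in a row FAIL: {p_fail ** 2:.8f}")   # 0.00002500 (1 in 40,000)
print(f"Three failures in a row:  {p_fail ** 3:.2e}")   # 1.25e-07 (about 1 in 8 million)

# Flip side: at that rate you'd expect to run on the order of
# 1 / 0.005 = 200 drop tests before seeing even a single failure.
```

That is the quoted poster’s point: a couple of failures out of a couple of tests is wildly unlikely if the true failure rate really is that low, even though the small sample can’t tell you exactly how high the rate actually is.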
 

fwafwow

WKR
Joined
Apr 8, 2018
Messages
5,552
As I said above, I have read them. And yes, the variance mimics field conditions. The error here, though, is that if the ground is, say, X amount harder, the temp is 30 degrees colder, and the drop is done even half an inch higher, it could trigger a fault in one scope and maybe not another. While variance is good when testing how one scope performs in different conditions, variance isn’t good when trying to compare it to others.

Actually, testing this thousands of times is practical; that’s what manufacturers do, and even 10 isn’t good enough. But yes, this is a guy doing it on the side and devoting his own time to it, so it wouldn’t be practical for him. However, one test in uncontrolled conditions makes the results interesting but impractical to draw a conclusion from.
I don’t disagree with any of that. Unfortunately no manufacturers do any sort of testing that’s even close to what this is trying to accomplish. Some say they test for G forces (Tract - but I’ve never been able to find any details, and I think Leupold has a video of something like that, but short on info). If there are tests out there, I’m unaware of them.

So for me the tests fit the cliche of don’t let perfect be the enemy of good.
 
OP

Nards444

FNG
Joined
Aug 30, 2023
Messages
58

I don’t disagree with any of that. Unfortunately no manufacturers do any sort of testing that’s even close to what this is trying to accomplish. Some say they test for G forces (Tract - but I’ve never been able to find any details, and I think Leupold has a video of something like that, but short on info). If there are tests out there, I’m unaware of them.

So for me the tests fit the cliche of don’t let perfect be the enemy of good.

Across the board, you won’t find manufacturers releasing torture test results on just about anything. I wish more would, and if I were a manufacturer and was confident in my product, I would.
 