Why can't people accept the fact that some people don't need a drop tested scope?

Ratbeetle

WKR
Joined
Jul 20, 2018
Messages
1,141
That's the point of people doing this on their own, with their own stuff, and their own shooting skill. I think it is pretty eye opening to take scope A, B, and C on the same day, same procedure, same rifle, same ammo, same shooter, etc., and see how they differ in their results.

It also shouldn't cause damage to anything and I think the evidence to that is when some scope brands have inspected their scopes afterward and said all is well.

This is where I get hung up on folks from both sides of the topic. Just go test your stuff and see what you learn rather than typing about it.
I agree, people should be testing their own gear. It's really the only way to know how it's going to perform.
 

atmat

WKR
Joined
Jun 10, 2022
Messages
3,225
Location
Colorado
I get what you're trying to say but I don't think it's a very good analogy to these scope tests. The fact is that hundreds if not thousands of people have burned up those light duty trucks doing exactly what you described, that's how we know it's a bad idea to put a 10000lb load on a ranger powertrain. For analogy to be equivalent, you'd need more data.

All I'm saying is that one example does not make a statistically important revelation. The tested scope could be an anomaly...or not. But one test doesn't tell you anything about the toughness of a line of scopes, it just doesn't. I'm not sure how that's controversial.

If a guy wants to buy or not buy based on a single internet test that they weren't present for, go for it. It's not my money.
I get what you're saying, and I understand statistical sampling and significance. No one is arguing that the drop tests should be the highest authority here.

Single drop tests are better at identifying poorly-designed scopes than they are at proving the best-designed scopes. Humor me with some simple math.

  • Scope A has a 1% failure rate (meaning 1 of 100 units from the factory won't pass the drop test). The probability of you receiving a scope that fails the drop test is low. You can test several scopes and it's unlikely you'll ever get a dud. You can pretty confidently rule out this scope being poorly-designed, but you can't prove how well-designed it is until you start increasing sample size.
    • But what happens if you do beat the odds and get a failed scope? Well, test another one. The odds of two failing back-to-back are 0.01%. The odds of three failing back-to-back are 0.0001%.

  • Scope B has a 99% failure rate. The probability of you receiving a scope that will fail the test is high. You can test several scopes and it's unlikely you'll ever get a pass. You can pretty quickly rule that this scope is poorly-designed.

(Of course, all of this hinges on the definition of design success being able to pass the drop test. I'd argue that yes, a scope should be able to keep zero after a handful of drops on soft padding. But I know others will disagree with that.)
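The back-to-back odds in the post above are just the per-unit failure rate raised to the number of consecutive tests (assuming independent units; the 1% rate is the post's hypothetical, not a real manufacturer figure). A quick sketch:

```python
# Hypothetical sketch of the consecutive-failure odds described above.
# Assumes each unit is an independent sample with the same failure rate.

def consecutive_failure_odds(failure_rate: float, n: int) -> float:
    """Probability that n independently sampled units all fail the test."""
    return failure_rate ** n

# Scope A from the post: 1% per-unit failure rate
for n in (1, 2, 3):
    print(f"{n} in a row: {consecutive_failure_odds(0.01, n):.6%}")
# 1 in a row: 1.000000%
# 2 in a row: 0.010000%
# 3 in a row: 0.000100%
```

Which matches the 0.01% and 0.0001% figures above: each additional consecutive failure multiplies the odds by another 1%.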
 

Ratbeetle

WKR
Joined
Jul 20, 2018
Messages
1,141
If their QC is so bad that different scopes of the same model vary in reliability, idk why you would buy one.
That's the rub. Testing one scope doesn't necessarily tell you much about QC. Anyone can put out a lemon. It happens. It may be rare but it happens, especially if we take into account volume. Some manufacturers produce much, much more than others. That would by default mean more bad units given a similar failure rate.

I don't know, I guess I'm more willing to chance it and test my own stuff.
 

Ratbeetle

WKR
Joined
Jul 20, 2018
Messages
1,141
I get what you're saying, and I understand statistical sampling and significance. No one is arguing that the drop tests should be the highest authority here.

Single drop tests are better at identifying poorly-designed scopes than they are at proving the best-designed scopes. Humor me with some simple math.

  • Scope A has a 1% failure rate (meaning 1 of 100 units from the factory won't pass the drop test). The probability of you receiving a scope that fails the drop test is low. You can test several scopes and it's unlikely you'll ever get a dud. You can pretty confidently rule out this scope being poorly-designed, but you can't prove how well-designed it is until you start increasing sample size.
    • But what happens if you do beat the odds and get a failed scope? Well, test another one. The odds of two failing back-to-back are 0.01%. The odds of three failing back-to-back are 0.0001%.

  • Scope B has a 99% failure rate. The probability of you receiving a scope that will fail the test is high. You can test several scopes and it's unlikely you'll ever get a pass. You can pretty quickly rule that this scope is poorly-designed.

(Of course, all of this hinges on the definition of design success being able to pass the drop test. I'd argue that yes, a scope should be able to keep zero after a handful of drops on soft padding. But I know others will disagree with that.)
I don't disagree, but you're discounting volume. How many VX3s are in the wild vs SWFA 6Xs?

I don't know production numbers, nor do I really care to; I'm way deeper into this than I planned to be. Ten times as many? Probably more, but it doesn't really matter.

But let's just say a 1% failure rate on 100,000 units versus 1% on 10,000 units. That's 900 more chances to end up with a bad unit. They have the same failure rate, so is one worse, or do you just have a greater chance at hitting that 1%?
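The arithmetic behind the "900 more" figure (using the post's hypothetical production numbers, not real ones):

```python
# Quick check of the volume comparison above: same 1% failure rate,
# different (hypothetical) production run sizes.
rate = 0.01
big_run, small_run = 100_000, 10_000

bad_big = int(big_run * rate)      # 1,000 expected bad units
bad_small = int(small_run * rate)  # 100 expected bad units
print(bad_big - bad_small)         # 900
```

So the larger run puts 900 more bad units into the wild in absolute terms, even though any single buyer's chance of drawing one is identical.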

Again, I'm just saying one test doesn't mean much to me. If other guys find it helpful, great.
 

5811

WKR
Joined
Jan 25, 2023
Messages
623
All I'm saying is that one example does not make a statistically important revelation. The tested scope could be an anomaly...or not. But one test doesn't tell you anything about the toughness of a line of scopes, it just doesn't. I'm not sure how that's controversial.
We aren't talking about pulling a random, nameless scope out of a bag while blindfolded and testing it in a vacuum. To just pound the table about statistics is overly simplistic.

We know scopes engineered and produced by companies with a certain level of durability in mind, tend to pass. They are designed to. They don't pass by accident.

Companies that don't care to design and manufacture scopes to that level of durability, don't make scopes that pass. If you think all scope companies have the same emphasis on the same durability standards, I would disagree.

These things are not insignificant when drawing conclusions. Some pens are designed to be able to write upside down or in zero gravity. Some aren't. I don't need to test thousands of each to prove the designed outcomes.

If you want to say you don't need to write upside down, fine. I don't either. But that doesn't mean the difference doesn't exist. Similarly, not testing 100 of each also doesn't mean they are the same.
 

atmat

WKR
Joined
Jun 10, 2022
Messages
3,225
Location
Colorado
But let's just say a 1% failure rate on 100000 units versus 1% on 10000 units. 900 more chances to end up with a bad unit. They have the same failure rate so is one bad or do you just have a greater chance at that 1%.
You don't end up with "more chances to end up with a bad unit." That's not how probability works. You still have exactly the 1% chance (i.e., probability) of ending up with the bad unit.

Let's say you have 100 balls in a bag, of which 1 is blue. If you reach in and randomly grab one, you have a 1% chance of grabbing blue.

Now let's say you have 1,000 balls in a bag, of which 10 are blue. If you reach in and randomly grab one, you have a 1% chance of grabbing blue.

Yes, there are 10x more blue balls in the larger bag. But there are also 10x more non-blue balls.
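A tiny simulation of the balls-in-a-bag example above makes the same point empirically: the draw probability depends on the blue *fraction*, not the bag size (sample sizes and seed here are arbitrary choices for the sketch):

```python
# Simulate drawing one ball from a bag; the blue fraction, not the
# bag size, sets the probability of drawing blue.
import random

def draw_blue_rate(total: int, blue: int, trials: int = 100_000) -> float:
    """Estimate P(blue) by repeated single draws from the bag."""
    bag = ["blue"] * blue + ["other"] * (total - blue)
    hits = sum(random.choice(bag) == "blue" for _ in range(trials))
    return hits / trials

random.seed(0)
print(draw_blue_rate(100, 1))    # ≈ 0.01
print(draw_blue_rate(1000, 10))  # ≈ 0.01
```

Both bags come out at roughly 1%, ten times more blue balls notwithstanding.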
 

Ratbeetle

WKR
Joined
Jul 20, 2018
Messages
1,141
You don't end up with "more chances to end up with a bad unit." That's not how probability works. You still have exactly the 1% chance (i.e., probability) of ending up with the bad unit.

Let's say you have 100 balls in a bag, of which 1 is blue. If you reach in and randomly grab one, you have a 1% chance of grabbing blue.

Now let's say you have 1,000 balls in a bag, of which 10 are blue. If you reach in and randomly grab one, you have a 1% chance of grabbing blue.

Yes, there are 10x more blue balls in the larger bag. But there are also 10x more non-blue balls.
You're correct. Poor phrasing on my part.
 
Joined
Jun 17, 2016
Messages
1,316
Location
ID
I have my regimen for verifying turrets, adjustments, return to zero, etc. Never drop tested my scope. Not a horrible idea. I have tripped/fell a few times. I hunt back country. Might be nice to know my scope can take a beating and still hold zero, not for my sake, but for an ethical shot.
 

atmat

WKR
Joined
Jun 10, 2022
Messages
3,225
Location
Colorado
You're correct. Poor phrasing on my part.
No worries. So then the “factor in volume” argument falls apart, as we’re dealing with probabilities.

Which goes back to the initial principle:
  • Single drop tests are better at identifying poorly-designed scopes than they are at proving the best-designed scopes
 

Ratbeetle

WKR
Joined
Jul 20, 2018
Messages
1,141
No worries. So then the “factor in volume” argument falls apart, as we’re dealing with probabilities.

Which goes back to the initial principle:
  • Single drop tests are better at identifying poorly-designed scopes than they are at proving the best-designed scopes
Sure, I'll concede that point. You changed my mind. And while your initial principle stands, that doesn't mean these tests are actually good at identifying poorly designed scopes.

I do believe volume matters but perhaps not in the way I initially thought.
 

Tod osier

WKR
Joined
Sep 11, 2015
Messages
1,717
Location
Fairfield County, CT -> Sublette County, WY
Sure, I'll concede that point. You changed my mind. And while your initial principle stands, that doesn't mean these tests are actually good at identifying poorly designed scopes.

I do believe volume matters but perhaps not in the way I initially thought.

The test is excellent for identifying poorly designed scopes.

You may not like the answer, but the mental gymnastics you are going through to try to convince yourself and others is telling.
 

Ratbeetle

WKR
Joined
Jul 20, 2018
Messages
1,141
The test is excellent for identifying poorly designed scopes.

You may not like the answer, but the mental gymnastics you are going through to try to convince yourself and others is telling.
Wow, I'm convinced. Great argument!
 