General RE LEE · WKR · Joined Dec 28, 2019 · Messages: 1,876
Anybody ever quantify lost production/revenue for employees on Rokslide?
I'm on a conference call...multi-tasking.
> That's the point of people doing this on their own, with their own stuff, and their own shooting skill. I think it is pretty eye opening to take scope A, B, and C on the same day, same procedure, same rifle, same ammo, same shooter, etc., and see how they differ in their results.
> It also shouldn't cause damage to anything, and I think the evidence of that is when some scope brands have inspected their scopes afterward and said all is well.
> This is where I get hung up on folks from both sides of the topic. Just go test your stuff and see what you learn rather than typing about it.

I agree, people should be testing their own gear. It's really the only way to know how it's going to perform.
> I get what you're trying to say, but I don't think it's a very good analogy to these scope tests. The fact is that hundreds if not thousands of people have burned up those light duty trucks doing exactly what you described; that's how we know it's a bad idea to put a 10000lb load on a Ranger powertrain. For the analogy to be equivalent, you'd need more data.

I get what you're saying, and I understand statistical sampling and significance. No one is arguing that the drop tests should be the highest authority here.
All I'm saying is that one example does not make a statistically important revelation. The tested scope could be an anomaly...or not. But 1 test doesn't tell you anything about the toughness of a line of scopes, it just doesn't. I'm not sure how that's controversial.
If a guy wants to buy or not buy based on a single internet test that they weren't present for, go for it. It's not my money.
> If their QC is so bad that different scopes of the same model vary in reliability, idk why you would buy one.

Lol. If you say so. It's a mechanical device and anything can happen. A sample of 1 means nothing. But by all means. You do you.
> If their QC is so bad that different scopes of the same model vary in reliability, idk why you would buy one.

That's the rub. Testing one scope doesn't necessarily tell you much about QC. Anyone can put out a lemon. It happens. It may be rare, but it happens, especially if we take into account volume. Some manufacturers produce much, much more than others. That would by default mean more bad units given a similar failure rate.
> I get what you're saying, and I understand statistical sampling and significance. No one is arguing that the drop tests should be the highest authority here.

I don't disagree, but you're discounting volume. How many VX3s are in the wild vs SWFA 6Xs?
Single drop tests are better at identifying poorly-designed scopes than they are at proving the best-designed scopes. Humor me with some simple math.
- Scope A has a 1% failure rate (meaning 1 of 100 units from the factory won't pass the drop test). The probability of you receiving a scope that fails the drop test is low. You can test several scopes and it's unlikely you'll ever get a dud. You can pretty confidently conclude that this scope isn't poorly-designed, but you can't prove how well-designed it is until you start increasing sample size.
- But what happens if you do beat the odds and get a failed scope? Well, test another one. The odds of two failing back-to-back are 0.01%. The odds of three failing back to back are 0.0001%.
- Scope B has a 99% failure rate. The probability of you receiving a scope that will fail the test is high. You can test several scopes and it's unlikely you'll ever get a pass. You can pretty quickly rule that this scope is poorly-designed.
(Of course, all of this hinges on the definition of design success being able to pass the drop test. I'd argue that yes, a scope should be able to keep zero after a handful of drops on soft padding. But I know others will disagree with that.)
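The back-to-back odds above are easy to check with a quick sketch (the 1% and 99% failure rates are the hypothetical figures from the post, not real data on any brand):

```python
# Hypothetical failure rates from the Scope A / Scope B example above.
scope_a_fail = 0.01   # Scope A: 1% of units fail the drop test
scope_b_fail = 0.99   # Scope B: 99% of units fail

def all_fail(p_fail, n):
    """Probability that n independently sampled units all fail back-to-back."""
    return p_fail ** n

print(f"Scope A, 2 duds in a row: {all_fail(scope_a_fail, 2):.4%}")   # 0.0100%
print(f"Scope A, 3 duds in a row: {all_fail(scope_a_fail, 3):.6%}")   # 0.000100%
print(f"Scope B, 3 passes in a row: {all_fail(1 - scope_b_fail, 3):.6%}")
```

Same arithmetic as in the post: 1% x 1% = 0.01%, and 1% cubed = 0.0001%.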
> All I'm saying is that one example does not make a statistically important revelation. The tested scope could be an anomaly...or not. But 1 test doesn't tell you anything about the toughness of a line of scopes, it just doesn't. I'm not sure how that's controversial.

We aren't talking about pulling a random, nameless scope out of a bag while blindfolded and testing it in a vacuum. To just pound the table about statistics is overly simplistic.
> But let's just say a 1% failure rate on 100000 units versus 1% on 10000 units. 900 more chances to end up with a bad unit. They have the same failure rate, so is one bad, or do you just have a greater chance at that 1%?

You don't end up with "more chances to end up with a bad unit." That's not how probability works. You still have exactly the 1% chance (i.e., probability) of ending up with the bad unit.
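To put numbers on the volume point (using the hypothetical 1% rate and the 100,000 vs 10,000 unit counts from the exchange above): the expected total number of duds differs, but the chance that your particular unit is a dud doesn't.

```python
fail_rate = 0.01  # same hypothetical 1% failure rate for both makers

for units in (100_000, 10_000):
    expected_duds = units * fail_rate           # more volume -> more total duds
    p_yours_is_dud = expected_duds / units      # ...but the per-unit odds don't move
    print(f"{units:>7} units -> ~{expected_duds:.0f} duds, "
          f"P(yours is a dud) = {p_yours_is_dud:.0%}")
```

The big maker does ship ~900 more duds in absolute terms, but the buyer's probability is 1% either way.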
> You don't end up with "more chances to end up with a bad unit." That's not how probability works. You still have exactly the 1% chance (i.e., probability) of ending up with the bad unit.

You're correct. Poor phrasing on my part.
Let's say you have 100 balls in a bag, of which 1 is blue. If you reach in and randomly grab one, you have a 1% chance of grabbing blue.
Now let's say you have 1,000 balls in a bag, of which 10 are blue. If you reach in and randomly grab one, you have a 1% chance of grabbing blue.
Yes, there are 10x more blue balls in the larger bag. But there are also 10x more non-blue balls.
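The two-bag analogy is easy to sanity-check with a simulation (the counts are the ones from the post: 1 blue of 100, and 10 blue of 1,000):

```python
import random

random.seed(42)  # reproducible draws

def p_blue(n_blue, n_total, trials=100_000):
    """Estimate the chance of drawing a blue ball from a bag by simulation."""
    bag = ["blue"] * n_blue + ["other"] * (n_total - n_blue)
    hits = sum(random.choice(bag) == "blue" for _ in range(trials))
    return hits / trials

print(p_blue(1, 100))     # ~0.01
print(p_blue(10, 1_000))  # ~0.01 -- same odds despite the bigger bag
```

Both estimates come out around 1%, regardless of bag size.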
> Yes, there are 10x more blue balls in the larger bag. But there are also 10x more non-blue balls.

Blue balls?
> You're correct. Poor phrasing on my part.

No worries. So then the “factor in volume” argument falls apart, as we’re dealing with probabilities.
Which goes back to the initial principle:
- Single drop tests are better at identifying poorly-designed scopes than they are at proving the best-designed scopes
Sure, I'll concede that point. You changed my mind. And while your initial principle stands, that doesn't mean these tests are actually good at identifying poorly designed scopes.
I do believe volume matters but perhaps not in the way I initially thought.
> The test is excellent for identifying poorly designed scopes.
> You may not like the answer, but the mental gymnastics you are going through to try to convince yourself and others is telling.

Wow, I'm convinced. Great argument!
> Wow, I'm convinced. Great argument!

Sometimes people have to hear they are wrong from multiple people for them to pause and think about something.