Questioning the "Gold Standard Drop Test" and the conclusions of "this scope brand does/doesn't hold zero"

To be honest, most shooters don't think twice about scope failures unless they frequent the Rokslide optics forum or have had a personal, unforgivable failure. Most assume that all scopes will work as advertised, and that if they have problems, they can just send them back for warranty. Well, warranty doesn't feed the bulldog when you have a failure in the backcountry on a "hunt of a lifetime," as my wife calls them. People assume, and rightly so, that the food they eat won't make them sick or worse. Kinda like a scope doing what it is advertised to do.

Most scopes are bought for a combination of factors: influencers, price, purpose, friends or acquaintances having one, warranty, a salesperson, and whatever other reasons I am missing. Durability in case "shit happens" generally hasn't been one of the factors considered by the vast majority of rifle scope purchasers.

A good chunk of the rifle hunters who spend time on Rokslide realize, thanks to Form and others, that durability has been a much-overlooked feature. Look at the S2H scope: durability was an absolute priority, probably the overriding priority, for good reason. Is the measuring stick perfect? Probably not, but all indications are that it generally works, even with a small sample.
 
Bingo. So let's beat the shit out of our optics with a specified protocol, consolidate and publish our data, then send the scopes back to the manufacturers for warranty. Then sell them, and buy the scopes that hold up to the highest impact data that fits the application. Force the companies to start listening - or better yet, who gives a shit; we just support the ones who do.

Seriously.

(I know this is a long game strategy, and I know this is what we're at least getting a taste of from the work already done).
I think you give these hunting/shooting forums too much credit. IMO the optics sellers don't give a rip about them, because 95+% of all hunting kills are under 300 yards, making all of this LR dialing, etc., mostly irrelevant. Leupold and Vortex sell millions of dollars of scopes in spite of LDS and VDS.
 
To be honest, most shooters don't think twice about scope failures unless they frequent the Rokslide optics forum or have had a personal, unforgivable failure. Most assume that all scopes will work as advertised, and that if they have problems, they can just send them back for warranty. Well, warranty doesn't feed the bulldog when you have a failure in the backcountry on a "hunt of a lifetime," as my wife calls them. People assume, and rightly so, that the food they eat won't make them sick or worse. Kinda like a scope doing what it is advertised to do.

Most scopes are bought for a combination of factors: influencers, price, purpose, friends or acquaintances having one, warranty, a salesperson, and whatever other reasons I am missing. Durability in case "shit happens" generally hasn't been one of the factors considered by the vast majority of rifle scope purchasers.

A good chunk of the rifle hunters who spend time on Rokslide realize, thanks to Form and others, that durability has been a much-overlooked feature. Look at the S2H scope: durability was an absolute priority, probably the overriding priority, for good reason. Is the measuring stick perfect? Probably not, but all indications are that it generally works, even with a small sample.
But I know there are enough of us who do care to make an impact. Honestly, huge credit to Form here, I think he’s individually making a huge splash.
Yes, the masses just walk up to the counter and buy what fits their budget, what looks cool, brands they recognize, or whatever the guy behind the counter says to buy, without really thinking through the nuances of a well-educated optics purchase. Here we all are, running down the rabbit hole. There are thousands upon thousands of views of each of the optics reviews here - of course, a fraction of the market, but still, the optimist in me thinks it's big enough to make a ripple, and important enough to us to expand on the information available.
 
To the point of the person who said the masses don't care.

How many times have you asked a buddy or acquaintance, "What kind of scope do you have on your rifle?" and they reply with the brand of the scope, and that's all they've got.

Drives me nuts.
 
but it seems like two obvious ways to standardize the test could be to 1) use the same set of "bombproof" rings (whichever brand that is, for both 1" and 30mm scopes), and 2) use the same short- and/or long-action rifle. Then at least the only thing changing among tests is the scope...

Tell us you don’t know anything about the scope evals, without saying you don’t know anything about the scope evals.
 
Agree, agree.
I see (now much more than before) the field 'validity' of the drop test. Though internally I grind on the point-of-impact variation; i.e., the luck that one strike was positioned to cause a shift in zero vs. another that wasn't. IDK, overthinking, but that's what I'm here for.

Without looking- line out what the scope eval is, from start to finish.
 
To the point of the person who said the masses don't care.

How many times have you asked a buddy or acquaintance, "What kind of scope do you have on your rifle?" and they reply with the brand of the scope, and that's all they've got.

Drives me nuts.
Just as valid, or more so, as buying a scope because some guy on the internet said so, isn't it?
 
…with a test prone to error.
The critical point, I think - but IS IT? You have pointed out a few places where the eval method could be more consistent. But you haven't shown anything to suggest the results are actually "prone to error"; I think this is simply an assumption on your part. I agree the variation in surface, and the lack of control over exactly what point takes the impact, is not ideal and begs the question... yet when the evals are performed multiple times, by multiple people, on different individual scopes, the actual results seem to be quite repeatable. So what evidence do you have that the results are prone to error?


Bingo. So let's beat the shit out of our optics with a specified protocol, consolidate and publish our data, then send the scopes back to the manufacturers for warranty. Then sell them, and buy the scopes that hold up to the highest impact data that fits the application. Force the companies to start listening - or better yet, who gives a shit; we just support the ones who do.

Seriously.

(I know this is a long game strategy, and I know this is what we're at least getting a taste of from the work already done).
How exactly does this suggestion differ from what's already being done?

Seriously, I don't disagree that the eval protocol could be more standardized without adding complexity. Standardizing the drop surface to a specific foam/thickness on top of a standard surface would be one way. But what I'm ultimately left with is: until there is a better option, of course I'll use the drop evals (and verify on my own), because I have nothing to lose and everything to gain. (ETA: because I personally have had multiple scope failures where it was clear the scope itself failed in normal use, doing nothing is an unacceptable solution for me.)

So…do you have a better option? I think the eval-vs-no-eval thing is beaten to death. As is the "eval could be better" thing. Maybe there's a conversation to be had if you have specific, actionable suggestions on exactly how to make it better with the resources available now? If so, fire away.
 
I mean, the whole point is to show whether scopes can hold up to field wear and tear, and to reward companies who make a product that does via purchases. I believe people have reached out to certain companies and have been ignored… hence the S2H partnership to make a scope that works, and the wide array of companies or scopes that passed which are highly recommended here.

Furthermore, marketing drives sales. Rokslide and maybe one other forum harp on drop testing - maybe 70-150k users in a country with hundreds of millions of guns. The big companies couldn't care less about 150k users… so a company will continue to sling scopes instead of redesigning them, or retooling their machines to meet the specs of the 50 thousand or so of us, at a price point we will actually spend money on…
 
1. On the contrary... that's not what I said, nor am I saying... I made clear the failures of the drop test are perfectly accurate for that optic; making broader generalizations from the results (positive or negative tests) requires making a lot of assumptions.
Not clear enough for my pea brain. You said
An N of one with a suspect, variable/inconsistent study method is, to that scope manufacturer, quite an injustice.
And
So positive predictive value of failure - yes, agree, obvious - but we didn't get there from an N of one - with a test prone to error.
And
In the OP, I state the test does have a 100% true positive rate to detect failure; we agree there from the get-go. But what's the false negative rate? Is the N of one representative of the optic line? That is my concern with the - not statistical significance - but statistical validity. If we don't have the latter, we can't know the former.

I think you are playing with words and trying to swap n for N.

The drop test is a minimum. Some scopes might get an unjust pass, but being a minimum, there are no unjust failures. Any test variation is in the manufacturer's favor.

For attribute data on a binomial distribution (pass/fail), if you want 95/95 (95% confidence that 95% will pass, i.e. an allowable fail rate of 5% - a pretty low bar), then you need to test 59 samples with zero failures (C=0). If you plan for a single failure (C=1), then you need to test 93 samples. Remember, for statistical validity you must establish test parameters prior to testing. So if your plan was C=0 and one failed, you have to test an additional 90 (149 total) with no failures to reach 95/95. If the first two fail, you have garbage not worth testing further.

A single-sample fail is statistically significant and statistically valid. A single-sample pass is an anecdote likely to instill false confidence. Complaining about an n of one being unfair in the case of failure is to not understand probability. Stating that a failure cannot be generalized to the brand AND model tested is equally incorrect.

Also, you keep typing "N of one." N means the entire population, so if N is 1, then you have 100% confidence from testing the only member of the population. You mean an n of one, because n is a sample of N.
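The 95/95 sample sizes above (59 at zero failures, 93 at one failure) fall straight out of the binomial tail. A minimal sketch in Python, standard library only, reproduces them; the function name `min_samples` is mine, not from any post:

```python
import math

def min_samples(confidence=0.95, reliability=0.95, allowed_failures=0):
    """Smallest n such that seeing at most `allowed_failures` failures out of n
    demonstrates `reliability` at `confidence` (binomial success-run plan):
    the chance of that few failures from a lot that truly fails at rate
    (1 - reliability) must not exceed (1 - confidence)."""
    p_fail = 1.0 - reliability
    alpha = 1.0 - confidence
    n = allowed_failures + 1
    while True:
        # P(X <= allowed_failures) for X ~ Binomial(n, p_fail)
        tail = sum(math.comb(n, k) * p_fail**k * (1.0 - p_fail)**(n - k)
                   for k in range(allowed_failures + 1))
        if tail <= alpha:
            return n
        n += 1

print(min_samples(allowed_failures=0))  # 59: the C=0 plan from the post
print(min_samples(allowed_failures=1))  # 93: the C=1 plan

# And why a single PASS proves little: a scope line that truly failed 30%
# of the time would still pass a single drop test 70% of the time.
```

Flipped around, this is the n-of-one point: a single failure is already informative because the plan tolerates failures only at the 5% rate, while a single pass barely moves the confidence at all.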

To liken this to a clinical drug trial - not sure why we would make that analogy; it seems a bit of a deviation from what this is, and I struggle to follow the comparison (other than that it too is a way of testing something). Anyhow, I'll engage it and repeat that I'm merely suggesting the methods of this "phase 1 test" COULD be augmented to perhaps remove error.
Didn't you claim your scientific knowledge came from discussing medical research, which includes trials? If the analogy cannot be made or is poor, then your experience/education has no or very limited carryover. Also, I didn't say drugs; clinical trials apply to devices as well. Analogies are supposed to pull on what someone knows to help them understand something they don't.

Phase 1 is a small trial to establish basic safety and suitability. It is a perfectly applicable concept. A passed Phase 1 does not mean very much; a failed Phase 1 means a whole lot. Same for the drop test.
 
I remember.
“Stay home until you can’t breathe “
“Don’t take ivermectin, hydro chloroquine (sp) or any steroids. Just wait until you can’t breathe and go to the ER.”
This “protocol” was treated as gospel and any deviation from it was ridiculed.
Of course, we know now that it was actually a propaganda campaign to "prove" that other effective treatments were NOT available for Covid…which was a necessary condition for the experimental mRNA vaccines to be emergency approved. TRILLIONS OF DOLLARS AT STAKE.
Pretty much the entire medical community enthusiastically participated in this hoax.
Then the ONLY approved government/hospital treatment was Fauci’s pet drug remdesivir…quickly followed by intubation.
It also was a hoax. It was a complete failure…pretty much a death sentence.
To die alone because family was not allowed.
So, am I still pissed off at the medical profession?
You bet I am.
So when a “medical professional” struts into a hunting forum spouting his scientific expertise and poo-pooing real life facts? Yea, that rubs me the wrong way.
Are there still good people in the medical field? Of course.
This isn’t a blanket condemnation.
But an arrogant surgeon spouting his scientific theories just sticks in my craw.
His ego just leaks thru it all…
I think your ego is also crazy just sayin
 
Dude is an ortho surgeon. Let me help.

The drop test approved scopes are Ancef. Then there’s everything else.

We have other abx that should work, but do you trust them? Have they let you down? Did you question if that was why you had a post op infection?

You know you want Ancef no matter what.

Allergic to Ancef? Did you die? No? You get Ancef.
 
Honestly, anything coming from someone in the medical profession right now sticks in my craw. Scientific conclusions? Like that whole Covid-origin and mRNA-vaccine-safety BS pushed by the whole medical community?
I’m sorry but anything from a doctor these days doesn’t impress me much.
No credibility.
🤡
 
I'm just a simple engineer. I do have some patents and work in new product development, since our credentials matter 😅

One can get in the weeds of statistical significance and controlling all of the variables and whatever else, but is that really worth the time/money/brainpower? We ain't got time for that, boys, there's money to make 💪

I'm a cheap and lazy SOB. My groups are certainly not big enough. I'm not shooting 30-round groups to set my zero. If I shoot 5-7 and everything behaves, I'm happy. If we go back to basic scientific report writing....

My hypothesis is that "my scope is zeroed." And with every shot after that, I either "fail to reject my hypothesis" or "reject my hypothesis," and then we start over. I feel like my life is more stress-free this way 🤣

For the drop test... either "I fail to reject that my scope holds zero," or "my scope fails to hold zero."

I think the variability of the drop tests is the beauty of them. Any attempt to strictly control every detail of the test in itself compromises it. There are too many variables to try to control.

If scope X fails the drop test because it landed just perfectly with 12 degrees of pitch and 7 degrees of yaw, do we care?

I don't. It failed. Random abuse in field conditions has more value to me than a controlled test in a lab. It doesn't matter what lab tests I've run; the first time my tools go into an oil well, I'm nervous. And I stay nervous until there are many months of successful runs. You just can't replicate field abuse in a lab setting. And even in industry it's really hard to find the time/money to perform statistically significant testing. This is going to be a shocker, I'm sure... but most companies do not 😅

The real questions start to come in on failed tests... was it an individual example? QC? Bad components? Bad design? Shitty material selection?

There's not really a good way for us to determine that stuff as consumers, other than compiling our anecdotes. But the drop test as conducted does seem to indicate that most brands have systemic issues with one or more of those.
 
I had a scope fail last fall. I sent said scope in for warranty and found out one of the internal lenses had come loose. This wasn't a hunt of a lifetime, but two missed shots, no meat, and I left the backcountry bewildered. I found the drop test evals the most useful tool available when researching a new scope. My criteria for a new scope now include scopes that hold zero, scopes known to take abuse and still function as intended, etc.
I expected scopes to meet these criteria before experiencing a scope failure. I was naive. The scope I was looking at failed the drop test, badly, per my recollection. Thanks to those who put in the work and dedication to compile this data. You may have saved me a future failed hunt due to optics failure! Much appreciated!!
 