Questioning the "Gold Standard Drop Test" and the conclusions of "This scope brand does/doesn't hold zero"

I don't. It failed. Random abuse in field conditions has more value to me than a controlled test in a lab. It doesn't matter what lab tests I've run; the first time my tools go into an oil well, I'm nervous. And I stay nervous until there's many months of successful runs. You just can't replicate field abuse in a lab setting.
Agree, which is why I use my gear 100+ days a year in the field hunting, hundreds of miles on ATVs, SxSs, etc., and take these "controlled" tests with a grain of salt at best.

For those who take them as the gospel (which is perfectly fine by me too), why don't you throw around your rangefinder and see if it holds up? How about your binos and/or spotter?
 
I haven’t read through this entire thread. You also have variability in the chassis/stock, scope base, and scope rings; in how the scope was mounted and torqued; in how good the shooter actually is at shooting tight groups; in the repeatability of the drops themselves; and in the shooting conditions. Then there’s sample size: you need at least 20 rounds as a baseline or it really doesn’t mean jack (see the quick simulation below). If you are talking about an apples-to-apples test, then all of the above needs to be consistent.
I also like the idea of the drop test but I put very little stock in it personally.
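On the sample-size point, here is a minimal simulation sketch (the 0.5 MOA per-shot dispersion is a hypothetical value, not measured data) of why a 20-round baseline matters: extreme spread both grows with shot count and varies wildly from group to group at low round counts.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.5  # hypothetical per-shot dispersion (MOA), bivariate normal

for n in (3, 5, 10, 20):
    # 10,000 simulated n-shot groups
    shots = rng.normal(0.0, sigma, size=(10_000, n, 2))
    # extreme spread = largest pairwise distance within each group
    pairwise = np.linalg.norm(shots[:, :, None, :] - shots[:, None, :, :], axis=-1)
    es = pairwise.max(axis=(1, 2))
    print(f"n={n:2d} shots: mean extreme spread {es.mean():.2f} MOA, "
          f"group-to-group std {es.std():.2f} MOA")
```

The exact numbers don’t matter; the point is that a 3- or 5-shot group is a noisy estimate of where your zero actually is, which is why a before/after comparison built on small groups can’t tell you much.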
Have you read any of the drop test threads? There is a control rifle and a control scope, there is a mounting process which is outlined, specific rings are used… groups are shot in specific numbers… Goodness, I wish more people would deep-dive into the process before they show up and accuse it of being something it is not.
 
I thought the purpose all along was for people to test their own setups. That means a simple but repeatable course of action.

If you're relying on someone else to put in the work for you, you've already missed the point.

 
This might be the best perspective on here. If I hit a Ford F150 with a pillow and it explodes in Looney Tunes fashion…we’re not going to spend time and money testing it further. We’re just going to buy a different truck. And we’ll be glad we did the unscientific pillow test.

He’s dropping it from 18”. Then 36”. Think about that.
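For scale, here’s a quick back-of-the-envelope sketch of what those heights mean at impact (free-fall only, ignoring air resistance):

```python
import math

g = 9.81  # gravitational acceleration, m/s^2

for inches in (18, 36):
    h = inches * 0.0254       # drop height converted to meters
    v = math.sqrt(2 * g * h)  # free-fall impact velocity
    print(f'{inches}" drop: ~{v:.1f} m/s ({v / 0.3048:.0f} ft/s) at impact')
```

Those are modest impacts, roughly 10 and 14 ft/s, the kind a rifle sees when it slips off a pack or tips over against a rock.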

I think Form is very aware of the limitations of his tests. He states it frequently. If a scope passes, he reminds everyone “Hey keep in mind, it’s just one. Here’s how I’m going to test it further. Also we need more examples.” He also tells us when variables exist. He tells us when the ground changes. “This one was on grass. This one was on frozen snow.” Him telling us that part demonstrates that he understands it’s a variable. Scopes are so crummy, it doesn’t take a very scientific test to prove it. While I agree with the OP that a more rigorous third party like UL would be awesome, the costs are extraordinary. What the OP is really getting at is, how do we get that scientific certification/lab going? The answer is exactly what Form/Avery/Rokslide is doing. The industry will eventually respond.



All of this. It’s meaningful that these scopes can’t pass these modest, limited, and plausible field tests. The failures are more meaningful than the passes.

If we were assessing value or differentiating between scopes that pass, more rigor would be needed.
 
The current evals:
*provide anyone the tools and “a” method to say whether their specific setup is likely to lose zero at some point in heavier use, or not. That includes, but is not limited to, the scope.
*provide some methodology to isolate the component that failed, in order to improve the mounting process and help troubleshoot any future issues.
*are easily done by anyone, anywhere, with nothing more than a range, some ammo, and stuff most of us already have on hand. This allows any of us to assume a level of responsibility for our own equipment and make decisions based on our own objective observations, as opposed to beliefs or what we’re told. I.e., this exists and is available to all of us; no alternative does.
*are completely transparent, with no proprietary info, methods, etc.
*include some “extra” measures to help reduce variables, e.g. a specific gun literally permanently attached to the stock, a control scope, etc., all of which is outlined in the evals even though people usually don’t seem to read that part.

It strikes me that however anyone chooses to use this info or not, it offers a way to check your own equipment and work, and to evaluate that against a set of objective criteria. Whether someone chooses to purchase based on someone else’s results is almost irrelevant. Looking at “published” results, evaluating them in the context of your experience, others’ documented experience, multiple people’s checks, etc., and then buying based on that may or may not result in better odds of holding zero. But it still puts you in the driver’s seat as far as evaluating the risks and rewards, looking at whatever degree of evidence is there, and then, critically, taking your own responsibility for it.

If you have a better idea on how to conduct the eval, anyone is free to adapt it for their own needs and discuss the positive and negative merits of doing so. But it all is predicated on being a PARTICIPANT in your own ongoing eval, not just an observer of the evals. I guess I’m not sure how any of this is anything but good for a buyer and user. But when finding ways to improve the evals, I think it’s important that all of those ^^ elements remain central.
 
I thought the purpose all along was for people to test their own setups. That means a simple but repeatable course of action.

If you're relying on someone else to put in the work for you, you've already missed the point.

Like many of us have said, it is simply a starting point, a nudge in the right direction… like brand X tends to have stuff that some of us have witnessed as more robust than brands Y, Z, and W. It doesn’t mean you have a bombproof setup, and it doesn’t mean your scope won’t have a failure; it is simply giving a bit of a baseline. It would be akin to going to a gunsmith, having him boresight your scope, and going hunting without ever confirming your setup or zeroing it yourself. It is not gospel. I think many of us appreciate it a ton because we don’t have the equipment, the time, the energy, or the ammo to get to the baseline that has been laid out by several members of this forum. Take it for what it’s worth. It’s internet talk, so if you want to ignore it and keep doing whatever you have been doing, carry on. If you want to say, “Hey, this makes sense, I’m going to explore this more for myself,” then carry on.
 
I’m glad the guest of honor found his way to the party. To answer your question: no, I’m not gonna bite. I’ve read a handful of your reviews and found aspects of your protocol that left me with skepticism about confounding variables, but I also gathered a lot of great information. I really appreciate what you’re doing. I just thought we could discuss it, but tbh it seems like said discussion is a bit unwelcome.
You've only read a handful of reviews, as in only a couple of evaluations? Serious question: have you actually read the first few posts of the pinned thread in the Long Range Hunting section that detail the parameters of the scope evaluation?

I read the whole thread before replying and I can find no indication that you have actually read the published scope field evaluation protocol. To be perfectly fair, many of the people replying in this thread haven't read and understood it either.

Link (it's in Long Range Hunting, not the Optics subforum): https://rokslide.com/forums/threads/scope-field-eval-explanation-and-standards.246775/

Did you actually read the "journal article" that you claim to be critiquing?
 
I thought the purpose all along was for people to test their own setups. That means a simple but repeatable course of action.

If you're relying on someone else to put in the work for you, you've already missed the point.

Isn’t the point of this forum to learn some things from others without maybe having to repeat everything? At least it is for me.
 
Isn’t the point of this forum to learn some things from others without maybe having to repeat everything? At least it is for me.
No, apparently the point is to get a little snippet of info, like the headline of a thread, and then come in guns blazing, haha. The “why in the world would you shoot something bigger than a wabbit with a .223?!” mentality.
 
Isn’t the point of this forum to learn some things from others without maybe having to repeat everything? At least it is for me.

There is an inherent problem with that in this scenario, though. What if you have an out-of-spec ring? Or any number of other things that can influence whether your setup will hold zero, like shifting in the stock, using Loctite that doesn't cure, etc.? There is only one way to verify your system. Form gave everyone a blueprint for a simple way to test it. People are free to take that as far as they want to. The evaluations are guidance: they can help put you on the path, but they can't deliver you to the promised land. There are far too many variables.
 
I’m glad the guest of honor found his way to the party. To answer your question: no, I’m not gonna bite.
Wait, so the person who created and performs the evaluations you started this thread to critique has engaged in the conversation. He wanted to start by getting a baseline of how well you actually understand the process you're criticizing (I think this is totally fair; several of your statements revealed what appears to be a lack of understanding), and you dismiss him? That does not indicate good faith to me.
I’ve read a handful of your reviews and found aspects of your protocol that left me with skepticism about confounding variables, but I also gathered a lot of great information. I really appreciate what you’re doing. I just thought we could discuss it, but tbh it seems like said discussion is a bit unwelcome.
What is unwelcome is a straw man presentation followed by an unwillingness to engage in the first steps of correcting your mischaracterization of how the eval is conducted (whether intentional and bad faith, or honest lack of understanding)
Sorry, I didn’t intend it to be an attack on your work and valued contributions, and most especially not a personal one. You’re going to great efforts, and I just think there’s some valuable data missing for the consumer, in general.
A sarcastic "guest of honor" comment doesn't really support that statement.
You may not be interested at all in other methods to collect that data. You may disagree that anything is missing, and you (as many have on your behalf) may justify your methods so that I better understand WHY you employ them, and you may not be interested in an alternative. I’m all ears.
If you're all ears, why aren't you interested in getting to a baseline to start the conversation where you actually understand the process?
 
Wait, so the person who created and performs the evaluations you started this thread to critique has engaged in the conversation. He wanted to start by getting a baseline of how well you actually understand the process you're criticizing (I think this is totally fair; several of your statements revealed what appears to be a lack of understanding), and you dismiss him? That does not indicate good faith to me.

What is unwelcome is a straw man presentation followed by an unwillingness to engage in the first steps of correcting your mischaracterization of how the eval is conducted (whether intentional and bad faith, or honest lack of understanding)

A sarcastic "guest of honor" comment doesn't really support that statement.

If you're all ears, why aren't you interested in getting to a baseline to start the conversation where you actually understand the process?
Did you read through the thread, and my responses to many defending the protocol?

I don’t answer to Form; he is not an authority. If someone questions my methods, I don’t ask them to recite them back. I engage with them, explain, and provide reasoning, for understanding.

This shit isn’t rocket surgery. This notion that I lack understanding, I’m just not sure where it’s coming from. The protocol may serve a different purpose, and thus a different justification, which I’ve been engaging with throughout this thread to better understand, as many have respectfully elaborated on.

I’m not going to assume anything, but your response reads like you skipped pages two through six of the dialogue and jumped your comment in without following the thread.

“All ears” has been that dialogue throughout the thread, aside from the personal attacks and the dismissal of me bringing this thread up as a “straw man.”

‘How dare I question thou’
 
There is an inherent problem with that in this scenario, though. What if you have an out-of-spec ring? Or any number of other things that can influence whether your setup will hold zero, like shifting in the stock, using Loctite that doesn't cure, etc.? There is only one way to verify your system. Form gave everyone a blueprint for a simple way to test it. People are free to take that as far as they want to. The evaluations are guidance: they can help put you on the path, but they can't deliver you to the promised land. There are far too many variables.
It seems like most choose their optics based on the drop test. I know I do.

I’ve never taken it to mean I need to drop-test every optic I have. I use Form’s way to mount optics and use drop-test-approved scopes.
 
Did you read through the thread, and my responses to many defending his protocol?
I did (though I skimmed through the parts where people were taking unfair shots at you because of your profession)

I also asked multiple questions and addressed several of your criticisms, and you didn't engage with most of them.

Did you read the thread where the eval is fully explained?

 
A more repeatable protocol would be better
Higher sample size would be better

But to me the problem isn't understanding tests or statistics; it's funding.
Who is going to fund potentially destructive testing of N>35 samples each of dozens of riflescope SKUs, including the human time to test, analyze, and report that data?
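To put rough numbers on why N matters, here is a minimal sketch (the zero-failure scenario is hypothetical) using a 95% Wilson score interval: even a spotless record leaves a wide upper bound on the true failure rate until the sample size reaches the dozens.

```python
import math

def wilson_interval(failures: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for an observed failure rate."""
    p = failures / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return max(0.0, center - half), min(1.0, center + half)

# Hypothetical: zero failures observed across N samples of one scope model
for n in (1, 3, 10, 35):
    lo, hi = wilson_interval(0, n)
    print(f"N={n:2d}, 0 failures: true failure rate could still be up to {hi:.0%}")
```

With zero failures in 35 samples, the 95% upper bound on the true failure rate is still about 10%, which is roughly the resolution an N>35 program buys you per SKU.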

Probably only attainable if an entity like PewScience, ProjectFarm, C_Does, etc., took this on as a business and charged consumers for the results. I'd be grateful if someone did, and I might throw something like $100 at it in order to drive improvements in the riflescope industry. But how many others would?


Happy Easter
 
FWIW, I did not take from the OP and his subsequent posts that he hadn't read and didn't understand the evals. There is zero question that the protocol used is insufficient to pass a “scientific” threshold as you'd find in a standardized test such as those used by ASTM, UIAA, etc.; it simply isn't consistent enough. In that regard, the OP is 100% correct. I just don't think that's really the goal of the evals, or at least not the entire goal.

Some relevant questions, though, are:
1) What is the actual goal of the evals? Is it to create an “industry standardized test”, or something different?
2) If a truly standardized, engineer/scientist-approved test were designed, what would we as consumers LOSE? I.e., is there value in the “quick and dirty” method that would not exist in the “sciency” method with regard to shooter education, troubleshooting, etc.?
3) If a perfect test were designed, who would do it, who would pay for it, and what would ensure that manufacturers do more than ignore it? Look at the Pew Science tests on suppressors and the controversy they create because it's a pay-to-play model where the “consolidated ranking” is based on proprietary weighting, even if the raw data is public.

So IMO the “criticism” is entirely valid IF you are approaching it like an “official” standard. The problem with that is that trying to make the evals “fully scientific” and 100% controlled/quantifiable would necessarily AUGMENT the evals, not REPLACE them. So if you are trying to offer actual solutions, as opposed to just trying to poke holes, you should have some actual specific solutions, be specific about the goal, demonstrate an understanding of the parts of the current protocol that attempt to address each variable, and address both the positives and negatives of making that change.
 
The real question is why anyone would type a thousand-word essay on dropping their rifle, then type another ten thousand words after doing it again with a different scope.
 
I think that the scientific methods of these tests could be improved, and that's what I want to talk about.

Almost all durability testing we have comes from end users or, in this case, Form posting on the net. It’s doable, and it could be improved into a protocol that produces a data set.

There are issues with the evals, and I reached out privately to Rokslide for improvements. The response was essentially, "There's only one test, the Rokslide way".

That response is totally cool with me, and out of courtesy to the owners here I try not to point out the flaws or suggest improvements.

FYI - I didn't read every single comment in this thread in detail, but my background is in research, testing, and product dev.

You mentioned 541. Are you in the PNW?
 
@Formidilosus earlier in the thread someone mentioned you have access to machining at SRS and UM, which makes me wonder: how much would a device that drops scopes more uniformly onto uniform surfaces cost to make? And then the cost of a collimator? Would those items be cheaper than the amount you guys are spending on ammo? This isn’t a knock on the current field evals.
 