Questioning the "Gold Standard Drop Test" and the conclusions of "This scope brand does/doesn't hold zero"

14 pages of nothingness. The OP is a great example of overthinking. Even if all the data were published, it would only affect a small percentage of buyers. The people in the industry who actually shoot know which scopes work and which scopes do not. The rest are clueless, and all the info in the world will not matter to them.
Welcome to my brain. It’s done me decent though, so far. Seems this rabbit hole we're in hosts many like me. Haven't you been here just about the whole time?

True, but it affects us; I appreciate the data that is here.

A few years ago, I was dialing in a rifle at an open-land range when a guy pulled up in his truck, jumped out with his .270, and asked if he could shoot at my target. I welcomed him and said go for it. He took two shots at 100 yds from standing, exclaimed "that'll do" (I was spotting and have no idea where his shots landed), then explained that he was "just making sure his rifle was still on" for his elk hunt in the coming week, boasted how great his rifle was, jumped back in his truck, and took off. This sort of pre-hunt analysis/gear checking worked for him, and would for many others out there. I don't think that's for us.

Like any tests and data, it's not about the test as much as how one interprets and uses the data, and most importantly, understands its limitations.
And no, AFAICT no manufacturer has "gamed" this test yet. They would have to acknowledge the existence of this test and design around it in a way that DOESN'T actually result in a more reliable product in order to do so, and that definitely hasn't happened.
You include the caveat "DOESN'T result in a more reliable product" (which I included in my statement as well), but didn't Maven do exactly that with the RS 1.2 prototype/production run? Yes, it resulted in a more reliable optic. Why do we assume that if there were a different test, a manufacturer would design an optic to pass said test (game it) in a way that would not affect/improve end-user reliability? It's a logical fallacy to just assume/suppose it wouldn't, as long as the test was a reasonable one simulating field use. This has gone back and forth, or been a cyclical argument, throughout the thread; I just have trouble with arguments that assume that if the test were different, the results would be manipulated. The rest of your post is great.


As Aussie and many others have suggested, an industry standard is necessary, one that exposes weaknesses and ultimately pushes manufacturers to correct them.

One of the biggest benefits, IMO, of Form's work here, aside from the information he provides us, is the info/standard that is sent back to the manufacturers. Like another poster suggested, companies have enlisted his input (AFAIK, hence the Maven RS 1.2), and hopefully we develop an improvement in field durability via an industry standard. Are Leupold/Vortex paying attention? IDK. But we know some are. I appreciate that.
 
Right, that's the point: if it resulted in a more reliable optic, it isn't what I'm worried about. That's why I believe we're talking about two different things, or at least talking past each other. A manufacturer making a product more reliable based on a test is entirely possible. The consumer benefits, the manufacturer can market based on that, everyone wins. That's the goal. Maven claimed they didn't do that, so I don't think they are an example unless they are outright lying. But regardless, that is NOT what I mean by gaming a test.

I've already provided multiple real examples of industry tests that are routinely gamed, i.e., the product was designed at least in part around the test, not around reality, and as a result the consumer does not see the full expected benefit. This could be accidental or intentional. Can you name one standardized test that DOESN'T sometimes get gamed by manufacturers? I can't. If you are looking for true improvement for the user, not just a rubber stamp for the marketing dept or a CYA policy for the industry to avoid lawsuits, do you think it's better to assume a test specific enough to qualify as a "controlled experiment" WON'T be gamed at least to a degree?
 
I believe we're talking about two different things, or at least talking past each other. A manufacturer making a product more reliable based on a test is entirely possible. The consumer benefits, the manufacturer can market based on that, everyone wins. That's the goal. But that is NOT what I mean by gaming a test. Gaming a test is not guaranteed. It might not happen. It's just that in every single test I have personally been involved with or had any visibility on, it DOES happen. I already provided multiple examples of that; those are not fictitious examples, those are 100% real. So while I don't think that's a reason not to do a test, I do think it is naive to think that it wouldn't happen, and I point that out only to say that an "industry test" is not necessarily a panacea or a complete solution. I am simply saying that if today's eval has shortcomings, fine; just be aware that whatever you replace it with will also have a different set of shortcomings, regardless of how "scientific" it is. That is all I'm saying: that you trade one set of issues for a different set, and to go into that eyes wide open.
Indeed, I think we are (saying the same thing and talking past each other) and it seems we’re 95% in agreement, just presenting little nuances of the argument about designing around a test. I appreciated your examples, I think they’re spot on, but to present the flipside of your argument, while companies have gamed the EPA fuel estimation tests and manipulated their studies to boost their numbers I don’t think the argument (which I don’t think you’re making necessarily) that the EPA testing requirements haven’t improved fuel efficiency overall holds any water. Ultimately, EPA designed an industry standard test with reportable findings, and manufacturers have engineered to that test. Of course, they want to boast unrealistic numbers, but that doesn’t mean that the vehicles they’ve created aren’t more fuel efficient.
 
Agreed, we're mostly in agreement. Agreed, the EPA test overall has resulted in better fuel economy over time. But we aren't arguing test vs. no test; we're arguing a "dirty" but more realistic (by virtue of incorporating all the elements present in reality) "test" versus a "scientific" but sterile test that doesn't necessarily reflect reality. All I'm saying is that if you are going to diss fuelly.com or similar crowd-sourced informal info sources because they aren't controlled, and argue for the EPA approach as "better"… just be aware of what you lose by going that route and be clear-eyed about the value of what you choose to give up.

Also, in this case the replacement "test" doesn't exist. So to me it's equally valid and important to point out potential problems with it as it is to argue for it, until the specifics are defined.
 
I've always been amused by variations of the quote "All science is either physics or stamp collecting."

Similar to what others have said in this thread, "scientific" can be interpreted differently. I work in geological engineering and am familiar with using limited and imperfect data, which is sometimes all you have when you still need to help a client make a decision. The overall analysis is considered "scientific" for its intended purpose, and caveats need to be made. As part of a big-picture geological interpretation, I've seen well drillers' logs used along with borehole logs prepared using standard methods in the engineering consulting industry. But even those industry-standard methods are pretty crude in comparison to what a physicist or electrical engineer would need for their work.

Outside my field, but from what I've heard from others, biologists will use information from Christmas Bird Counts or hunter surveys. It's not perfect, but it tells you something, especially over time.

So, I like to think of the Rokslide field evaluations as scientific in the way geology or field biology is scientific, maybe not in the way physics is. It's still information that you can use given limited time and money, and it's worthy of respect for what it is.
 
Are you saying that doctors aren’t scientists?

If so, who has produced all the clinical data and randomized controlled trials, and compiled the evidence that shapes current medical practice, i.e., evidence-based medicine? If not science, by what process?
If you don’t think doctors are scientists, then you don’t at all understand medicine.
Again, while no one here will likely accept this, it isn't about my ego; rather, it's to frame the way I evaluate and critique tests and protocols, and to introduce this discussion in an (attempted) collegial and respectful format, such as a journal club. You guys are interpreting it as ego-boosting.
I haven't yet made it to the end, but I think on this post I may offer a perspective. An analogy…

As one who regularly attended and presented at a physician journal club in my area: what if we learned of your "tearing apart, conclusions, etc." of said published trials (i.e., procedures, medications, etc.)? We liked the initiative, but really think you're coming up short with how many trials you're reviewing, and you lack consistency of criteria in several areas, and…

Do you see the problem? Our efforts are misplaced, right? If we too take issue with X company's trial, then our beef and feedback AND lack of compliance or use may reflect that disdain. To steer our desire for fixing something toward your journal club, after you took initiative for what a company hasn't, is passive-aggressive and ill-placed. Both of our efforts must be aggressively aimed at the entity behind said trial(s).

Your post, in my opinion, would make way better sense and demonstrate a very savvy level of influence if sent to the handful of scope makers where you have product interests but they simply have no objective data like you've read on Rokslide: "What can you send me, that isn't marketing, on these four things that are of interest for my use as a hunter or competitive shooter?"

If you've read enough here, you would know that Form was motivated, long ago, by the lack of meaningful design and testing (read: not a collimator, since most do that) that gives predictable performance on zero retention over time, for example.

I, for one, am more than thankful that Form has had this mission for a long time, and to Ryan for helping make this happen, because so many folks here did not believe him. This is a better time to have a beef as consumers with the manufacturers in this industry, and to stop shooting the messenger! If you, or anyone else who's read enough, think this burden of better drop tests is Form's or Ryan's burden, then open your wallet. OP, I know you've got big bucks…😊
 
I’ll just say… keep reading. I think we arrive here by page 14. 😆
 
That is fine. Stay home when you have a problem. Do life naturally; don't get weak-kneed and ask for help or comfort from the medical community, not for you and not for those you love.

If you do end up lacking the courage to stand by your principles when the metal meets the meat, I hope you or yours have an easily curable issue and an outstanding care team. Eventually the incurable comes for us all, though.

P.S. His bringing up his profession rubbed me the wrong way as well.

Agree! People always show up for the medicine when whatever distrust-of-the-month they're following doesn't work out. In the context of the OP's commentary, I thought mentioning his profession and journal club made sense, though. It establishes understanding of the scientific method. I think the issue is that the scientific method of controlling as many variables as possible doesn't necessarily work or make sense in the context of having a test easily repeatable by others on their rifle/optic as a whole. The test as used for "scope drop tests" isolates the rifle/mounting by using proven platforms and then double-checking mounting, etc., if a scope fails initially. That test is very useful for scope shopping; it certainly puts the odds in your favor. The test to be used by the end user on their complete system tests not only that particular scope, but the entire rifle.


 
This thread has been....something.

Doctors are people, not gods. This is one of Form's best contributions to this space... straight data, zero appeal to authority, don't trust me, test it yourself to verify.

Personally, I find the test perfect. Variables are removed and/or noted as insignificant to the results. This has been beaten to death by many more eloquent than I.

What I see as needed to improve the test (evaluation) has nothing to do with the testing itself. The "evaluations" need better marketing, IMHO, not to improve the data but for broader acceptance among shooters and to put more pressure on manufacturers.

Scopes being a "fail" or "pass" with the drop test is very misleading to people not fully educated on the process; it reads as very inaccurate while being entirely accurate, hence the vitriol against "those retards over there on Rock Slide throwing their rifles on the ground." All scopes work. I will say that again: all scopes work. This includes something as rudimentary as a paper towel tube taped to a rifle. Can you look through it and hit the side of a barn at 100 yards? The answer is yes. If you drop-test the rifle and the cardboard tube flattens, can you still look through it and hit the barn? Yes. Defining levels of acceptance for a broad audience of scope owners/makers is important.

I help shooters almost every time I am at the range, if they seem open to learning or curious about my equipment. It is very normal to shoot 8-10 inch "groups" with an optic at 100 yards, or with iron sights at 50 yards. Most people are inherently not good shooters. Period. Shooting is a perishable skill that takes knowledge and practice. Someone earlier took offense to this, but they admitted to not being a good shooter, then made themselves a better shooter with Form's help. Awesome. Most hunters are successful. Every car is a race car. Hopefully you get where I am going. People read "my scope failed" and have a hard time equating that to their successes without a measured comparison.

In my view, Form's level of accuracy is garbage... yep, I said it... will he be pissed? Will he flame me? Will his followers attack me? Maybe, but I highly doubt it. Reason being, he quantified his expectations of accuracy for his use case: field level, good enough for hunting (1.5 MOA-ish), hunting-weight rifles. Now let me finish the first sentence: ...is garbage for accuracy-based competitions such as benchrest, F-Class, etc. I think most people who naysay the evaluations come from this perspective (in reverse): Form's accuracy is too demanding and unrealistic because I still got my deer; Form's drops are too extreme because my rifle doesn't lose zero if I don't abuse it. These people would simply have gear classified differently than Form's level of backcountry bulletproof.

  • Standardizing the drop surface to something specified and available, such as concrete covered in "x" brand of mat from REI, would not change the correlation between passing and the validity of the test by one individual, but it would put data from different people testing their own gear into a more direct comparison, which would drastically increase the "sample size" that so many who do not understand manufacturing statistics are worried about.
May I suggest to the group... concrete. Everyone has access to it. It is "fairly" similar in properties around the world (let's not debate this). Cover it in 1.5" closed-cell foam or two Nemo Switchback sleeping pads.

  • Ranking the results better would also get more brand loyalists and brands on board with accepting the results without insulting them. This is a huge problem in high-end watches. Customers want the Chronometre Trials brought back; several attempts have been made, but it is super expensive and the brands pull out when their product takes a hit compared to the marketing. "Winners and losers" is not good for brand acceptance or consumer adoption. As the great philosopher once said, "If you ain't first, you're last." That leaves a whole lot of current owners and manufacturers who will take their ball and go home. Marketing the results into levels of durability (without negative terms) allows for a lot of "success." This doesn't water down the results or change the numbers. It allows the consumer to pick their level of use case and the manufacturer to save face with their existing brand position, while offering the opportunity to move up a level with certain product ranges. This would help drastically with consumer adoption of the testing protocol.



May I suggest to the group something like...
  1. Unreliable
    • Doesn't dial accurately (more than 5% error)
    • Will not maintain zero under recoil (more than 2 MOA deviation)
    • Fails tipping over on bipod (more than 2 MOA deviation)
  2. Range Toy
    • Dials accurately (less than 5% error, if dialable)
    • Maintains zero under recoil (less than 2 MOA deviation)
    • Passes tipping over on bipod (less than 2 MOA deviation)
  3. Deer Blind Appropriate
    • Dials accurately (less than 5% error)
    • Passes tipping over on bipod (less than 2 MOA deviation)
    • Zero shift
  4. Weekend Warrior
    • Dials accurately (less than 2% error)
    • Passes tipping over on bipod (less than 1 MOA deviation)
    • Passes 18" drop test (less than 1 MOA deviation)
  5. Backcountry Approved
    • Dials accurately (less than 2% error; precision rated if less than 1%)
    • Passes tipping over on bipod (less than 1 MOA deviation)
    • Passes 18" drop test (less than 1 MOA deviation)
    • Passes 36" drop test (less than 1 MOA deviation)
Just some quick examples that could be fleshed out for broader adoption (see the rough sketch below), so Form isn't expected to do large-sample-size testing of every model on the planet.
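To make the tiering idea concrete, here is a rough sketch of how test results could be mapped to tiers automatically. This is hypothetical only, not any official Rokslide/Form protocol; the tier names and thresholds simply mirror the draft list above, and since the "Range Toy" and "Deer Blind Appropriate" criteria largely overlap as written, the sketch lumps them into one middle tier.

```python
# Hypothetical sketch only: tier names and thresholds mirror the draft list above,
# not any official Rokslide/Form protocol.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ScopeResult:
    dial_error_pct: float            # tracking/dialing error, in percent
    recoil_shift_moa: float          # zero shift under recoil, in MOA
    tipover_shift_moa: float         # zero shift after tipping over on the bipod, in MOA
    drop18_shift_moa: Optional[float] = None   # None = 18" drop test not performed
    drop36_shift_moa: Optional[float] = None   # None = 36" drop test not performed

def rate_scope(r: ScopeResult) -> str:
    """Return the highest tier a scope qualifies for under the draft thresholds."""
    # Baseline: dials within 5% and holds zero within 2 MOA under recoil and tip-overs.
    if r.dial_error_pct > 5 or r.recoil_shift_moa > 2 or r.tipover_shift_moa > 2:
        return "Unreliable"
    # "Range Toy" and "Deer Blind Appropriate" overlap as drafted, so they are lumped here.
    tier = "Range Toy / Deer Blind Appropriate"
    # Upper tiers: tighter tracking and zero retention, plus the drop tests.
    if (r.dial_error_pct <= 2 and r.tipover_shift_moa <= 1
            and r.drop18_shift_moa is not None and r.drop18_shift_moa <= 1):
        tier = "Weekend Warrior"
        if r.drop36_shift_moa is not None and r.drop36_shift_moa <= 1:
            tier = "Backcountry Approved"
    return tier

# Example: 1.5% dial error, 0.5 MOA shifts, passes both drops -> "Backcountry Approved"
print(rate_scope(ScopeResult(1.5, 0.5, 0.5, drop18_shift_moa=0.5, drop36_shift_moa=0.75)))
```

Something along those lines, published alongside the raw numbers, would let results from different people be pooled without arguing over what a bare "pass" or "fail" means.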
 
I could get behind your example ratings.
I think something like that would help.
 
I'd amplify one aspect for doubtful hunters: an education, or a brick to the head…

The most important function of a scope is as an aiming device, and it must maintain zero.

Two buds I hunt with locally were tweaking their Leupold zeros before season (they mock my Tasco). I said, "Not zeroed again? It was the same thing the last two years. Your scopes lose zero in a gun cabinet. How do you know it will maintain zero next week?" Then I just pushed mine over onto the ground, no mat, landing on the top turret, then loaded and fired a bullseye. I stated the above and said, "Try it with yours now that you have it zeroed."

Neither did, but it hit them, because they zipped the chatter. One has an SWFA now…
 
 

You must be clairvoyant, or a remarkable person, to be able to draw that conclusion and judgment from a post.

I'd prefer you contribute rather than antagonize without any meaningful contribution.
Well, if they have people going around purposely pushing their loaded rifles onto the ground to prove a point, I think it's possible your buddies' rings could be loose, or there's some other issue with their rifles.
 
My rifle wasn't loaded. A tip-over from leaning happens all the time and should be benign. Appreciate the meaningful counter.

I think you made the point ^. Some scopes aren't holding zero, but the only real way to know is solid mounting procedures, and a shooter who can shoot. It seems the naysayers, and the sheep calling out the "sheep cult," fail to recognize that the written protocol details mounting on one consistent rifle for all tests, with documented precision, with three types of ammo, I believe. I also believe the shooter is beyond exceptional. So much focus and disdain is directed at the drops and the "why would you?"

The strength of this test is the protocol for one consistent platform and one shooter to isolate and demonstrate a weak link. Shifts have occurred from drops, and from riding on rough roads after passing drops.

So one of the buds was a heavy-equipment mechanic for much of his career. I never tell them what to do or pass judgment, but I have asked questions as a friend. Such as, "What are you torqued to on x, y, z?" "Shraggs, I turned wrenches for 20 years; I know what I'm doing and don't need any torque wrench." He shoots mostly high-recoil guns, refuses to shoot prone versus a bench, but turned to a sled instead of critically assessing his setup or scope; it just saves time and doesn't affect his ability or give a different POI… I'm confident most of his scopes are fine, but not his platform.

He had rotator cuff surgery, so he's watched the recoil lately. But I find it fascinating after being with him on three kills with his crossbow at 70-ish yards, two freehand. I did say once, "You shoot better with that than your guns." He kinda pondered and then said, "But how can I go to Kansas and kill deer without my 7 Mag…" He's a smart guy! So I pointed out that he's taken two deer without his new 350L at similar distances. I think he's processing.

Just wanted to keep this real life.

It's been mentioned in this thread to test your sh-t regardless of your scope brand, or at least know you're doing it right; pretty hard to imagine the "sheep" designation when that is the core point.
 