Questioning the "Gold-Standard Drop Test" and the conclusions of "this scope brand does/doesn't hold zero"

What @Q_Sertorius said is right on. Think of the evals as a first-year weed-out class, not a board exam. If you want to create a board exam, that's an entirely different affair, and we should not think of it as an "improvement" of the evals. The eval serves its purpose extremely well, and if boards are needed, they belong in addition to the weed-out class rather than instead of it.
 
Realizing it’s limited, that it’s neither sensitive nor specific. Are ‘bad’ scopes making it through the test as passes because they didn’t experience the same forces? I don’t know… but I see your point. It is effective at weeding out those that don’t hold zero. I didn’t make that point perfectly in the OP; you clarified the significance.
 
Wouldn’t it be weird if someone came on here and stated that he and his buddies sit around a fire critiquing medical research papers without stating they are all medical doctors? Do you require all medical professionals to state their COVID beliefs prior to talking to them? It’s sad that these days, when someone simply states they are a doctor, it triggers us enough to completely discount their opinion.
I think this is a good reason Form keeps his professional career and his identity a secret. I appreciate this about him. He doesn't start off saying "I'm this profession or that." He lets the information and facts qualify his findings, not his profession. I think doctors are so used to people just believing them for being doctors that they have the need to tell everyone.

That being said, I am not going to say what I do for a living.... :ROFLMAO:
 
Almost all durability testing we have is from end users or, in this case, Form posting on the net. It’s doable and could be improved into a protocol to produce a data set.
All the facts in the world will not make an impact on sales. Most "shooters"/weekend warriors will not do the research; they will buy what their friend has, what is on sale, or a product for the "great" warranty.
And 90 percent of people shooting are not good enough to tell whether their scope holds zero or not.
Facts don't matter in retail sales, and only a few would take the time to use the data if it were published, so who would pay for it?
In our capitalistic society cash is king, not facts. IMO
 
You read per my intention. Thank you.

The thing is, I really value this test; however, I don’t exactly know how to use the results, or how strongly I should weigh them in a purchase, with confidence or not (when many accept them as gospel). Most of the dialogue in this thread is truly helpful here and is bringing up counterpoints on validity and other perspectives to consider, which is exactly what was intended.
Not everything needs to be tested in a proverbial vacuum. Hunting and shooting certainly does not happen in a vacuum.

What you can see if you read through the evaluations is a trend. Some manufacturers across varying models tend to show consistent results of “failures”. Some tend to show consistent results of reliability.

The evaluator has mentioned multiple times that he wished that Leupold for example would consistently show reliable outcomes in the evaluations because they otherwise have very desirable features (I’m paraphrasing, not quoting).
 
Realizing it’s limited, that it’s neither sensitive nor specific. Are ‘bad’ scopes making it through the test as passes because they didn’t experience the same forces? I don’t know… but I see your point. It is effective at weeding out those that don’t hold zero. I didn’t make that point perfectly in the OP; you clarified the significance.
In the experience of many people (including myself) it has good sensitivity and specificity.

Let's be extra critical and say it is 50% sensitive for catching an X brand/model that loses zero, and 80% specific for clearing those that don't, with specificity jumping to 90% if two samples are tested.

How would you use such a test diagnostically? How do you fit it in to your mental framework?
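One way to make the diagnostic question concrete is to treat the drop test like a medical screening test and run the hypothetical numbers above (50% sensitivity, 80% specificity) through Bayes' rule. This is just a sketch: the function name and the 30% prior probability that a given model loses zero are my own illustrative assumptions, not anything claimed in this thread.

```python
def posterior_bad(prior, sensitivity, specificity, failed_test):
    """P(scope model is 'bad' | drop-test result), via Bayes' rule.

    prior       : assumed fraction of models that lose zero
    sensitivity : P(test fails | model is bad)
    specificity : P(test passes | model is good)
    """
    if failed_test:
        # Failure: true positive vs. false positive.
        num = sensitivity * prior
        den = num + (1 - specificity) * (1 - prior)
    else:
        # Pass: false negative vs. true negative.
        num = (1 - sensitivity) * prior
        den = num + specificity * (1 - prior)
    return num / den

# Hypothetical numbers from the post above, with an assumed 30% prior.
prior, sens, spec = 0.30, 0.50, 0.80
print(round(posterior_bad(prior, sens, spec, failed_test=True), 2))   # 0.52
print(round(posterior_bad(prior, sens, spec, failed_test=False), 2))  # 0.21
```

Under these made-up numbers, a failure moves the estimate from 30% up to about 52%, while a pass only moves it down to about 21%: even a mediocre test shifts belief more on a failure than on a pass, which matches how posters here say they use it.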

Don't forget, theory often disagrees with fact. The first thing for any tool is validation. As the only available test is the drop test, you have to design a validation trial to prove that whatever you come up with is a better protocol. Just because it is more complex and sterile does not mean it is better. My suspicion is that getting meaningful improvements in sensitivity and specificity is going to be much harder than the critics believe. Just like you don't need a CXR/lung POCUS/CT to diagnose a massive tension pneumo with enough confidence to intervene.
 
On the subject of recoil simulators: I seem to recall Leupold heavily advertising their scopes' ruggedness as it pertains to recoil. Has anybody ever had an issue with one strictly from recoil? I haven't. Yet they are consistently bad when subjected to different forces.

Sure, it's a component of overall reliability, but Vortex investing in a recoil simulator isn't earth-shattering news. Holding up to recoil is the absolute bare-minimum requirement of a centerfire rifle scope.
 
@QuikFire, it’s ironic that you started your initial post stating you didn’t want to cause an issue and then snap back at anyone who says anything you don’t like. Maybe that’s just a doctor thing.

I am in the scientific field; as an engineer, it is clearly understood that probability underpins any experimental data or study. Actually, the modern world is built on probability plus factors of safety. It is possible to produce results that are very repeatable, but you need to increase sample size and refine methods, as you said. By demanding this, though, you are also stating that hunting takes place in a vacuum. I understand how a surgical perspective on the world would lead someone to “want” this to be true. Unfortunately it’s not true.

If someone is willing to perform the repeatable drop test and publish clear results, then as an engineer I am going to use that information as the best available at the time of decision and say thank you.

If the results of my decision are poor, I iterate. Just like the rest of the modern world.


I’m not sure I understand what you mean by “snap back,” so I don’t see any irony. Aside from the personal attacks on me and my profession, I think we have a pretty healthy, friendly, educated, and intellectual dialogue going here, your input included.

I don’t think at all that hunting happens in a vacuum, but I do believe the testing of a component could be standardized, and, as was the thesis of my statement in the OP, some sampling error could be minimized in the methods of, perhaps, a subsequent durability/zero-hold test.

I appreciate Form's contribution, and I appreciate the many counterpoints to its validity and usefulness, but I still question many around the forum taking the results as gospel.
 
Also, to add: if I were Form, I'd have given up years ago on convincing the masses that their gear choices were shit when subjected to real adverse conditions. Props to that dude for continuing to toss expensive stuff around and then recording/sharing the results. That dude's ammo bill has to be absurd.

We shouldn't have to be doing these tests on our own equipment; the manufacturers should. Convincing the market that it's something we actually need (even though a lot of us don't know we do) is where the best work is being done right now. Pretty cool to see a few hunters manipulating the market for the better.

As far as actual scientific tests of better quality than what they are doing, it certainly COULD be done. It would require hundreds of scopes and expensive equipment (accelerometers, data processors, people to digest it all, etc.). Again, Form is just a dude throwing his rifles in the dirt or freezing them in the snow and seeing what happens. In a way, those kinds of tests are more valuable than the pure data-driven results of super-controlled tests. When I design something new for work, the first thing I usually do is bring it to one of the mechanics to look at. The first thing they often do is intentionally drop it on the concrete and then either make an impressed face when it doesn't break, or hand it back broken and walk off. Those guys have saved our company an untold amount of money with the simple ol' drop test.
 
Maybe he brought his profession up because he thinks it’s the paragon of impartial review… or because he thinks that most hunting and shooting is done by uneducated people he considers inferior.

You can minimize all sorts of stuff if you try hard enough.

Somebody with four years of undergrad, followed by several years of medical school, residency, and then practice surely can understand that the statistical significance of a failure on the drop test is significantly more impactful than a pass. Icing on top is that real-world experience corroborates the drop-test results among lots of people in lots of places with lots of rounds.
 
The testing of a component could be standardized, and, as was the thesis of my statement in the OP, some sampling error could be minimized in the methods of, perhaps, a subsequent durability/zero-hold test.

Again, just a dude throwing his rifle on the ground and recording/sharing what happens. There are some trends to see there, but you can easily do it to your own equipment, and you should. I don't think they've ever declared definitively that all scopes from brand X are great and all from brand Y are shit based on their very limited testing.

It's really important that it be taken at face value. There are a few essential, very simple tests that things undergo in their usage, intentional or not. The drop test (mechanical things), the pull test (wires), the button smasher (electronics), etc. You get the point.

I'd rather see them carry on with the service they are providing us now than try to improve the quality of the test. That takes money and time, and I personally ain't paying a nickel for the information I get from those tests. Improving the quality of results can be absurdly expensive, even for something as simple as whether or not a scope's guts rattle around when dropped.
 
Man overboard! I believe we are overthinking this. LOL.

Maybe he brought his profession up because he thinks it’s the paragon of impartial review… or because he thinks that most hunting and shooting is done by uneducated people he considers inferior.

You can minimize all sorts of stuff if you try hard enough.

Somebody with four years of undergrad, followed by several years of medical school, residency, and then practice surely can understand that the statistical significance of a failure on the drop test is significantly more impactful than a pass. Icing on top is that real-world experience corroborates the drop-test results among lots of people in lots of places with lots of rounds.
And that’s the rub: with all of my education and experience, I realize that there is no statistical significance in either.
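To put some numbers on why tiny sample sizes leave so much uncertainty on the "pass" side: if a scope survives n drops with zero failures, the exact 95% upper confidence bound on its true per-drop failure rate comes from solving (1 − p)^n = 0.05. This is a sketch under the strong (and debated in this very thread) assumption that drops are independent and identical; the function name is mine.

```python
def upper_bound_failure_rate(n_passes, conf=0.95):
    """95% upper bound on the true per-drop failure rate after
    observing n_passes drops with zero failures.
    Solves (1 - p) ** n_passes = 1 - conf for p."""
    return 1 - (1 - conf) ** (1 / n_passes)

for n in (1, 3, 10, 30):
    print(n, round(upper_bound_failure_rate(n), 3))
# 1 -> 0.95, 3 -> 0.632, 10 -> 0.259, 30 -> 0.095
```

So a single passing sample is consistent with anything up to a 95% failure rate, and even thirty clean drops only bound it below roughly 10% (the familiar "rule of three," 3/n). A single observed failure, by contrast, proves the failure mode exists; it just can't tell you how frequent it is.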
 