Questioning the "gold Standard Drop Test" and the conclusions of "This scope brand does/doesn't hold zero"

All it takes is -

1. Think you know how to shoot
2. Read about the Kool-Aid
3. Question the Kool-Aid
4. Try the Kool-Aid
5. Ahh, now I know how to shoot
6. Realize you've sucked the whole time and start from 0, but now with repeatable equipment and processes from the Kool-Aid.

You seem to be at step 3.
 
I do get what you're saying. But I'm trying to convey that you're asking someone who is spending countless hours (for free), "Hey, if you can spend some more time and money to appease me and others who question your methods...."

I own some scopes that have "passed" the tests, some that haven't been tested, some Arkens, whatever.

I never said they're the "be all, end all" of scope testing, but unfortunately, they're what we have and I appreciate the time and money that have gone into them. It's more than the rest of the industry is doing.
I see your point too, which, on reflection, was not exactly my intention... but I see it.

I agree, it's the best collective, standardized reference we have available, afaik. I appreciate the effort behind it; I'm using it myself right now in my search for a new optic - that's why I'm here. I'm just not sure how to interpret the results. But hey, this discussion is helping... aside from this new guilt and shame I carry for being a surgeon, tied in with responsibility for the COVID mRNA vaccine.
 
Did you scientifically prove that exact person was going to die, or did you have to go off of previous evidence of conditions that were similar but not "exactly" the same?
They were dead. Until I revived them with a thoracotomy, cross-clamped their aorta, pumped their flaccid, empty heart with my hand, shot intracardiac epi, resuscitated them with MTP, then took them to the OR to control their hemorrhage.
 
Don’t quite understand why you have to go after a guy’s profession. Your post is the equivalent of an animal rights activist saying all hunters are poachers because some hunters poach….
His profession was held out as qualification to impress us…or as a reason to trust his science-based conclusions.
Unless he is willing to distance himself from the people like Fauci who lied to us…then I’ll stand by my comments.
I didn’t bring up his profession. He did.
 

They were dead. Until I revived them with a thoracotomy, cross-clamped their aorta, pumped their flaccid, empty heart with my hand, shot intracardiac epi, resuscitated them with MTP, then took them to the OR to control their hemorrhage.
And? That makes you a shooting expert how? I’m out. You are full of yourself.
 
His profession was held out as qualification to impress us…or as a reason to trust his science-based conclusions.
Unless he is willing to distance himself from the people like Fauci who lied to us…then I’ll stand by my comments.
I didn’t bring up his profession. He did.
I've never met Fauci, I'm not an epidemiologist and I don't know a whole lot about vaccine science.

COVID happened while I was a surgical resident. All I know is, COVID was bad, okay. Very ******* bad. Every hospital's ICU was... idk, apocalyptic.
I've never seen so many otherwise healthy, relatively young people die of acute respiratory failure. I became a chest tube specialist for the medical ICU, placing chest tubes left and right for people popping their diseased, wet-tissue-paper lungs on a ventilator. I was exposed to that shit night and day. I got COVID more times than I can ******* count, and all the while, I don't even think about Fauci or whether he lied to us, because I don't care - it doesn't matter to me. That's not what I do.
 
@QuikFire
A single scope tested is an N of 1. This, then, is not a study but an anecdote. An N of one with a suspect, variable/inconsistent study method is, to that scope manufacturer, quite an injustice.

On your point about n, it’s worth considering that p(lemon) << p(average scope). Viewing this through a Bayesian lens makes an n of 1 much more informative. And if it truly is a lemon, it calls the manufacturer’s QC into question.
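To make that concrete, here's a minimal sketch of that Bayesian update. Every number in it is an assumption picked for illustration, not data from any test:

```python
# Bayes' rule applied to a single drop-test failure.
# All probabilities below are assumed purely for illustration.
p_lemon = 0.02        # prior: 2% of units are lemons (assumption)
p_fail_lemon = 0.95   # a lemon almost certainly loses zero (assumption)
p_fail_good = 0.05    # a typical unit rarely loses zero (assumption)

# Total probability of observing a failure on one random unit
p_fail = p_lemon * p_fail_lemon + (1 - p_lemon) * p_fail_good

# Posterior probability the failed unit was a lemon
p_lemon_given_fail = p_lemon * p_fail_lemon / p_fail
print(f"P(lemon | failed) = {p_lemon_given_fail:.2f}")  # ~0.28
```

Under those assumed numbers, even after a failure there's only about a 28% chance "it was just a lemon"; the more likely explanation is that a typical unit failed, which is exactly why one failure is informative about the line.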

Another idea: the goal is not to find mean time or force to failure or learn a distribution. It is an environmental stress test. A failure carries more weight than a pass with this sort of censored data.
 
I want to start off with: By calling into question the beloved drop test, I'm not (intentionally) trolling here. With that said, I do love the idea behind this test, an attempt toward an objective, scientific means of assessing the durability of a scope going through a field test to do its job of staying on target.
It does help us as consumers with information to guide our gear selection, but in thinking critically about it, there is room for error within it. This contemplation was triggered by a good friend's die-hard advocacy and justification of his purchase of scope brand X, because at the end of a long justification... "it held up to the drop test".

I think that the scientific methods of these tests could be improved, and that's what I want to talk about.

First, I am, by trade and training, somewhat of a scientist. I'm not the lab tech guy in the white lab coat and goggles, but I am a doctor, a surgeon, and I read and critique scientific papers to evaluate the published studies of our profession. We have a little monthly tradition called 'Journal Club' where we sit around, let the libations flow, and discuss a recent medical paper. We praise it for its strengths and contributions to medical/surgical care, then rip it apart for all of its weaknesses: flawed methods, unrepresentative study populations, and poor design. So, without further ado, crack a cold one and let's dice apart these methods and how they might be improved.

First, strengths; well... it accurately exposes the ones that don't hold zero. We have a 100% true positive here - a scope that is dropped and no longer holds zero has, individually, failed.
However...
The ones that do hold zero: is each then a quality scope? That is what we assume. But did it get dropped on the same impact point, with the same impact force, the same system weight to equilibrate momentum, etc., as the one that didn't hold? Dropping a rifle onto matted, tarped, variable surfaces leaves a lot of room for variability between drops. I.e., this test is not truly repeatable. Without repeatability and consistency in point of impact, force, and momentum, I don't think there's an argument that says you've effectively identified all scopes that do not hold zero up to X amount of force. There are perhaps scopes that don't hold zero that passed the test.
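For a rough sense of what a drop actually delivers, here's a back-of-the-envelope sketch. The rifle weight and drop heights are assumptions for illustration, not the evaluation's actual parameters:

```python
# Impact numbers for a dropped rifle/scope system.
# Mass and heights are assumed for illustration, not from any test.
m = 4.5   # rifle + scope mass in kg (~10 lb, assumed)
g = 9.81  # gravitational acceleration, m/s^2

for h in (0.5, 1.0, 1.5):          # drop heights in meters
    energy = m * g * h             # impact energy, E = m*g*h (joules)
    velocity = (2 * g * h) ** 0.5  # impact speed, v = sqrt(2*g*h)
    momentum = m * velocity        # impact momentum, kg*m/s
    print(f"h={h} m: E={energy:.0f} J, v={velocity:.1f} m/s, p={momentum:.1f} kg*m/s")
```

Energy scales linearly with both mass and height, so a heavier rig or a few extra inches of drop changes the delivered energy, and the compliance of the ground changes the peak force on top of that - which is the repeatability concern.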

I was reading another scope review, I think of a Maven RS, with a test I really appreciated; I can't remember where it was, but they dropped a stated-weight (28 oz, iirc) hammer onto the turret, front housing, rear objective, focus/parallax adjustment, etc., with a consistent pendulum drop, creating a rather repeatable force/momentum onto the rifle/scope system. I thought this was a more repeatable, consistently designed test than a gun dropped onto matted ground.

The results of "holding zero": most of these scopes that "pass" still show some variance off of true zero. So, rather than a binary yes/no, it seems more appropriate to report a value or degree of variance. Why not measure the degree off zero after X ft-lbs of impact to these specific points? We could also find what force is required to put a given scope off zero. I'm certain every scope has its breaking point.
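Reporting that degree of variance is cheap to do. A sketch of the conversion from a measured shift to an angular value (the shift and range numbers here are made up):

```python
# Convert a measured POI shift into an angular value so "held zero"
# can be reported as a number instead of pass/fail. Inputs are made up.
def shift_moa(shift_inches: float, range_yards: float) -> float:
    """Angular shift in MOA for a given POI shift at a given range."""
    inches_per_moa = range_yards * 1.047 / 100  # 1 MOA ≈ 1.047" at 100 yd
    return shift_inches / inches_per_moa

# e.g., a 0.8" shift measured at 100 yards:
print(f"{shift_moa(0.8, 100):.2f} MOA")  # ≈ 0.76 MOA
```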

The N. The "N" refers to the number of subjects, and N is the biggest factor to consider in study design. It amalgamates group data to represent an outcome from enough subjects to detect a difference. A single scope tested is an N of 1. This, then, is not a study but an anecdote. An N of one with a suspect, variable/inconsistent study method is, to that scope manufacturer, quite an injustice. Drawing the conclusion that X brand's Y line of scopes does or doesn't hold zero says a lot about X brand, and definitely influences a lot of consumers. Just looking at the various scope review threads here, they run into the several thousands of views, even for the more obscure ones. So, my point is that these statements of "not holding zero," while true for that individual scope, may not accurately reflect the 'average' quality of that optic line. Equally, given the aforementioned variability, a passed drop test may overstate that individual optic line's quality/durability and the 'average' quality of that brand. I have to assume that every scope brand produces a few lemons in its lineup. An N of 1 doesn't speak to every scope, or even the average. Get my drift? We need an N greater than 1.
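To put a number on what a pass at N = 1 does and doesn't tell you, here's the exact binomial bound behind the old "rule of three" (the 95% confidence level is just the usual convention):

```python
# With zero failures observed in n drop tests, the one-sided 95% upper
# confidence bound on the true failure rate is 1 - 0.05**(1/n).
for n in (1, 3, 10, 30):
    upper = 1 - 0.05 ** (1 / n)
    print(f"n={n:>2} passes: failure rate could still be up to {upper:.0%}")
```

So a single clean pass is still statistically consistent with a line that fails the vast majority of the time, while a single failure, as discussed above, is much harder to explain away.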

Do any archery hunters watch Lusk Archery Adventures' broadhead reviews on YouTube? Those are great methods. Virtually every test is standardized and repeatable, with minimal chance for error or variability. The data he collects is quantitative rather than binary. I know, broadheads are much easier and cheaper to test than riflescopes, but it's a good example of rather repeatable, scientific testing and data collection.

I can appreciate the anecdotes and the spirit/intent of these field tests as really solid information to help guide gear selection. As to the drop test, the results may be representative or they may not, and I wouldn't put 100% stock in them.

That's about all I've got for this month's Journal Club.
Real simple: you pay for better information, or demand the manufacturer pay for better information.

For the record, you are the only person I have seen/heard call it the "gold standard"; so, you have constructed a bit of a straw man that might limit your understanding.

The test has limitations, but it is the best data available, and I can implement it on my own gear; I have had repeatable results in that context.

Dropping a rifle is far more generalizable than a pendulum hammer striking the scope. The reason for this should be obvious; if not, dive into the Snell helmet testing and contemplate why the helmet is dropped. Then look at military and SAAMI rifle drop tests and contemplate why a drop was chosen.

A drop is the correct format for the test. You could certainly standardize components of the test, something closer to how Snell does it. But money becomes an issue, and to me, a test I can do on my own gear has more utility.
 
Honestly, anything coming from someone in the medical profession right now sticks in my craw. Scientific conclusions? Like that whole COVID origin and mRNA vaccine safety BS pushed by the whole medical community?
I’m sorry but anything from a doctor these days doesn’t impress me much.
No credibility.
That is fine. Stay home when you have a problem. Do life natural; don't get weak-kneed and ask for help or comfort from the medical community, not for you and not for those you love.

If you do end up lacking the courage to stand by your principles when the metal meets the meat, I hope you or yours have an easily curable issue and an outstanding care team. Eventually, though, the incurable comes for us all.

P.S. His bringing up his profession rubbed me wrong as well.
 
Passing someone else's drop test is not a reason to use something, but can help you pick a good starting point. Passing your own drop tests is reason to stick with something. Failing someone else's drop test, when their system has proven capable of holding zero with another scope, is reason to not use that scope.

When you take the time to understand manufacturing processes and overlay that with t-tests, it becomes pretty obvious that any scope losing zero in the drop tests is probably performing as designed. You can't shoot a full group after each shot, because these scopes with drifting zeros tend to reset themselves during recoil.

How many faulty orthopedic implants would you personally need to see to stop using that brand?
 
You don't understand what the 'drop test' actually is, and that's a common situation with people who disagree with it. Form calls it a field evaluation, not a test. It's straightforward, fairly repeatable, and extremely helpful. Really, it just sounds like you would like to significantly increase the complexity of the 'drop test'. It doesn't need to be any more complicated, it's plenty effective already. Intelligence is making complicated things simple, not the other way around.
 
I've never met Fauci, I'm not an epidemiologist and I don't know a whole lot about vaccine science.

COVID happened while I was a surgical resident. All I know is, COVID was bad, okay. Very ******* bad. Every hospital's ICU was... idk, apocalyptic.
I've never seen so many otherwise healthy, relatively young people die of acute respiratory failure. I became a chest tube specialist for the medical ICU, placing chest tubes left and right for people popping their diseased, wet-tissue-paper lungs on a ventilator. I was exposed to that shit night and day. I got COVID more times than I can ******* count, and all the while, I don't even think about Fauci or whether he lied to us, because I don't care - it doesn't matter to me. That's not what I do.
Says a lot about you. I care, because the COVID fraud ruined people's lives, careers, and generally our way of life. The fraud called a "vaccine" was a joke, and every medical professional worth his salt knows that now and calls it such. The "vaccine" and ventilator protocol killed more people than COVID did. There's a big difference between dying "with" COVID and dying "from" COVID. There's a really good reason it's called "practicing medicine".
 
I remember.
“Stay home until you can’t breathe.”
“Don’t take ivermectin, hydroxychloroquine, or any steroids. Just wait until you can’t breathe, then go to the ER.”
This “protocol” was treated as gospel and any deviation from it was ridiculed.
Of course, we know now that it was actually a propaganda campaign to “prove” that other effective treatments were NOT available for COVID…which was a necessary condition for the experimental mRNA vaccines to be emergency approved. TRILLIONS OF DOLLARS AT STAKE.
Pretty much the entire medical community enthusiastically participated in this hoax.
Then the ONLY government/hospital-approved treatment was Fauci’s pet drug REMDESIVIR…quickly followed by intubation.
It also was a hoax. It was a complete failure…pretty much a death sentence.
To die alone because family was not allowed.
So, am I still pissed off at the medical profession?
You bet I am.
So when a “medical professional” struts into a hunting forum spouting his scientific expertise and pooh-poohing real-life facts? Yeah, that rubs me the wrong way.
Are there still good people in the medical field? Of course.
This isn’t a blanket condemnation.
But an arrogant surgeon spouting his scientific theories just sticks in my craw.
His ego just leaks through it all…
 
I want to start off with: By calling into question the beloved drop test, I'm not (intentionally) trolling here. ...

I look forward to seeing your videos and posts regarding your own durability testing protocols.......
 