Questioning the "Gold Standard Drop Test" and the conclusions of "this scope brand does/doesn't hold zero"

His profession was held out as qualification to impress us…or as a reason to trust his science-based conclusions.
Unless he is willing to distance himself from people like Fauci who lied to us…then I’ll stand by my comments.
I didn’t bring up his profession. He did.

Wouldn’t it be weird if someone came on here and stated that he and his buddies sit around a fire critiquing medical research papers without stating they are all medical doctors? Do you require all medical professionals to state their COVID beliefs before talking to them? It’s sad that these days, someone simply stating they are a doctor triggers us enough to completely discount their opinion.
 
Best I can tell, the drop tests done here are basically just charity work. It's a guy who clearly has some kind of military- or LE-adjacent occupation, who started sharing some proof of his controversial opinions about what does and doesn't work. It snowballed into what it is now, which is still just a dude throwing some gear around to see if he thinks it'll meet his/our needs or not. He only needs to test what he will use. We can all easily test our own stuff the same way, or in a more rigorous way if we'd prefer.

We shouldn't need Form to test our stuff for us. We can just as easily go outside, toss it on the ground, twist the turrets up and down, etc., to make sure it works. The big takeaway I get from his tests is a recalibration in my thinking. Ten years ago, if you'd suggested I should intentionally drop my rifle and see what happens, I'd have said hell no. Now, it's one of the first things I do with a new setup, to make sure it will work when I need it to.
 
I don't think the OP was intending to devalue the drop test so much as he was trying to start a conversation about improving the idea of testing. I also never got the impression that he wanted to do any of the testing. I think he is simply saying that, given the obviously large amount of interest, perhaps there is a way to drastically improve the testing and ultimately have a standardized test.

Maybe the manufacturers pay for an independent lab to administer impact testing using a brand-neutral procedure, where the manufacturers foot the bill and send the lab what they need. That way, you can test N scopes exactly the way you want and decide whether brand X model Y is reliable. The manufacturers already do their own tests, but those will never be unbiased.

Final thought: I wonder if something like this would resonate beyond this forum with the folks who don't shoot very much and have always bought brand X because it's the best, because the dude behind the counter at the gun shop told them so. This is mostly because we place a higher value on this "problem" than the people who will never experience a dropped gun and have a "problem." What I do think this would do is cause manufacturers to strengthen their products across the board.
 
Final thought: I wonder if something like this would resonate beyond this forum with the folks who don't shoot very much and have always bought brand X because it's the best, because the dude behind the counter at the gun shop told them so. This is mostly because we place a higher value on this "problem" than the people who will never experience a dropped gun and have a "problem." What I do think this would do is cause manufacturers to strengthen their products across the board.
I've read a couple of other forums that mention this RS "test," and most of them think it's a joke, FWIW.
 
@QuikFire

A couple of things I'd encourage you to consider...

A scope is part of a system, and bench testing (the super-repeatable application of an exact impact in an exact direction) may or may not replicate the forces encountered in the field, mounted on a rifle. A test where the inputs are very controlled but not actually representative of what needs to be tested is no good. Vortex, Leupold, etc. all have machines that "test" their scopes really precisely, and yet they all shit the bed when put to field use (or even sitting in padded cases on the drive to the range).

The "uncontrolled" drops from knee and waist height (with a proof scope that demonstrates that the rest of the system is solid) demonstrate time after time that the scopes we know to be unreliable will fail to retain zero and the ones we know to have solid designs will retain zero. In some very real sense the drop eval (even with the variations in its inputs) is more repeatable in terms of real life results than the lab/bench tests that the manufacturers use.

Identifying small shifts can be difficult (and isn't always needed). A gun that shoots a 1.5 MOA 30-round group could have a 1 MOA shift in zero (say it shifted 1 MOA right and the next shot landed on the left side of the cone) that would not appear to be a shift. One of the challenges here is that the first shot after the shift can "settle" the erector back to where it was, and subsequent shots are back on the original zero. So a 10-shot group after a drop, to more precisely find the center of the new zero, doesn't always tell us anything (and dramatically increases the round count/cost/time to do the eval).
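FWIW, this masking effect is easy to see in a quick Monte Carlo sketch. This is purely illustrative: the 0.4 MOA per-axis dispersion is an assumed value chosen because it produces roughly 1.5 MOA 30-shot groups, and the 1 MOA shift is hypothetical.

```python
import math
import random

random.seed(1)

SIGMA = 0.4          # assumed per-axis shot dispersion in MOA (~1.5 MOA 30-shot groups)
SHIFT = 1.0          # hypothetical zero shift to the right, in MOA
CONE_RADIUS = 0.75   # half of the 1.5 MOA pre-drop group, in MOA

def shot(center_x=0.0, center_y=0.0):
    """One shot modeled as a draw from a circular bivariate normal around the zero."""
    return (random.gauss(center_x, SIGMA), random.gauss(center_y, SIGMA))

# Simulate many "one confirmation shot after the drop" checks.
trials = 100_000
hidden = 0
for _ in range(trials):
    x, y = shot(center_x=SHIFT)           # zero has silently moved 1 MOA right
    if math.hypot(x, y) <= CONE_RADIUS:   # shot still lands inside the old cone
        hidden += 1

print(f"1 MOA shift masked on a single confirmation shot: {hidden / trials:.1%}")
```

Under these assumptions, a meaningful fraction of single confirmation shots still land inside the old cone, which is exactly why one shot after a drop can't rule out a shift.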

You're not the only one who has expressed criticism, but like the others I don't see any specific methodological suggestions for ways to address those criticisms in a way that keeps the eval doable for the only ones willing to put forth the time and money to do the evals as they currently exist.
 
Vortex recently spent a fair bit of time and money building a recoil simulator. I'd be somewhat surprised if it doesn't have a rifle drop profile by now.
 
@QuikFire

A couple of things I'd encourage you to consider...

A scope is part of a system, and bench testing (the super-repeatable application of an exact impact in an exact direction) may or may not replicate the forces encountered in the field, mounted on a rifle. A test where the inputs are very controlled but not actually representative of what needs to be tested is no good. Vortex, Leupold, etc. all have machines that "test" their scopes really precisely, and yet they all shit the bed when put to field use (or even sitting in padded cases on the drive to the range).

The "uncontrolled" drops from knee and waist height (with a proof scope that demonstrates that the rest of the system is solid) demonstrate time after time that the scopes we know to be unreliable will fail to retain zero and the ones we know to have solid designs will retain zero. In some very real sense the drop eval (even with the variations in its inputs) is more repeatable in terms of real life results than the lab/bench tests that the manufacturers use.

Identifying small shifts can be difficult (and isn't always needed). A gun that shoots a 1.5 MOA 30-round group could have a 1 MOA shift in zero (say it shifted 1 MOA right and the next shot landed on the left side of the cone) that would not appear to be a shift. One of the challenges here is that the first shot after the shift can "settle" the erector back to where it was, and subsequent shots are back on the original zero. So a 10-shot group after a drop, to more precisely find the center of the new zero, doesn't always tell us anything (and dramatically increases the round count/cost/time to do the eval).

You're not the only one who has expressed criticism, but like the others I don't see any specific methodological suggestions for ways to address those criticisms in a way that keeps the eval doable for the only ones willing to put forth the time and money to do the evals as they currently exist.
Another consideration is that the majority of shooters/hunters don't shoot well enough to know whether their scope has shifted or not. Most of the time, shooter incapability is blamed on the equipment instead of the shooter, as that's the easy button. People shoot best on the internet. I've seen it a whole bunch of times guiding hunts for 16 years.
 
Says a lot about you. I care because the COVID fraud ruined people's lives, careers, and generally our way of life. The fraud called a "vaccine" was a joke, and every medical professional worth his salt knows that now and calls it such. The "vaccine" and ventilator protocol killed more people than COVID did. There's a big difference between dying "with" COVID and dying "from" COVID. There's a really good reason it's called "practicing medicine."
The ventilator protocol??? So you have expertise in the management of respiratory failure and ventilators, and you are trained in critical care medicine? Vents were and are a last resort, used when all other rescue measures have failed and, when necessary, with lung-protective ventilation strategies, ultimately titrated to maintain normoxia when the lung tissue is completely garbage. That effectively treated a lot of people, but many still died. Well actually, vents were second to last... when the lungs completely failed, we used every available ECMO circuit to oxygenate the blood by machine. Of course, there was a dismal survival rate with that.
So then where the **** were you when all this was going on? Were you in the ICU, exposed to this stuff every day? Yeah, no. I’m not taking your bait that you understand enough to condemn the management of respiratory failure in the setting of COVID.

Back to using this as an analogy: we had a lot of anecdotes about what was good and bad treatment for COVID, but as with a new rifle scope, we didn’t make blanket statements about a treatment, based on loose, variable protocols and one person doing OK with said treatment, and call it science.
 
Real question here: how many guys shoot year-round? Most rifles get cleaned and put away until next season. I really don't know many guys who dial. The Coues deer hunters and a few elk hunters seem to work the turrets more. The drop test is helpful for sure, but I believe most guys buy brand A because of advertisement, and they are gear junkies. I agree with B_Reynolds_AK: it's about being consistent and making clean kills with acceptable equipment. The drop test saves a ton of time and experimentation on our end.
 
I remember.
“Stay home until you can’t breathe “
“Don’t take ivermectin, hydroxychloroquine, or any steroids. Just wait until you can’t breathe and go to the ER.”
This “protocol” was treated as gospel and any deviation from it was ridiculed.
Of course, we know now that it was actually a propaganda campaign to “prove” that other effective treatments were NOT available for COVID…which was a necessary condition for the experimental mRNA vaccines to be emergency approved. TRILLIONS OF DOLLARS AT STAKE.
Pretty much the entire medical community enthusiastically participated in this hoax.
Then the ONLY government/hospital-approved treatment was Fauci’s pet drug REMDESIVIR…quickly followed by intubation.
It also was a hoax. It was a complete failure…pretty much a death sentence.
To die alone because family was not allowed.
So, am I still pissed off at the medical profession?
You bet I am.
So when a “medical professional” struts into a hunting forum spouting his scientific expertise and poo-pooing real life facts? Yea, that rubs me the wrong way.
Are there still good people in the medical field? Of course.
This isn’t a blanket condemnation.
But an arrogant surgeon spouting his scientific theories just sticks in my craw.
His ego just leaks thru it all…
I’m OK with you taking your aggression out toward me because you’re upset with the medical community.

I just want to clarify: I made no scientific conclusions, nor did I spout any theories. I evaluated the protocol for testing scope durability and its ability to take a [variable] impact and hold zero. Apparently that was not read by you.

You don’t like doctors, that’s OK. But if you’re ever shot, or in a high-speed vehicle accident, or your gallbladder ruptures, your colon ruptures, you have cancer, I’ll still be here for you.

Just so you know, I introduced my profession to prime this conversation with perspective on how I analyze research, not at all to inflate my ego. I really don’t need that, especially from an Internet forum. I’m rather secure with my place in the world. What I hoped for, and have at least partially conjured here, is an intelligent/intellectual dialogue of the merits and limitations of this drop test. I understand now that that might be an impossible ask from some of this community.
 
I want to start off with this: by calling the beloved drop test into question, I'm not (intentionally) trolling here. With that said, I do love the idea behind this test: an attempt at an objective, scientific means of assessing whether a scope is durable enough to do its job of staying on target in the field.
It does help us as consumers with information to guide our gear selection, but thinking critically about it, there is room for error within it. This contemplation was triggered by a good friend's die-hard advocacy and justification of his purchase of scope brand X, because at the end of a long justification... "it held up to the drop test."

I think that the scientific methods of these tests could be improved, and that's what I want to talk about.

First, I am, by trade and training, somewhat of a scientist. I'm not the lab-tech guy in the white lab coat and goggles, but I am a doctor, a surgeon, and I read and critique scientific papers to evaluate the published studies of our profession. We have a little monthly tradition called 'Journal Club' where we sit around, let the libations flow, and discuss a recent medical paper. We praise it for its strengths and contributions to medical/surgical care, then rip it apart for all of its weaknesses in methods, unrepresentative study populations, and poor design. So, without further ado, crack a cold one and let's dice apart these methods and how they might be improved.

First, strengths; well... it accurately exposes the ones that don't hold zero. We have a 100% true positive here: a scope that is dropped and doesn't hold zero has, individually, failed.
However...
The ones that do hold zero: are they then quality scopes? That is what we assume. But did each get dropped on the same impact point, with the same impact force, the same system weight to equilibrate momentum, etc., as the one that didn't hold? Dropping a rifle onto matted, tarped, variable surfaces leaves a lot of room for variability between drops. I.e., this test is not truly repeatable. Without repeatability and consistency in point of impact, force, and momentum, I don't think you can argue that you've effectively identified all scopes that do not hold zero up to X amount of force. There are perhaps scopes that don't hold zero that passed the test.

I was reading another scope review, I think of a Maven RS, with a test I really appreciated; I can't remember where it was, but they dropped a stated-weight (28 oz, IIRC) hammer onto the turret, front housing, rear ocular, focus/parallax adjustment, etc., with a consistent pendulum drop, creating a rather repeatable force/momentum on the rifle/scope system. I thought this was a more repeatable, consistently designed test than dropping a gun onto matted ground.
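For a sense of scale, the impact energy of a pendulum hammer like that is easy to estimate. The 28 oz hammer weight is from that review; the release heights below are assumed values purely for illustration.

```python
# Back-of-the-envelope impact energy for a pendulum hammer test.
OZ_TO_KG = 0.0283495
G = 9.81                      # gravitational acceleration, m/s^2
hammer_kg = 28 * OZ_TO_KG     # ~0.794 kg

for drop_cm in (10, 25, 50):              # assumed pendulum release heights
    h = drop_cm / 100                     # height in meters
    energy_j = hammer_kg * G * h          # E = m * g * h
    speed = (2 * G * h) ** 0.5            # impact speed, v = sqrt(2*g*h)
    print(f"{drop_cm:>3} cm drop: {energy_j:.2f} J at {speed:.2f} m/s")
```

The appeal of the pendulum is that the release height (and therefore the energy and impact direction) is the same every time, which is what makes the test repeatable.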

The results of "holding zero": most of these scopes that "pass" still have some variance off of true zero. So, rather than a binary yes/no, it seems more appropriate to report a value or degree of variance. Why not measure the degree off zero after X ft-lbs of impact to these specific points? We could also find what force is required to put a given scope off zero. I'm certain every scope has its breaking point.
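Reporting a measured shift instead of pass/fail could be as simple as converting the post-drop POI offset on paper into MOA. A quick sketch; the scope names and offsets here are made up for illustration.

```python
import math

def shift_moa(dx_in: float, dy_in: float, range_yd: float = 100.0) -> float:
    """Magnitude of a POI shift, in MOA, from measured x/y offsets in inches.

    1 MOA subtends ~1.047 inches per 100 yards.
    """
    inches_per_moa = 1.047 * (range_yd / 100.0)
    return math.hypot(dx_in, dy_in) / inches_per_moa

# Hypothetical post-drop shifts (inches at 100 yd) instead of a binary pass/fail:
results = {"scope A": (0.2, 0.1), "scope B": (0.9, -0.4), "scope C": (0.0, 0.0)}
for name, (dx, dy) in results.items():
    print(f"{name}: {shift_moa(dx, dy):.2f} MOA off original zero")
```

A table of "X.X MOA shift after impact Y" carries far more information than "held zero: yes/no," and it lets each reader set their own pass threshold.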

The N. The "N" refers to the number of subjects, and it is the biggest factor to consider in study design. It amalgamates group data to represent an outcome from enough subjects to detect a difference. A single scope tested is an N of 1. This, then, is not a study but an anecdote. An N of one with a suspect, variable/inconsistent study method is, to that scope manufacturer, quite an injustice. Drawing the conclusion that X brand's Y line of scopes does or doesn't hold zero says a lot about X brand and definitely influences a lot of consumers. Just looking at the various scope review threads here, they run into the several thousands, even for the more obscure ones. So, my point is that these statements of "not holding zero," while true for that individual scope, may not accurately reflect the 'average' quality of that optic line. Equally, because a scope passed a drop test, given the aforementioned variability, it may overstate that individual optic line's quality/durability and overstate the 'average' quality of that brand. I have to assume that every scope brand produces a few lemons in their lineup. An N of 1 doesn't speak to every scope, or even the average. Get my drift? We need an N greater than 1.
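To put rough numbers on the N-of-1 problem: if N scopes of a model are tested and all pass, an exact one-sided 95% bound (the basis of the statistical "rule of three") still leaves a lot of room for lemons. This sketch assumes independent pass/fail trials, which is itself a simplification.

```python
def failure_rate_upper_bound(n_tested: int, confidence: float = 0.95) -> float:
    """Exact one-sided upper bound on the true failure probability after
    n_tested passes with zero failures: p_upper = 1 - (1 - confidence)**(1/n).
    The "rule of three" approximates this as 3/n for larger n."""
    return 1 - (1 - confidence) ** (1 / n_tested)

for n in (1, 3, 10, 30):
    print(f"N={n:>2}: true failure rate could still be up to "
          f"{failure_rate_upper_bound(n):.0%}")
```

With N=1 the bound is 95%, i.e., a single passing scope tells you almost nothing about the line's lemon rate; even N=30 with zero failures only gets you down to roughly 10%.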

Do any archery hunters watch Lusk Archery Adventures' broadhead reviews on YouTube? Those are great methods. Virtually every test is standardized and repeatable, with minimal chance for error or variability. The data he collects is quantitative rather than binary. I know broadheads are much easier and cheaper to test than riflescopes, but it's a good example of rather repeatable, scientific testing and data collection.

While I appreciate the anecdotes and the spirit/intent of these field tests as really solid information to help guide gear selection, the drop-test results may or may not be representative, and I wouldn't put 100% stock in them.

That's about all I've got for this month's Journal Club.

Given the scope and intent of the drop test, I think the methods are adequate. I appreciate your critiques and hope that we could maybe steer the conversation towards what the development of an industry standard could look like.
 
I’m OK with you taking your aggression out toward me because you’re upset with the medical community.

I just want to clarify: I made no scientific conclusions, nor did I spout any theories. I evaluated the protocol for testing scope durability and its ability to take a [variable] impact and hold zero. Apparently that was not read by you.

You don’t like doctors, that’s OK. But if you’re ever shot, or in a high-speed vehicle accident, or your gallbladder ruptures, your colon ruptures, you have cancer, I’ll still be here for you.

Just so you know, I introduced my profession to prime this conversation with perspective on how I analyze research, not at all to inflate my ego. I really don’t need that, especially from an Internet forum. I’m rather secure with my place in the world. What I hoped for, and have at least partially conjured here, is an intelligent/intellectual dialogue of the merits and limitations of this drop test. I understand now that that might be an impossible ask from some of this community.
You came in hot with a topic that's been beaten to death. Form put the info out there; do with it as you will.
I've found from my personal experience that I've benefited greatly from the facts Form shares.
Too many people get butthurt because their Old Betsy has a scope that did not do so well in the tests, so they find fault with the tests, for no reason.
Much like the disdain expressed at you by couch-potato doctors; in fact, the irony is not lost on me... pot, meet kettle...
 
I don't think the OP was intending to devalue the drop test so much as he was trying to start a conversation about improving the idea of testing. I also never got the impression that he wanted to do any of the testing. I think he is simply saying that, given the obviously large amount of interest, perhaps there is a way to drastically improve the testing and ultimately have a standardized test.

Maybe the manufacturers pay for an independent lab to administer impact testing using a brand-neutral procedure, where the manufacturers foot the bill and send the lab what they need. That way, you can test N scopes exactly the way you want and decide whether brand X model Y is reliable. The manufacturers already do their own tests, but those will never be unbiased.

Final thought: I wonder if something like this would resonate beyond this forum with the folks who don't shoot very much and have always bought brand X because it's the best, because the dude behind the counter at the gun shop told them so. This is mostly because we place a higher value on this "problem" than the people who will never experience a dropped gun and have a "problem." What I do think this would do is cause manufacturers to strengthen their products across the board.
You read per my intention. Thank you.

The thing is, I really value this test; however, I don't exactly know how to use the results, or how strongly I should weigh them in a purchase decision (when many accept them as gospel). Most of the dialogue in this thread is truly helpful here and, as intended, is bringing up counterpoints of validity and other perspectives to consider. That is exactly what I intended.
 
Given the scope and intent of the drop test, I think the methods are adequate. I appreciate your critiques and hope that we could maybe steer the conversation towards what the development of an industry standard could look like.
Exactly. With this info readily available to the consumer.
 
The fundamental problem with the OP is that it misstates the purpose of the drop test. The drop test is not the end of a process. It is simply a useful starting point for picking the right scope. It’s easy for the OP to be confused about that, because many of the supporters of the drop test are confused about it.

Fundamentally, it’s not about brands, but about designs and features. We have enough useful data from testing enough scope designs to know which designs and features stand up better to a side-impact.

And even with that, it’s just part of the system. It goes along with mounting the scope properly, with good rings, on a tight rifle, with a good baseline accuracy and precision measurement.

For me, it’s just a little bit more information to keep me from spending more than I need to on a reliable sighting device for my rifle. I have a better idea of what features tend to make a scope reliable or not and I can look for those features, rather than worrying about whether I can see Pluto with my riflescope. Because I don’t need to shoot at a Plutonian, but I do need to know that my rifle is more likely to still hit at the same spot after driving several hours to go hunting and getting up before dawn on opening day to get into the woods. And the way I get confidence in that is by starting with a proven design and testing the hell out of it week after week.

Too many people want to have faith in their system and the drop test gives them a warm and fuzzy feeling, or it sows doubt and shakes their faith. Everything is “trust, but verify” in the shooting world. Faith is for God, not for the works of mankind.
 
The fundamental problem with the OP is that it misstates the purpose of the drop test. The drop test is not the end of a process. It is simply a useful starting point for picking the right scope. It’s easy for the OP to be confused about that, because many of the supporters of the drop test are confused about it.

Fundamentally, it’s not about brands, but about designs and features. We have enough useful data from testing enough scope designs to know which designs and features stand up better to a side-impact.

And even with that, it’s just part of the system. It goes along with mounting the scope properly, with good rings, on a tight rifle, with a good baseline accuracy and precision measurement.

For me, it’s just a little bit more information to keep me from spending more than I need to on a reliable sighting device for my rifle. I have a better idea of what features tend to make a scope reliable or not and I can look for those features, rather than worrying about whether I can see Pluto with my riflescope. Because I don’t need to shoot at a Plutonian, but I do need to know that my rifle is more likely to still hit at the same spot after driving several hours to go hunting and getting up before dawn on opening day to get into the woods. And the way I get confidence in that is by starting with a proven design and testing the hell out of it week after week.

Too many people want to have faith in their system and the drop test gives them a warm and fuzzy feeling, or it sows doubt and shakes their faith. Everything is “trust, but verify” in the shooting world. Faith is for God, not for the works of mankind.
Appreciate that thought. I’m the type that wants to dissect the nitty-gritty of each component of the system to know that said component is reliable to X degree. Essentially, I want to remove the margin of error from every piece of the system.

What I observed is that people generalized results from X scope to blanket the brand from an N of one. This is purely observational, from a lot of the chatter outside of the individual test, i.e., how many interpret the results.
 
@QuikFire it’s ironic that you started your initial post stating you didn’t want to cause an issue and then snap back at anyone who says anything you don’t like. Maybe that’s just a doctor thing

I am in the scientific field; as an engineer, it is clearly understood that probability underpins any experimental data or study. Actually, the modern world is built on probability plus a factor of safety. It is possible to produce results that are very repeatable, but you need to increase sample size and refine methods, as you said. By doing this, though, you are also assuming that hunting takes place in a vacuum. I understand how a surgical perspective on the world would lead someone to “want” this to be true. Unfortunately, it’s not true.

If someone is willing to perform the repeatable drop test and publish clear results, then as an engineer I am going to use that information as the best available at the time of decision and say thank you.

If the results of my decision are poor, I iterate. Just like the rest of the modern world.


 
Oh come on, like any manufacturer is going to want the fact known that their product finished last in the drop test? That is why what Form does is priceless, because no one else is doing it, IMO.
Almost all durability testing we have is from end users or in this case Form posting on the net. It’s doable, and could be improved into a protocol to produce a data set.
 