Questioning the "Gold Standard Drop Test" and the conclusions of "This scope brand does/doesn't hold zero"

QuikFire

FNG
Joined
Oct 4, 2021
Messages
52
I want to start off with this: by calling into question the beloved drop test, I'm not (intentionally) trolling here. That said, I do love the idea behind this test: an attempt at an objective, scientific means of assessing the durability of a scope through a field test of its job of staying on target.
It does help us as consumers with information to guide our gear selection, but thinking critically about it, there is room for error. This contemplation was triggered by a good friend's die-hard advocacy and justification of his purchase of scope brand X, because at the end of a long justification... "it held up to the drop test."

I think that the scientific methods of these tests could be improved, and that's what I want to talk about.

First, I am by trade and training somewhat of a scientist. I'm not the lab tech guy in the white lab coat and goggles, but I am a doctor, a surgeon, and I read and critique scientific papers to evaluate the published studies of our profession. We have a little monthly tradition called 'Journal Club' where we sit around, let the libations flow, and discuss a recent medical paper. We praise it for its strengths and contributions to medical/surgical care, then rip it apart for all of its weaknesses in methods, unrepresentative study population, and poor design. So, without further ado, crack a cold one and let's dice apart these methods and how they might be improved.

First, the strengths. Well... it accurately exposes the ones that don't hold zero. We have a 100% true positive here: the ones that are dropped and do not hold zero have, individually, failed.
However...
The ones that do hold zero: is each then a quality scope? That is what we assume. But did it get dropped on the same impact point, with the same impact force, and the same system weight to equilibrate momentum, etc., as the one that didn't hold? Dropping a rifle onto matted, tarped, variable surfaces leaves a lot of room for variability between drops. I.e., this test is not truly repeatable. Without repeatability and consistency in POI, force, and momentum, I don't think you can argue that you've effectively identified all scopes that fail to hold zero up to X amount of force. There are perhaps scopes that don't hold zero that passed the test.

I was reading another scope review, I think of a Maven RS, with a test I really appreciated; I can't remember where it was, but they dropped a stated-weight (28 oz, IIRC) hammer onto the turret, front housing, rear objective, focus/parallax adjustment, etc., with a consistent pendulum drop, creating rather repeatable force/momentum on the rifle/scope system. I thought this was a more repeatable, consistently designed test than dropping the gun onto matted ground.
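For what it's worth, the physics is what makes a pendulum rig repeatable: the impact energy depends only on the hammer mass and how far the head falls, so every swing delivers the same hit. A back-of-the-napkin sketch (the 28 oz hammer is from the review; the arm length and release angle are made-up numbers, since I don't remember the actual rig):

```python
import math

def pendulum_impact(mass_kg, arm_m, release_angle_deg):
    """Energy, speed, and momentum at the bottom of a frictionless pendulum swing."""
    g = 9.81  # m/s^2
    # Height the hammer head falls from release to the bottom: h = L * (1 - cos(theta))
    h = arm_m * (1 - math.cos(math.radians(release_angle_deg)))
    energy_j = mass_kg * g * h       # kinetic energy at impact (joules)
    speed = math.sqrt(2 * g * h)     # impact speed (m/s)
    momentum = mass_kg * speed       # momentum delivered if the hammer fully stops
    return energy_j, speed, momentum

# 28 oz hammer (~0.794 kg) on an assumed 0.5 m arm released from 90 degrees
e, v, p = pendulum_impact(0.794, 0.5, 90)
```

The point being: fix the mass, arm, and release angle, and every impact carries the same energy, unlike a rifle tumbling onto a tarp.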

The results of "holding zero": most of these scopes that "pass" still show some variance off of true zero. So rather than a binary yes/no, it seems more appropriate to report a value or degree of variance. Why not measure the degree off zero after X ft-lbs of impact to these specific points? We could also find what force is required to put a given scope off zero. I'm certain every scope has its breaking point.
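That "find the breaking point" idea is basically a step-up test: hit the same point with increasing energy until the measured zero shift exceeds some threshold. A sketch of the logic only (the energy ladder, the 0.5 MOA threshold, and the demo scope's failure point are all invented for illustration):

```python
def find_breaking_energy(shift_after_impact, start_j=2.0, step_j=2.0, max_j=40.0,
                         threshold_moa=0.5):
    """Step up impact energy until the measured zero shift exceeds a threshold.

    shift_after_impact: callable taking impact energy in joules and returning
    the measured zero shift in MOA (a real test would re-shoot a group after
    each impact).
    """
    energy = start_j
    while energy <= max_j:
        if shift_after_impact(energy) > threshold_moa:
            return energy  # first energy level that knocked the scope off zero
        energy += step_j
    return None  # held zero through the whole ladder

# Hypothetical scope that lets go somewhere above 10 J
demo = find_breaking_energy(lambda e: 0.0 if e <= 10 else 1.2)
```

That turns "pass/fail" into a number you can compare between scopes.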

The N. The "N" refers to the number of subjects, and N is the biggest factor to consider in study design. It amalgamates group data to represent an outcome from enough subjects to detect a difference. A single scope tested is an N of 1. This, then, is not a study but an anecdote. An N of one, with a suspect, variable/inconsistent study method, is quite an injustice to that scope manufacturer. Drawing the conclusion that X brand's Y line of scopes does or doesn't hold zero says a lot about X brand, and definitely influences a lot of consumers. Just looking at the various scope review threads here, they run into the several thousands, even for the more obscure ones. So my point is that these statements of "not holding zero," while true for that individual scope, may not accurately reflect the 'average' quality of that optic line. Equally, because a scope passed a drop test, given the aforementioned variability, it may overstate that individual optic's durability and overstate the 'average' quality of that brand. I have to assume that every scope brand produces a few lemons in its lineup. An N of 1 doesn't speak to every scope, or even to the average. Get my drift? We need an N greater than 1.
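To put a number on the N-of-1 problem: if (purely hypothetically) 1 in 10 scopes in a line were lemons, a single tested sample can only ever report 0% or 100% failure, never the true 10%. A quick simulation sketch (the 10% lemon rate is an assumption for illustration, not a claim about any brand):

```python
import random

random.seed(42)  # fixed seed so the demo is reproducible

def single_scope_verdict(lemon_rate):
    """Drop-test one randomly drawn scope: True = held zero, False = failed."""
    return random.random() >= lemon_rate

def estimated_failure_rate(lemon_rate, n):
    """Test n scopes from the same line and report the observed failure fraction."""
    fails = sum(not single_scope_verdict(lemon_rate) for _ in range(n))
    return fails / n

# Suppose (hypothetically) 10% of a line are lemons.
one = estimated_failure_rate(0.10, 1)      # an N of 1: always 0.0 or 1.0
many = estimated_failure_rate(0.10, 1000)  # a large N: converges near 0.10
```

With N = 1 the verdict is all-or-nothing; only a bigger N starts to reflect the actual lemon rate of the line.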

Do any archery hunters watch Lusk Archery Adventures' broadhead reviews on YouTube? Those are great methods. Virtually every test is standardized and repeatable, with minimal chance for error or variability. The data he collects is quantitative rather than binary. I know broadheads are much easier and cheaper to test than riflescopes, but it's a good example of rather repeatable, scientific testing and data collection.

I can appreciate the anecdotes and the spirit/intent of these field tests as really solid information to help guide gear selection. As to the drop test, though, it may be representative or it may not, and I wouldn't put 100% stock in it.

That's about all I've got for this month's Journal Club.
 
Until you put up your money and your time doing your "scientific" tests, your opinion means nothing.

When you drop your rifle in the field, do you drop it in the exact same way, on the exact same spot, with the exact same force?

No. So what does it matter? Then it will just turn into "well, if your rifle gets dropped a different way, then the tests don't mean anything....." They show patterns. Believe in them or don't. If you're confident that your Leupold will hold zero, get on with your life, or go collect a free trip like has been offered multiple times.

Take them for what you paid for them.
 
The results of "holding zero": most of these scopes that "pass" still show some variance off of true zero. So rather than a binary yes/no, it seems more appropriate to report a value or degree of variance. Why not measure the degree off zero after X ft-lbs of impact to these specific points? We could also find what force is required to put a given scope off zero. I'm certain every scope has its breaking point.
Just to ask about this point… as I may be misunderstanding you: do your rifles and scopes not have variance? It sounds like you are saying the bullet is expected to go to the exact point of aim every shot. I've never seen or heard of those results before. So is the variance you are referencing a bullet that goes outside of the expected area of impact but is ignored, or just stated as being in the area of expected impact, thereby misleading readers into thinking there wasn't a shift?

If that’s what you’re referencing, I’ve not noticed that happen. Do you have an example you can link to?
 
I think you're making a valid observation about not having enough data when testing just one scope from a brand/model line. But what is the solution? 3 scopes, 5, 20? And how does this get funded, since funding/access to a reasonable number of scopes would be cost prohibitive? Probably the best thing to focus on is what you brought up: the method by which they are tested, with the example of the hammer impacts. That would at least reduce slight variables: for example, you drop a scope and it lands evenly on the front bell/objective, vs. you drop the scope and it hits the "corner" of the bell/objective at a slight angle. I would think those impacts are very different even though they are both impacting the front of the scope.
 
Until you put up your money and your time doing your "scientific" tests, your opinion means nothing.

When you drop your rifle in the field, do you drop it in the exact same way, on the exact same spot, with the exact same force?

No. So what does it matter? Then it will just turn into "well, if your rifle gets dropped a different way, then the tests don't mean anything....." They show patterns. Believe in them or don't. If you're confident that your Leupold will hold zero, get on with your life, or go collect a free trip like has been offered multiple times.

Take them for what you paid for them.
1. Fair.

2. No, agreed, but then it's variable anecdotes: "I dropped my rifle off a cliff and it still shot good."
I don't think variability was at all the intent of publishing standardized reviews of rifle scopes. Correct me if I'm wrong, but it seems the reviews are attempting to approach an objective, standardized means of determining failure to hold zero. I'm highlighting that there is room for error. It could be tightened up, assuming that is what the test is going for.

3. No, of course not. But are you upset that I just questioned what you accept as gospel? What is this assumption that I own any Leupold products? I own shit optics, and am shopping for one and using this for some guidance, but noticing that there's potential for improvement. Look, if a guy is going to post 200 reviews of various rifle scopes using the same methods, but with ******* variability, and doesn't think a single asshole like me is going to say, "hmm, well, did you think about...." Sure, it questions the gospel that led you to buy your X that passed his ******* drop test. I'm glad for you; you probably bought a great optic. It's okay. I'm not saying you didn't.

4. Fair. They're helpful.
 
Just to ask about this point… as I may be misunderstanding you: do your rifles and scopes not have variance? It sounds like you are saying the bullet is expected to go to the exact point of aim every shot. I've never seen or heard of those results before. So is the variance you are referencing a bullet that goes outside of the expected area of impact but is ignored, or just stated as being in the area of expected impact, thereby misleading readers into thinking there wasn't a shift?

If that’s what you’re referencing, I’ve not noticed that happen. Do you have an example you can link to?
Basically I'm suggesting... refine the impact force and POI, and then, based on that, measure how far the group average shot from the previous zero (group). I imagine some deviate more than others. Basically, I'm overthinking data collection, not suggesting there is no variance in the system (i.e., grouping).
 
Honestly, anything coming from someone in the medical profession right now sticks in my craw. Scientific conclusions? Like that whole Covid origin and mRNA vaccine safety BS pushed by the whole medical community?
I'm sorry, but anything from a doctor these days doesn't impress me much.
No credibility.
 
Basically I'm suggesting... refine the impact force and POI, and then, based on that, measure how far the group average shot from the previous zero (group). I imagine some deviate more than others. Basically, I'm overthinking data collection, not suggesting there is no variance in the system (i.e., grouping).
I hear ya. So a full zero group of 10+ shots after the drop tests.

I think it becomes analysis paralysis at some point with this, though. Not the N part, but the rest of it. Trying to fish out that 0.07 mil shift: was it because the action shifted in the stock, or the rings on the action, or the scope in the rings, or the scope itself? Variables can only be controlled so much, for so much detail.
 
1. Fair.

2. No, agreed, but then it's variable anecdotes: "I dropped my rifle off a cliff and it still shot good."
I don't think variability was at all the intent of publishing standardized reviews of rifle scopes. Correct me if I'm wrong, but it seems the reviews are attempting to approach an objective, standardized means of determining failure to hold zero. I'm highlighting that there is room for error. It could be tightened up, assuming that is what the test is going for.

3. No, of course not. But are you upset that I just questioned what you accept as gospel? What is this assumption that I own any Leupold products? I own shit optics, and am shopping for one and using this for some guidance, but noticing that there's potential for improvement. Look, if a guy is going to post 200 reviews of various rifle scopes using the same methods, but with ******* variability, and doesn't think a single asshole like me is going to say, "hmm, well, did you think about...." Sure, it questions the gospel that led you to buy your X that passed his ******* drop test. I'm glad for you; you probably bought a great optic. It's okay. I'm not saying you didn't.

4. Fair. They're helpful.

I do get what you're saying. But I'm trying to convey that you're asking someone who is spending countless hours (for free), "hey, can you spend some more time and money to appease me and others who question your methods...."

I own some scopes that have "passed" the tests, some that haven't been tested, some Arkens, whatever.

I never said they're the "be all, end all" of scope testing, but unfortunately, they're what we have, and I appreciate the time and money that have gone into them. It's more than the rest of the industry is doing.
 
@QuikFire

The Drop Test establishes trends that save the rest of us a lot of money, time and missed or wounded animals.

They do not need to be perfect. What has happened though is that the scopes that Pass, continue to prove reliable in the field by those that have based choices around the test.

The scopes that do Not Pass, continue to fail at a much higher rate than the Pass scopes in real life.

TL;DR: Anecdotal observations from the field confirm what the drop tests have shown.
 
@QuikFire , welcome to the slide.
Beware of the dogmatic, they’ll eat you up!

I like the drop tests for what they are. Not perfect of course, but the ideal is to test your own scopes this way. I have. That way you know a little something about your scope and rifle system. It’s kind of like orthopedic testing…it gives you an idea, sometimes a really good idea, and other times the diagnosis of what is going on, depending on the test. But there is still the possibility of false positives and false negatives.
 
Honestly, anything coming from someone in the medical profession right now sticks in my craw. Scientific conclusions? Like that whole Covid origin and mRNA vaccine safety BS pushed by the whole medical community?
I'm sorry, but anything from a doctor these days doesn't impress me much.
No credibility.
That's fair. I remove diseased, dead organs and cancer, and stop the process of death, with a knife. Lump me in with everything you don't believe in.
 
Honestly, anything coming from someone in the medical profession right now sticks in my craw. Scientific conclusions? Like that whole Covid origin and mRNA vaccine safety BS pushed by the whole medical community?
I'm sorry, but anything from a doctor these days doesn't impress me much.
No credibility.

Don't quite understand why you have to go after a guy's profession. Your post is the equivalent of an animal rights activist saying all hunters are poachers because some hunters poach....
 
@QuikFire

The Drop Test establishes trends that save the rest of us a lot of money, time and missed or wounded animals.

They do not need to be perfect. What has happened though is that the scopes that Pass, continue to prove reliable in the field by those that have based choices around the test.

The scopes that do Not Pass, continue to fail at a much higher rate than the Pass scopes in real life.

TL;DR: Anecdotal observations from the field confirm what the drop tests have shown.
Solid counterpoint; I appreciate that reasoning and summation.

(Even done without personally insulting me, the medical community, or claiming I'm a Leupold loyalist fanboy.)
 
Don’t quite understand why you have to go after a guy’s profession. Your post is the equivalent of an animals rights activist saying all hunters are poachers because some hunters poach….
541(Hunter)... an Oregon reference?
 