Questioning the "Gold Standard Drop Test" and the conclusions of "this scope brand does/doesn't hold zero"

It's not meant to be a controlled lab experiment. It's a rough approximation of the level of abuse a scope might receive through its full service life, compressed into a short period. It's done on representative examples and generally qualified with other observed experience with those models. Is it perfect? Probably not, but it is a strong signal.
 
@Formidilosus, earlier in the thread someone mentioned you have access to machining at SRS and UM.

I do not have access to machining. The person that wrote that does not know what they are talking about, or is being purposely deceitful, like several people in this thread.


Which makes me wonder. How much would a device that drops scopes more uniformly onto uniform surfaces cost to make?

Why do you believe that it is difficult to drop scopes the same way each time?



And then the cost of a collimator? Would those items be cheaper than the amount you guys are spending on ammo? This isn't a knock on the current field evals.

A collimator is not a replacement for a rifle system no matter the cost- and it would be expensive.

Being that the OP doesn’t actually want a real discussion- what are the actual variables that you believe exist in the current field evals? Bear in mind, for those to be “variables”, it must cause inconsistent results.
 
Agree, which is why I use my gear 100+ days a year in the field hunting, put hundreds of miles on it in ATVs, SxSs, etc., and take these "controlled" tests with a grain of salt at best.

For those who take them as the gospel (which is perfectly fine by me too), why don't you throw around your rangefinder and see if it holds up? How about your binos and/or spotter?
Let me guess you didn't make varsity in speech and debate.
None of those items are subject to long term rifle recoil, which is what the scope evaluations try to replicate. So dropping them would be testing them for something they would never experince.
 
Let me guess you didn't make varsity in speech and debate.
None of those items are subject to long term rifle recoil, which is what the scope evaluations try to replicate. So dropping them would be testing them for something they would never experince.
Most people never throw their rifle/scope on the ground either. Obviously you failed Introduction to Logic 101 and spelling. I have years of professional baseball under my belt, does that count?
 
I do not have access to machining. The person that wrote that does not know what they are talking about, or are being purposely deceitful- like several people n this thread.




Why do you believe that it is difficult to drop scopes the same way each time?





A collimator is not a replacement for a rifle system no matter the cost- and it would be expensive.

Being that the OP doesn’t actually want a real discussion- what are the actual variables that you believe exist in the current field evals? Bear in mind, for those to be “variables”, it must cause inconsistent results.
Uniform ground surface. Better control of the angle at which the turret hits the ground. More exact measurement of the force hitting the turrets. Higher probability of catching a turret that slips one click right followed by a shot that is just a little left.

To be clear, I’m a huge fan of the field eval. I’ve made multiple purchases based on them. But I do wonder what the cost spread between the field eval and a lab experiment would be. Didn’t you say you guys had spent tens of thousands of dollars in ammo?
 
Uniform ground surface.

Better control of the angle at which the turret hits the ground.

More exact measurement of the force hitting the turrets.


Again: for it to be a variable that needs correcting, it must cause inconsistent results. If it doesn't cause inconsistent results, then it isn't causing inconsistent results, and doesn't need correcting. What results do you believe are inconsistent?




Higher probability of catching a turret that slips one click right followed by a shot that is just a little left.

What?
 
First, I am, by trade and training, somewhat of a scientist. I'm not the lab tech guy in the white lab coat and goggles, but I am a doctor, a surgeon, and I read and critique scientific papers to evaluate the published studies of our profession.
I don't mean to be offensive but I don't think you are "somewhat of a scientist" based on that description. And professing that you are requires quite the ego. That's like someone with a Youtube channel dedicated to removing ingrown hairs saying that they're a surgeon. Or someone changing their own spark plugs and calling themselves a mechanic.
 
Sorry. Poor grammar. I’m at work.

A collimator would remove the human element of shooting when checking for zero shift. I was trying to give an example of how a shift could be missed: if a scope shifts one click and the next shot lands off in the opposite direction because of the human element, and that shot happens to cancel the shift out (does that actually happen?), it might not get noticed.
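To put a rough number on that masking idea, here's a quick Monte Carlo sketch. The 0.1 mil click value and the 0.3 mil shooter standard deviation are assumptions for illustration, not figures from the evals:

```python
import random

def p_shift_masked(shift_mil=0.1, shooter_sd_mil=0.3, trials=100_000, seed=1):
    """Estimate how often a small zero shift is hidden by shooter dispersion.

    The scope's zero moves shift_mil to the right; the confirming shot
    prints at shift + a random shooter error. If the shot still lands at
    or left of the original zero, the shift goes unnoticed on that shot.
    """
    random.seed(seed)
    masked = 0
    for _ in range(trials):
        shot = shift_mil + random.gauss(0.0, shooter_sd_mil)
        if shot <= 0.0:  # shot prints at or left of the old zero
            masked += 1
    return masked / trials
```

With those assumed numbers, a single confirming shot lands on the "wrong" side of the old zero roughly a third of the time, which is why the evals confirm with groups rather than single shots.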

I don’t want my tone of voice to get thrown in with some of the more obnoxious critics in this thread. I am very grateful for the field evals and appreciate how much time and money has gone into them.

I’m wondering if a lab experiment might get taken more seriously than field evals by engineers at scope companies. If the cost spread is too high then it’s not worth it. But if it’s similar because of the cost of ammo, maybe it’s worthwhile. I’m assuming that’s why Nightforce does it that way, no?
 
I did (though I skimmed through the parts where people were taking unfair shots at you because of your profession)

Did you read the thread where the eval is fully explained?


Fwiw, I did note from the OP and subsequent posts by the OP that he hadn't read and didn't understand the evals. There is zero question that the protocol used is insufficient to pass a "scientific" threshold as you'd find in a standardized test such as those used by ASTM, UIAA, etc.; it simply isn't consistent enough. In that regard, the OP is 100% correct. I just don't think that's really the goal of the evals, or at least not the entire goal.

Some relevant questions though, are:
1) What is the actual goal of the evals? Is it to create an "industry standardized test"? Or something different?
2) If a truly standardized, engineer/scientist-approved test were designed, what would we as consumers LOSE? I.e., is there value in the "quick and dirty" method that would not exist in the "sciency" method with regard to shooter education, troubleshooting, etc.?
3) If a perfect test were designed, who will do it, who will pay for it, and what will ensure that manufacturers do more than ignore it? Look at the Pew Science tests on suppressors and the controversy they create because it's a pay-to-play model where the "consolidated ranking" is based on a proprietary weighting, even if the raw data is public.

So imo the "criticism" is entirely valid IF you are approaching it like an "official" standard. The problem with that is that trying to make the evals "fully scientific" and 100% controlled/quantifiable would necessarily AUGMENT the evals, not REPLACE them. So if you are trying to offer actual solutions, as opposed to just trying to poke holes, you should have specific solutions, be specific about the goal, demonstrate an understanding of the current protocol that attempts to address each variable, and address both the positives and negatives of making that change.
More of the dialogue I’m trying to conjure here, thank you.

The evals, (per their intent- which I do better understand after the discussion early in this thread) are sufficient to their implied intent.

I provided an example in the OP of a type of impact/zero-hold test that could be measured (newtons of force), precise (point of impact), and easily repeatable from system to system. These are my suggestions toward, perhaps, a different test, and IMO one that removes variability/confounding.
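As a back-of-envelope illustration of why "newtons of force" is hard to pin down for a drop: the impact energy is easy to compute, but the average stopping force depends almost entirely on how far the landing surface compresses. The rifle mass, drop height, and crush depth below are assumed values for illustration, not figures from the evals:

```python
def drop_impact(mass_kg, height_m, crush_depth_m, g=9.81):
    """Back-of-envelope impact numbers for a rifle drop.

    Returns (impact energy in joules, average stopping force in newtons).
    The force figure assumes all the kinetic energy is absorbed over
    crush_depth_m (how far the surface/pad compresses), so a softer
    surface means a dramatically lower force at the same drop height.
    """
    energy_j = mass_kg * g * height_m          # E = m * g * h
    avg_force_n = energy_j / crush_depth_m     # F = E / d
    return energy_j, avg_force_n
```

For an assumed 4.5 kg rifle dropped 0.91 m onto a surface that gives 5 mm, that works out to about 40 J and an average force on the order of 8,000 N; double the crush depth and the force halves. That sensitivity to the surface is exactly what a fixture would have to standardize.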

Another one of the statements I made in the OP was that this test has been referred to in many threads by many people as the "gold standard"; if we're going to call anything a gold standard, we ought to improve the test and iron out variability.

Now, I understand that being a gold standard, or controlling for variability, might not be Form's intent, and that's perfectly fine. Again, I've made countless references to the value I place on the complete evaluation, I appreciate everything for what it is, and I have an even greater appreciation now that I better understand Form's intent.

The "guest of honor" comment was made neither in sarcasm nor in jest; it was a welcoming of his entrance into the conversation that we were all carrying forward.

Anyway, enough for now. Gotta get back to work.
 
I think this is a good reason Form keeps his professional career and his identity a secret. I appreciate this about him. He doesn't start off saying "I'm this profession or that." He lets the information and facts qualify his findings, not his profession. I think doctors are so used to people believing them simply for being doctors that they feel the need to tell everyone.

That being said, I am not going to say what I do for a living.... :ROFLMAO:
I think we should all take the time to tell everyone a little bit about ourselves and what we do...

My name is Cyril and I like to party.
 
Wouldn’t it be weird if someone came on here and stated that he and his buddies sit around a fire critiquing medical research papers without stating they are all medical doctors?
Starting off with credentialism in an unrelated field was going to be an odd move no matter how he framed it. Claiming that reading some papers makes him "somewhat of a scientist" is even worse when paired with acting like he's passing knowledge down to illiterate peasants. As opposed to just fundamentally misunderstanding the goal of the testing in the first place.
 
Let me guess you didn't make varsity in speech and debate.
None of those items are subject to long term rifle recoil, which is what the scope evaluations try to replicate. So dropping them would be testing them for something they would never experince.
To be fair...

Any RF device has a zero: the relationship of the reticle to the laser. So yes, an RF device can lose zero, and in fact most of them seem to come improperly zeroed... like my $3650 Leica Geovid Pro AB+'s 🤦
 
A more repeatable protocol would be better
Higher sample size would be better

But to me the problem isn't understanding tests or statistics, it's funding.
Who is going to fund potentially destructive testing of N>35 of dozens of riflescope SKUs, including the human time to test, analyze, and report that data?
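For a sense of scale on the sample-size question, the statistical "rule of three" gives the best you can claim from an all-pass result (the sample counts here are illustrative, not from any actual eval):

```python
def max_failure_rate_rule_of_three(n_samples):
    """Approximate upper 95% bound on the true failure rate when
    n_samples units are tested and ALL pass (zero failures).

    The classic 'rule of three': with n passes and no failures, the
    true failure rate is below roughly 3/n at 95% confidence. A single
    failure is damning evidence, but a pass from a small sample can
    only certify a fairly loose upper bound.
    """
    return 3.0 / n_samples
```

So even N=35 all passing only bounds the failure rate below about 8.6%; one sample passing bounds almost nothing, which is why the evals lean on failures plus accumulated user reports rather than claiming a pass rate.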

Probably only attainable if an entity like PewScience, ProjectFarm, C_Does, etc took this on as a business and charged consumers for the results. Would be grateful if someone did, and I might throw something like $100 at it in order to drive improvements in the riflescope industry. But how many others would?


Happy Easter
We need a building. We need it to have enough room to house a bunch of testing equipment. We need it to have an indoor 100 yard range. We need some vice mounted "proof barrels/guns" fixtures like what hornady runs.

I'm thinking minimum $2-3 million for the location. Maybe $1-2 million for the lab equipment. Probably another couple million for the samples. And a few full-time employees...

Let's call it 5-7 million for ease.

Or we can drop some scopes on a mat, and if they fail, they fail. I still can't imagine spending the amount they have just doing that.

A conclusion of "it fails" or "this one sample seems to work until proven otherwise" is pretty decent. Add in some samples and it seems pretty clear that Leupolds aren't the way to go. Trijicons, SWFA, and NF might be.
 
Most people never throw their rifle/scope on the ground either. Obviously you failed Introduction to Logic 101 and spelling. I have years of professional baseball under my belt, does that count?
So you were able to understand the word I completely misspelled, but don't have the reading comprehension for this sentence: The drop tests replicate the cumulative effect of recoil.
It's not about people throwing their stuff on the ground. But you are always a contrarian on here, without regard for the details or facts. Carry on Bob Uecker. :)
 
I don't mean to be offensive but I don't think you are "somewhat of a scientist" based on that description. And professing that you are requires quite the ego. That's like someone with a Youtube channel dedicated to removing ingrown hairs saying that they're a surgeon. Or someone changing their own spark plugs and calling themselves a mechanic.
Are you saying that doctors aren’t scientists?

If so, who has produced all the clinical data and randomized controlled trials, and compiled the evidence that shapes current medical practice, i.e., evidence-based medicine? If not science, by what process?
If you don't think doctors are scientists, then you don't understand medicine at all.
Again, while no one here will likely accept this, it isn't about my ego; rather, it's to frame the way I evaluate and critique tests and protocols, and to introduce this discussion in an (attempted) collegial and respectful format, such as a journal club. You guys are interpreting it as ego boosting.
 
So you were able to understand the word I completely misspelled, but don't have the reading comprehension for this sentence: The drop tests replicate the cumulative effect of recoil.
It's not about people throwing their stuff on the ground. But you are always a contrarian on here, without regard for the details or facts. Carry on Bob Uecker. :)
Rifles fall over or are dropped all the time. I want my rifle to be able to handle that. Mine fell over pretty hard at an NRL match partway through day 1 (maybe I shouldn't have leaned it against that side-by-side, but it happened). I really wasn't worried about it (NX8 in UM Tikka rings). Placed 5th 🤷‍♂️. Zero was fine. A scope handling recoil is the bare minimum; pretty sure the drop tests are there to replicate drops too.
 