Cliff Gray's Podcast with Aaron Davidson

Scopes are made to be used as part of a system. It doesn’t matter in the least what tests they can pass in isolation; if they don’t work as intended in the system, they are of no use.

It’s like saying an auto manufacturer has the best ignition system: it passes durability tests, has been tested for a million starts, and meets all the power requirements. But when that ignition system is placed in the car, half the time the car starts and half the time it doesn’t. Do you think people would tolerate that?

You don't seem to comprehend the difference between testing a scope and testing the entire assembly.
 
Tangential complaint: Spotify kinda jaded me on this podcast, if I'm being honest, and I'm leery about listening to this episode even though it sounds like it could be interesting. This podcast is not in my regular rotation, but I listened to one of Cliff's podcasts on a specific topic some time ago, and now Spotify CONSTANTLY tries to autoplay this podcast on me, which completely turned me off to it. I'll be in the middle of a string of "X" podcast episodes, with more to go, and suddenly it's Cliff's podcast. Spotify has done this with a couple of others in the past, and it just pisses me off; I end up actively avoiding those specific podcasts because of it. I'm probably the anomaly, but this is an instance of the "algorithm" shooting an artist/content creator in the foot by making their content annoying with unprompted plays.
 
You don't seem to comprehend the difference between testing a scope and testing the entire assembly.
I'm sure he comprehends it. The difference is that he's willing to accept that a scope can fail when it's demonstrated on a system where the other components have been demonstrated to be reliable. The crowd that disapproves of the drop testing has not: 1) presented and instituted an alternative, 2) proven whatsoever that the drop testing is invalid, or 3) presented any data that the optics that fail the RS drop tests do hold zero.
 
I'm sure he comprehends it. The difference is that he's willing to accept that a scope can fail when it's demonstrated on a system where the other components have been demonstrated to be reliable. The crowd that disapproves of the drop testing has not: 1) presented and instituted an alternative, 2) proven whatsoever that the drop testing is invalid, or 3) presented any data that the optics that fail the RS drop tests do hold zero.

valid points
 
Good podcast and good discussion here. Thank you Cliff for opening up the discussion on your pod.

Aaron is right about the benefit of eliminating variables when you want to draw conclusions from a test. The fewer variables, the more robust your conclusions can be. But that does not mean his "drop test" machine is a good replacement for the drop tests done here on Rokslide. To begin, there are questions that I did not hear addressed. (My apologies if I missed them while multitasking.)

Does he test direct impacts to the scope body/turrets/bell housing? Or is he subjecting the scope to force through the mount/rings? (Think drop test, but the gun hits the ground rather than the scope. There is still a shock to the scope, but it's not the same as the scope itself hitting the ground.)

Does he test side and top impacts, which would simulate a shooter falling or dropping the gun? Or are they only frontal forces/impacts simulating the recoil of shots from high-recoiling guns?

My general takeaway from the drop tests here on Rokslide is that the passes can be trusted more than the fails. A pass means that everything worked during the test. A fail means that something failed during the test. Full credit to Form, because he puts solid effort into making sure the failures are due to the scope and not failures in the rings, bedding job, etc., but there are still a lot of variables involved.

A scope passing a machine drop test gives me confidence that every scope was subjected to very similar forces and impacts. A drop test performed by hand over 1/2" foam on a variety of surfaces could have a surprising amount of variation in the force applied to each scope. Which part of the scope hits the ground first, and at what precise angle? How hard is that particular patch of dirt compared to compacted snow or any other surface tested on? The differences may be small, but they are still differences and can affect results. A controlled machine test can do a good job of ensuring that every scope experiences the exact same testing forces.

That being said, a controlled machine test does not necessarily give me confidence that a scope will survive drops or falls in the woods. Is the machine designed to simulate a fall or drop? Even if it is, few falls or drops on the side of a mountain will happen just like they would in a controlled machine. Currently, I have more trust in the results of Form's "real-world" test for showing how a scope will respond to an accident on a hunting trip. In my experience, hunting has plenty of accidents and unexpected variables. I want to know how much confidence I can have in my point of aim if I reach to pull my gun off my pack on a mountainside and it slips and hits the ground. Or if I lose my footing on a wet, grassy slope and go down hard. I think Form's drop tests give reasonable insight.

Form and Ryan's willingness to spend their own time and money testing scopes is admirable. They have given a real gift to anyone in the hunting and shooting community who wants insight into a scope's durability beyond the marketing from brands. Not to mention the effort Form puts into publishing the results and putting up with the inevitable public criticism. I hope those guys keep testing, publishing, and pushing the industry toward more durable scopes.
 
I have said elsewhere, and it's true, that I joined this forum (instead of others) because of the drop testing. I've been on other gun forums - still am - and when I went looking for public land elk advice, I chose to get it here because of the drop testing.

I'm trained in science - and statistics - and fully understand not only that the drop tests posted here aren't truly/fully 'scientific', but also, much more importantly, that they are likely the closest we'll ever see to valid tests. The fact that you can't put a confidence interval around the results doesn't mean the results are of no value.

The fixtures it would take to properly test scopes would be spendy, and then, if you really wanted to do it right, you'd want a minimum of ten examples of every scope - ideally three sets of ten, each from a different production batch. It's not going to happen. If you're waiting on scope makers to ship a box of thirty scopes of each model out for testing, you might as well just buy from whoever has the best ads.
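Just to put rough numbers on that sample-size point (my own illustration, not anything from the podcast or the tests here): with a pass/fail test and zero observed failures, an exact one-sided 95% upper bound on the true failure rate after n clean passes is 1 - 0.05^(1/n). A quick sketch in Python:

    # Illustration only: what a clean pass/fail record can actually tell you.
    # Clopper-Pearson one-sided upper bound on the failure rate when 0 of n fail:
    # solve (1 - p)^n = alpha  =>  p_upper = 1 - alpha**(1/n)

    def failure_rate_upper_bound(n_tested: int, alpha: float = 0.05) -> float:
        """Upper bound (default 95%) on the true failure rate after n passes, 0 fails."""
        return 1 - alpha ** (1 / n_tested)

    for n in (1, 10, 30):
        print(f"{n:2d} pass, 0 fail -> true failure rate could still be up to "
              f"{failure_rate_upper_bound(n):.0%}")

    # n=1  -> up to 95% (a single pass proves almost nothing statistically)
    # n=10 -> up to ~26%
    # n=30 -> up to ~9%

That's why the box-of-thirty figure matters: one sample per model is a data point, not a statistic.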

And then, if the fixture is mechanized, you run the risk of teaching makers to 'teach to the test' (using the analogy of public schools teaching kids to perform on standardized tests while still not being able to do math, etc.), or to start building scopes that better handle predictable forces but still might fail in a somewhat more random drop test (or a real-world hunting fall). You can see an example of this with a certain maker who, 40+ years ago, developed a stellar reputation advertising and selling scopes on the merits of their mechanized recoil tests (among other things). In all fairness, they *did* make very recoil-proof scopes in an era when we still thought of scopes as fragile devices - yet they still ended up putting out (very recoil-proof) scopes that perform terribly in drop tests.

Build a lab test fixture and makers will build scopes that withstand the lab tests and still might fail under real use. It's that simple. So I, for one, welcome the somewhat random nature of the drop tests here.

But I also recognize that a scope failing a drop test here doesn't mean everything that maker puts out is necessarily junk. I've got scope brands that have failed miserably here but have never budged for me, even when they were dropped - and some of them have been dropped quite hard, hard enough that I in no way would have criticized them if they'd been 'off'. I have one particular scope that got the tube bent in the late '90s. I hunted with it stuck on a fixed power for maybe a year, then bent the tube (well, the tube/ocular junction anyway) back straight(er) and kept using it, and it's still in use today.

I come from a farm mentality where everything we own eventually breaks to some degree and we don't rush out to buy every replacement part when we can weld the broken pieces back together, wire it shut, or just learn to live without secondary features. Scopes are that way too. Or sometimes a guy wants high-quality glass in a light platform for mountain hunts more than he wants heavy and bulletproof, and he's willing to baby his stuff because he's always treated his stuff carefully - and if he falls, there's a backup rifle back in camp.

There are scope makers who get absolutely shredded in the reviews here. But they still advertise here and their stuff is still sold here at healthy prices in the used forums every day. Because people here realize that the drop tests aren't everything. They're awesome. The people behind them should get some sort of medal from the internet. But they aren't everything.
 
I see value in both test methods: Davidson's uniform tests using a controlled mechanism, and Form's "full system" real-world drops. Davidson's test ensures the same forces are applied to every scope and can control for exact impact locations and forces. Form's test has an element of randomness that is difficult or impossible to replicate in a controlled setting. Both tests are likely to find weaknesses missed by the other.

On second thought, I reject both tests as inadequate. I propose a "throw test" similar to the backpack tests: we throw an entire scoped rifle system tumbling down a rocky embankment and then see how well it performs. (Did the sarcasm come through ok?)
 
If the ignition system tests fine in isolation but doesn’t when installed in the vehicle, there is an integration/interface problem that may or may not involve the ignition system at all.

Except the cheap Chinese replacement ignition system works fine every time and the fancy one fails no matter what car you put it in.

It’s an engineering-dork problem: refusing to see what is evident because all the data says it should work.
 
Tangential complaint: Spotify kinda jaded me on this podcast, if I'm being honest, and I'm leery about listening to this episode even though it sounds like it could be interesting. This podcast is not in my regular rotation, but I listened to one of Cliff's podcasts on a specific topic some time ago, and now Spotify CONSTANTLY tries to autoplay this podcast on me, which completely turned me off to it. I'll be in the middle of a string of "X" podcast episodes, with more to go, and suddenly it's Cliff's podcast. Spotify has done this with a couple of others in the past, and it just pisses me off; I end up actively avoiding those specific podcasts because of it. I'm probably the anomaly, but this is an instance of the "algorithm" shooting an artist/content creator in the foot by making their content annoying with unprompted plays.
This is hilarious (although annoying), because I have the same issue with a South Cox episode. The thought of having to hear me say "Hey guys..." every time I open Spotify 🤦‍♂️
 
Also - the fact that two products are made in the same factory doesn't mean they are the same, or even similar. Parts and subassemblies may be made to different specs, production speed may be slower or faster to ensure greater QC, testing may be more intense for one or the other.

I worked in a factory many years ago. It had nothing to do with shooting, but we had two 'sides'. One did very commercial cheapest/fastest work for numerous clients. The other side did very precise work for a very large, picky customer. Each side was a different world and you'd get in trouble if you went between sides and tried to do things 'one way' instead of the other.
 
Davidson isn't wrong; if you're going to test a scope, you don't mount it in rings on an action in a rifle stock and then drop the whole assembly. While not repeatable in any way, NF seems to be able to test a one-off without much trouble. Maybe smack on the turrets a little bit next time.

Any videos of people impact testing other scopes with a collimator like that? I'm guessing NF hasn't done it, but I'd be interested to see if it matches up with the drop test results.
 
I too have more faith in RS’s drop test, mainly because it’s unbiased and it’s all about functionality under extreme conditions. If no scopes ever passed the RS drop test, I would say the test procedures are too extreme - but scopes do pass the test.

I have seen rifles/scopes get dropped once and lose zero or be rendered unusable, and I have seen rifles in padded cases that lost their zero during transportation to a hunt.

For those who don’t think the RS drop test is valid: ignore the results and believe whatever you want to believe.

For me, it’s more data for making an informed decision before purchasing expensive components. So, I say thank you to Form, Ryan, and the others who are doing the work.
 
@Cliff Gray, let’s focus on what really matters… Biltong recipes.
I built a biltong box, and holy crap is this stuff amazing. Don’t have to worry about what to do with my leftover elk meat anymore.
Exactly. Who cares about a scope not holding zero when you have a grip of biltong in your pack.... ha!

my house perpetually smells like coriander now
 