Cliff Grays Podcast with Aaron Davidson

Some companies and individuals are doing their best to move the gun industry forward. It's a tiring and loathsome effort when you have to repeat yourself ad nauseam to different "first-time listeners". I learned this over 15 years ago when I started instructing hunting-focused and long-range shooting classes. It's even worse today with the internet exploding.

The words spoken can then come across in different “fashions” depending on the listener, and whatever preconceived notions they may have relating to the overall “gun topic” at hand. Again, the internet is mostly a curse here as it’s easy to spread nonsense with many believing it, and then never trying it for themselves.

Regardless of your opinion of the material, or how the material is presented, cheap shots from the nosebleed seats of a keyboard at someone you don't know are classic internet nonsense.

@Aaron Davidson could be the biggest douche bag on the planet, but if his shit works, that’s all that matters to me. And probably all that matters to him too.
 
Regardless of whether you call them custom, small-batch production, etc., R&D is a cost the same way a machine is a cost, and that $ has to be built into the price of each item sold. The reason small gunsmiths can turn out super good work at a "reasonable" price is in part because they don't have to invent the product design and the manufacturing process for each item. The reason a Tikka is so inexpensive is because the R&D and production processes are spread across so many units, and because they have minimized the number of parts, etc., so they can do even larger production batches.

A company like Gunwerks that does a lot of R&D, plus has to put that into production and buy tooling, etc., AND doesn't sell a lot of units... well, the R&D cost and setup costs are simply divided into fewer items. If you've designed and built your own stock, your own action, your own this and your own that, the cost per item gets pretty high compared to simply putting something together from existing off-the-shelf parts from another vendor. It's exactly the reason why, almost universally, there are diminishing returns as you go up in price in any given product category, i.e. you have to pay 2x as much to get a product that's 10% better. Not everyone is willing to pay that. That's OK; Gunwerks and others like them aren't trying to sell a rifle to everyone.
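The amortization point above can be sketched with a few lines of arithmetic. All the dollar figures and unit counts below are invented for illustration; they are not the actual costs of Gunwerks, Tikka, or anyone else:

```python
# Hypothetical illustration of spreading fixed R&D/tooling cost across production volume.
# Every number here is made up for the example.

def unit_cost(rd_and_tooling: float, marginal_cost: float, units: int) -> float:
    """Fixed development cost amortized per unit, plus the per-unit build cost."""
    return rd_and_tooling / units + marginal_cost

# A boutique maker: heavy R&D, small batch.
boutique = unit_cost(rd_and_tooling=2_000_000, marginal_cost=1_500, units=1_000)

# A mass producer: similar R&D, spread across far more rifles.
mass = unit_cost(rd_and_tooling=2_000_000, marginal_cost=800, units=100_000)

print(f"boutique: ${boutique:,.0f}/unit")  # $3,500/unit
print(f"mass:     ${mass:,.0f}/unit")      # $820/unit
```

Same fixed cost, two orders of magnitude difference in volume: the amortized share per rifle goes from $2,000 to $20, which is the whole argument in one division.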
You're a rather verbose fella. Nevertheless, I'm not sure what the word salad has to do with my post. I'm pretty sure I said their prices aren't outrageous, as many seem to think.
 
@Quandary, guilty as charged. Sorry, hard to change at this point. Reminds me of a former colleague who refused to eat green things. "Salad? That's what FOOD eats!". Apologies for the word salad.

I was only trying to emphasize what you said, as I think some could have seen the price differences ($7k for Gunwerks, $6,400 for custom, and $4,500 for a "DIY/prefit custom") and said it seemed like a huge markup. My point was only that whatever markup is there isn't just profit margin.
 
Yes......the prototypical little man with Napoleon syndrome. Like I said before, he relies on lazy, ignorant, lardass lawyer types to believe his hype.
Dude, I think you made your feelings clear on your first post.

At this point it’s just becoming annoying and making you look like you have a syndrome.
 
Got ya! That makes sense. Thank you for clarifying!!
 
I think the point Aaron was making was not specifically to invalidate the drop test itself; he was a fan, the way I heard it. The point he was making, as I understand it, is that the product of the drop test is a pass/fail on a scope, yet it's not a scope-specific test but rather a system-level test. Said another way, we don't say "the scope lost zero, so the Tikka failed", or "the scope lost zero, so the ammo lot failed", or "the scope lost zero, so the UM rings failed". The product is the pass/fail on a scope when many of the variables aren't controlled. His beef, again as I understand it, is that it could deep-six a manufacturer. I could be completely wrong here with my take on his point. But as I stated earlier, I can see both sides, and I personally put more faith in Form's work and opinion than in any other data published to date on scope reliability, even though it's based on a system-level test. Aaron's point also seems correct: it could possibly be improved upon, like everything else in life.
This ^

Remove ALL other variables.

Drop-test your rifle and, if it passes, you absolutely know and can have confidence that your system is solid; that is what the drop test is doing. There is an attempt to remove additional variables, but it's not perfect.

With all the talk around statistical relevance (and I genuinely do not know), I'm going to assume that the RS test hasn't tested all brands and specific models with 30 separate scopes, in most cases because of cost.
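For what the sample-size talk actually amounts to: a standard back-of-the-envelope tool here is the "rule of three". If n independent units all pass a test, a rough 95% upper confidence bound on the true failure rate is about 3/n. This is my own illustration, not part of the RS (or any other) test protocol:

```python
# Upper confidence bound on failure rate p when n independent units all pass.
# Exact form: solve (1 - p)^n = 1 - confidence for p.
# (The "rule of three" approximation is p <= 3/n at 95% confidence.)

def max_failure_rate_all_pass(n: int, confidence: float = 0.95) -> float:
    return 1 - (1 - confidence) ** (1 / n)

for n in (1, 3, 30):
    print(n, round(max_failure_rate_all_pass(n), 3))
# n=1  -> ~0.95  (one pass tells you almost nothing)
# n=3  -> ~0.632 (three passes still allow a very high failure rate)
# n=30 -> ~0.095 (thirty passes pin the rate under ~10%, matching 3/30)
```

Which is why testing one or three copies of a scope can rule a design out (failures in a small sample are damning) but can't strongly rule it in.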
 
I personally have a friend who worked for Aaron and now has his own little shop. He straight up said Aaron is an egotistical genius who is both a great boss and hard to work for. But the guns and systems and process and R&D were/are incredible to watch. I get how the comments might trigger people, but I don't think he is wrong about his clients and who he wants to attract. He wants to sell developed, easy-to-shoot weapon systems.

I laugh when people say “you can get a small smith to build the same thing for 1/2”

Like in what world? Look around: go on Instagram and look at the 1-2 man shops that are literally only chambering blanks as needed (Proof's or someone else's barrels) and assembling rifles from parts bought at dealer prices, then marking them up. Hell, the only thing most do that's their own is a Cerakote job. And you are paying 4.5-6 grand with no optics... That's GW prices lol.

Assembling a gun from someone else's OTS components is not what GW does.

That cost argument against GW is so invalid these days.


You can hate all you want, but the fact is they have been at the leading edge, with very few others, in most of the things we see today as "common".
 
Who are you kidding? You don't need a custom gun of any maker to do what GW does. That's the point.
Who else designs and/or manufactures their own stocks / actions / barrels / optics with their own ballistics system?

I’m confused what triggered you.
 
I hope if I ever build an innovative company like Gunwerks, people go on the internet and criticize me. Every negative word is either free advice or free publicity or both.

Just saying....
;)
 
We've been subject to this kind of criticism from the beginning. It's fine. The comments about arrogance are probably truish. I have a lot of confidence in my knowledge of the shooting system (to extreme detail), the capabilities of people and equipment, and the accomplishments of my company and team. I've made peace with the fact that not everyone is a customer. Fine.

The personal insults are really lame. You don't know me. Con man? What are you talking about? We are the most transparent rifle company out there. We film and teach everything we do, the shop is open to tours anytime, our customers enjoy an unequivocal satisfaction guarantee, and our prices are transparent. Your attacks show your ignorance; do better.

The truth is most people/shooters don't understand the significance of finer detail and design execution. For the longest time the McMillan A series was the epitome of LR stock design; everyone copied or used it. We went a different route, applying specific engineering principles to create a completely different shooting experience. If you never used one, you had no idea. Just like driving a Corolla vs a Mercedes ("the Tikka is just as good"). Now you see our design geometry and ethos showing up in everyone's rifle stocks (after 20 years), and everyone benefits. Multiply that by dozens of similar examples and you'll start to understand the confidence....

I don't doubt your company makes high quality rifles and stocks. I just don't think you have developed anything so special to justify the prices you charge.

Why? Because I don't think anyone in the industry can develop the very best of everything.

Best of luck though!
 
I finished the podcast through the scope testing portion. Clearly, Aaron did not read the notes on the scope testing, or he would have known that many of those variables are either addressed or at least semi-controlled for. Not all, but several of the ones mentioned are.

Two points to make on this:

First is the statistical significance of multiple failures in a row from a small sample set, as opposed to multiple passes from a small sample set; those are two significantly different situations. If something is generally pretty good, tell me: what are the mathematical odds of three failures out of three tests?

Second, while I have long said that the drop tests here are not scientific and could be improved on, specifically because of the uncontrolled nature of the drop surface and the way the scope is dropped (there's no way to tell exactly what landed first, at exactly what angle, etc.), there is still massive value in having something "open source" available to anyone, something that isn't proprietary and can be done without $50,000 of test equipment. It isn't helpful to me if a manufacturer tests their own scopes. It's only helpful to me if I can see the same test applied to multiple scopes and see how they compare, and that isn't happening anywhere else. If a test is pretty repeatable with similar results (and this one seems to be), then it has some validity. I see the theoretical limitations, and I see why a company like Gunwerks doesn't do scope testing across brands, but that doesn't help me. If the results aren't available to me across brands and models, it doesn't matter how valid a test is; it is utterly and completely useless to me.

Anyone who tears down the test here is doing nothing more than blowing hot air until they replace it with something better that is available to me. Until I have a better option, this is the ONLY option other than sticking my head in the sand. People can worry all they want about throwing out good scopes because of a flaw in the test; personally, I don't give a flying hoot about that. All I care about is having a better chance of getting one that works reliably. I've had a 75% failure rate on my personal scopes from one manufacturer (3 out of 4 failed), and I'd much rather not do that again. Someone can say I don't know if it was the rings or the mounts or whatever, but as soon as I put a new scope in those exact same rings, torqued to those exact same specs, the groups tightened up and stopped wandering, every single time. I know all I need to know. Put the old scope in, back to the wandering zero.
New scope, doesn't move. Old scope, moves all over the place. Etc. Yes, there are more variables, but it isn't rocket surgery to narrow it down to the scope when you can switch the problem on and off by doing nothing more than switching scopes.

@Aaron Davidson, you guys finished that podcast talking about doing a tour of your scope testing. I for one will look forward to seeing that; I'm curious about the nitty-gritty details, and also curious about how scopes other than yours fare under your testing and how I can apply your results to my own situation. Personally, having had multiple failures in the past, I'm not willing to simply trust; I want to see it in detail. I'd encourage you to familiarize yourself with the details of the evals here so you can speak to how some of the variables are controlled or semi-controlled. Here's a link
 
Really? We are talking accurate hunting rifles? Aaron Davidson says his market is the ultra wealthy. Why not add the other two names in the mix, Mark Penrod and Gene Simillion? They build the most accurate and reliable hunting rifles in the world, along with a few others not named. Not only the rich buy rifles from these world-known builders, but also some average dudes who are avid, experienced hunters who want the best, most reliable rifles made.
Do they have their own ballistic software / optics line? Are they really good at using someone else's blank, or their own, chambering it to their own in-house manufactured action, and putting it in a stock they designed for the hunter with an explicit purpose?

Just answer yes or no.


I think the smooth brains really miss the point, argue poorly, and then hurl insults, when this whole thing started over a podcast and a drop test.
 
Run your argument through ChatGPT and ask it:

"Where am I wrong?"
Prepare to be humbled. It's a lot, but again:

Drop tests aren’t bad, they are great, but they invite bias bc they simply aren’t controlled.
 
How about you enlighten me. I assume you're referring to the portion about small sample sets and passing versus failing. What I wrote is poorly worded, but if you start with the assumption that the failure rate should be quite small, then the odds of getting three failures in a row are extremely small. An item with a 10% failure rate, which I think is much, much too high, has a 0.1% chance of failing three times in a row. That's what I'm saying. The odds of getting multiple failures in a row from a product that has a low failure rate are so small that I'm more than willing to play those odds.

And what's the alternative? Because if you don't have anything better, it doesn't matter how bad this one is; it's still the best there is. Again, I DO NOT CARE if good scopes fail. As long as bad scopes don't pass, it has value to me. What is my alternative? Can you tell me that in this case the perfect isn't the enemy of the good?
 

Listen to the Rokcast where they talk about the redesign and how it performed after testing. Ryan even gets invited to use GW equipment before and after. Also, Aaron does a great job outlining the variables and why it's not as straightforward as some want.

Personally, I wanted to turn it off at first, but I continued to listen and was plenty fascinated by the last half of the Rokcast episode.

 
It’s so interesting to continually hear people talk about something with so much assurance when they haven’t read what is actually done, don’t understand it, and have never attempted to replicate it.

You know, something like the scientific method.
 