I have not. Ever tried the Klassik fixed 8x56? I've had thoughts on buying one and sending it to Jerry to swap out the reticle and add the 4.8 mil BDC dial.
True, if your goal is simply to know whether a failure has occurred. But the rate of failure is vastly more important than whether something has failed, since we already know all scopes can fail.
Some good info on this page, but you have to understand what you are looking at. Even the drop tests that people are taking as gold leave a lot to be desired. For one, the test would really need to be repeated the same way for each scope some number of times to make the results relevant.
The info WindGypsy dropped is about as good as you’re going to get for rate of failure: an instructor at a sniper school that has mostly Leupolds on issued weapon systems seeing between a 5:1 and 10:1 failure rate. I’d love to see more numbers. How many scopes has he seen come through the school? Hundreds? Thousands? Regardless, a 5:1-10:1 failure rate matches up with reports from serious users across every hunting/shooting forum on the internet for YEARS.
There’s a bunch of people suggesting that the type of tracking failure some of us have seen and repeated in our Leupolds doesn’t really matter for hunters. You posted in the Long Range forum, so I expect you are intending to use this as a long range tool. IMO there’s no amount of tracking failure that’s acceptable. The whole reason we’re anal about our gear is to take variables off the table. Why take the chance?
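To put the hundreds-vs-thousands question in perspective, here is a minimal sketch (all counts are hypothetical, not anyone’s actual numbers) of how the 95% confidence interval around a 5:1 failure-rate ratio tightens as the number of scopes observed grows:

```python
# Hypothetical counts only: Katz log-normal approximation for the
# confidence interval of a ratio of two failure proportions.
import math

def rate_ratio_ci(fails_a, n_a, fails_b, n_b, z=1.96):
    """Return (ratio, lower, upper) for a 95% CI on fails_a/n_a vs fails_b/n_b."""
    rr = (fails_a / n_a) / (fails_b / n_b)
    se = math.sqrt(1 / fails_a - 1 / n_a + 1 / fails_b - 1 / n_b)
    return rr, rr * math.exp(-z * se), rr * math.exp(z * se)

for n in (20, 100, 500):           # scopes of each brand seen at the school
    fails_a = n // 4               # assumed 25% failure rate for brand A
    fails_b = max(1, n // 20)      # assumed 5% failure rate for brand B (a 5:1 ratio)
    rr, lo, hi = rate_ratio_ci(fails_a, n, fails_b, n)
    print(f"n={n:3d} per brand: ratio {rr:.1f}, 95% CI {lo:.1f} to {hi:.1f}")
```

With a couple dozen scopes per brand the interval is so wide the ratio means almost nothing; by a few hundred per brand it actually pins down a real difference. So how many scopes the school turns over matters a lot.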
This has been covered in other threads quite a few times, and others can probably speak to these points better than me, but FWIW: the slight variance in conditions mimics field situations, and apart from temperature and the exact substrate, the tests are pretty similar. Repeating the test 10x is impractical; it’s done on a volunteer basis, with ammo costs donated by some RS members. Are you familiar with the testing procedures?
Yeah, I agree with most of that. And no, I’m no pro, but we hunt mid to long range and keep pushing it further.
And all this stuff is a great indicator. However, we already know that any scope ever made can fail one way or another. What we really need to know is the rate of failure, by how much, and broken out by the brands we are comparing. Which we will never get.
The drop tests here are interesting and have piqued my interest. But as a skeptic in everything I read, I had a lot of questions after reading through a lot of them. Were the exact same conditions used in each test, including exact height and angle, location, same substrate, atmospheric conditions, elevation, etc.? Was shooter error factored in, ammo reliability, etc.? Now, the guy did a pretty good job with all that, and I trust a lot of the conditions were the same even if not exact. The test would be a lot more valid in my mind if each test were done, say, 10 times.
You are conflating two different things: dismissing a single failure as a sample of one (which is what I am discussing), and wishing for better data.
No, you are conflating relevance and statistical significance. In the absence of better data, even poor data is relevant, but we must acknowledge its weakness. Statistically significant data is what you get with a large enough sample.
10? Unless the failure rate is very high, 10 repetitions would not get one to statistical significance, so it would just be putting a prettier veneer on data that is still statistically invalid.
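For what it’s worth, a quick simulation backs that up. This is purely illustrative, with made-up per-test failure probabilities (30% for one scope, 5% for the other), not anyone’s measured numbers:

```python
# Illustrative only: how often 10 test repetitions per scope would detect a real
# difference between a 30% and a 5% failure probability, via Fisher's exact test.
import random
from scipy.stats import fisher_exact

def detection_rate(p_bad=0.30, p_good=0.05, n=10, trials=20_000, alpha=0.05):
    detected = 0
    for _ in range(trials):
        bad = sum(random.random() < p_bad for _ in range(n))    # failures, scope A
        good = sum(random.random() < p_good for _ in range(n))  # failures, scope B
        _, p = fisher_exact([[bad, n - bad], [good, n - good]])
        if p < alpha:
            detected += 1
    return detected / trials

print(f"chance of reaching p < 0.05 with 10 repetitions each: {detection_rate():.0%}")
```

Even with a sixfold real difference in failure rate, ten repetitions apiece only reach significance somewhere around 10-15% of the time. That is what I mean by a prettier veneer.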
This was pretty well covered by someone else.
But yeah. Remove sponsorships. Remove prize-table scopes. Then the "data" starts to look different.
Then you are showing you have not been to many (or any) PRS matches, as those guys re-zero before every match. Isn't the definition of having to re-zero the fact that the scope lost zero at some point?
If you can’t decipher that there is a large difference in use case between a PRS shooter and a backcountry, backpack-style hunter, then you don’t have a large grasp on how either of those things works.
Because the way the scopes are "crap" is irrelevant to PRS shooters. If their zero gets bumped during travel or whatever, it doesn't matter, since they can always re-zero before the match starts. And once the match starts, unless something goes fairly wrong, their scope won't get bumped again.
So if competition shooters make their living by winning (per a previous poster a few posts back), and the scopes are such crap, then why would they continue to use them, sponsored or not?
Marketing loves to brag about winning, and if you aren’t winning, you’ll be dumped like a hot potato by the company sponsoring you, so I don’t see how this argument holds water.