Zeiss durability vs NF, Trijicon?

plebe

Lil-Rokslider
Joined
Jan 15, 2021
Messages
266
It’s almost like it would be helpful if there were a forum for the scope evals that had answers to questions.

If one were to look at said forum, how many scopes have been tested, and what could be concluded from that? Certainly you can provide a link for easy access and members can do the research. Draw your own conclusions, folks.

Honestly, I put more stock in your experience as an instructor since the scope of testing isn’t very substantial so far as I’m concerned. Not saying it isn’t good, or ok, or something worthwhile. Just not substantial.

I put stock in the experience of other instructors too.

And end users.

But 1/1 testing isn’t statistically that telling.
 

plebe

Lil-Rokslider
Joined
Jan 15, 2021
Messages
266
If one were to look at said forum, how many scopes have been tested, and what could be concluded from that? Certainly you can provide a link for easy access and members can do the research. Draw your own conclusions, folks.

Honestly, I put more stock in your experience as an instructor since the scope of testing isn’t very substantial so far as I’m concerned. Not saying it isn’t good, or ok, or something worthwhile. Just not substantial.

I put stock in the experience of other instructors too.

And end users.

But 1/1 testing isn’t statistically that telling.

Eureka!

 

flatelk

FNG
Joined
Jun 20, 2023
Messages
12
I use a Zeiss V4 for mountain hunting. On my last elk hunt I slipped and fell on a talus slope. A rock took a gouge right out of the elevation turret. It held zero and still tracks fine. That's good enough reliability for me.
 

Schmo

WKR
Classified Approved
Joined
Apr 29, 2023
Messages
953
What if a Nightforce scope failed? Folks have reported it. How many scopes have even been drop tested?
Nightforce scopes have failed! Any man-made item can and will fail. But it’s a lot less likely with an NF or Trij than with some others. Nothing is fail-proof, but whatever has the highest odds of holding up, I’m using that.
 

plebe

Lil-Rokslider
Joined
Jan 15, 2021
Messages
266
Nightforce scopes have failed! Any man-made item can and will fail. But it’s a lot less likely with an NF or Trij than with some others. Nothing is fail-proof, but whatever has the highest odds of holding up, I’m using that.

I’ve nothing at all against Nightforce. They have good QA and a great track record in tough environments. I just named them to reiterate that it’s a matter of maths.

Should I choose from a list of scopes based on results of single tests of individual scope models, and ignore a more robust information pool? I’m not inclined to.

I’m sure everyone involved with the scope testing would love to be able to do more, but time and money are barriers, so they are doing what they can. I’m not suggesting the work and the data are bad, but they can’t be considered all-encompassing either. They need a broader context to confirm, refute, or prompt further investigation of more broadly formed generalizations. Imo.

Are your conclusions about NF and Trijicon solely based on the Rokslide scope tests?
 

Schmo

WKR
Classified Approved
Joined
Apr 29, 2023
Messages
953
I would say largely influenced by the RS drop tests, but maybe not entirely. Nightforce has a reputation pretty much everywhere as being tough and reliable. On the Trijicon, I’m speaking specifically to the Tenmile line. Form has seen multiple Tenmiles hold up, not just one. I’ve also done research outside of RS on the Tenmile. Basically all the info I found was highly positive regarding durability and tracking. I’m not speaking to reticles, tight eye boxes, or anything like that - speaking specifically to durability in holding zero, dialing correctly, and returning to zero.
 

Schmo

WKR
Classified Approved
Joined
Apr 29, 2023
Messages
953
Listen to Form’s podcast about the scopes that have passed the test. It’s on the Shoot2Hunt podcast. Some really good nuggets of info there.
 

plebe

Lil-Rokslider
Joined
Jan 15, 2021
Messages
266
If Form had to reply to all of the repeat arguments, in every thread in which they are brought up, he wouldn't have time to do the drop tests.

Seems like you have time. Just link the info for the person inquiring here and do everyone the favor.
 

fwafwow

WKR
Joined
Apr 8, 2018
Messages
5,541
Seems like you have time. Just link the info for the person inquiring here and do everyone the favor.
Yep - and I clearly spend too much time on here, in these 223 and scope argument threads. I'm sure others can find more and/or better posts conveying the counterargument to the claim that a sample size of one isn't statistically valid.



 

plebe

Lil-Rokslider
Joined
Jan 15, 2021
Messages
266
So, to those of you who say the drop tests are not a big enough sample: I understand sample sizes. However, if a certain scope passes the drop test, it is far more likely that that is how the model typically behaves than that it was a fluke and you just got a good one-off. If you’re making a poorly designed scope, the chances that any given sample will pass being dropped are minuscule.

This isn’t magic. One scope passing can’t be stated to mean a whole lot. Two passing starts to be an indication.
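As a rough back-of-the-envelope, with made-up failure rates purely to illustrate the point (these are not measured numbers for any brand): if a model fails this kind of drop with probability p per sample, the chance that k independent samples all pass is (1 - p)^k.

```python
# Illustration only: assumed per-sample drop-failure rates, not measured values.
for p in (0.05, 0.25, 0.50):   # hypothetical chance that a single sample fails the drop
    for k in (1, 2, 3):        # number of independent samples tested
        print(f"failure rate {p:.0%}, {k} sample(s): P(all pass) = {(1 - p) ** k:.0%}")
```

A design that fails half the time still passes one test half the time, but passes two in a row only 25% of the time and three in a row about 13% of the time. So one pass says a little; two or three say a lot more.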
 

fwafwow

WKR
Joined
Apr 8, 2018
Messages
5,541
If we are arguing, we are arguing past each other. So maybe we are saying the same thing. A pass on one has some but limited value, but a failure of one has real value. And if I read too quickly and missed you saying as much, my apologies.

I knew that this had come up recently but didn't remember until now that the comparison was made to car crash tests (and planes) -

 

Schmo

WKR
Classified Approved
Joined
Apr 29, 2023
Messages
953
@plebe,
Your quote of Form is correct. He’s tested the NF SHV, NXS, NX8, and ATACR. At least 4 samples right there. He’s also seen multiple Tenmiles hold up. So even that agrees with your quote of him.
 

plebe

Lil-Rokslider
Joined
Jan 15, 2021
Messages
266
If we are arguing, we are arguing past each other. So maybe we are saying the same thing. A pass on one has some but limited value, but a failure of one has real value. And if I read too quickly and missed you saying as much, my apologies.

I knew that this had come up recently but didn't remember until now that the comparison was made to car crash tests (and planes) -


I wouldn’t put my eggs in one basket either way.

Let’s say you purchase a scope that passed muster in its Rokslide testing, but yours doesn’t track or hold zero, whatever.

It’s a sample set of two, with opposing results, one of which is a failure. Which result do you weigh more heavily then?

I get that it’s a particularly unlucky scenario.

Thankfully, there’s more information on scopes to be had in order to make informed decisions.

Form’s testing is pretty hard on optics. I might be inclined to give a scope that passed his testing another shot even if one failed on me. But that runs contrary to:

For me, I wouldn’t use a scope that “FAILS” on a squirrel hunt, or waste a single dollar buying one - they are complete junk.

I also wouldn’t necessarily rule a scope out based solely on a single failure in the Rokslide tests. But it warrants caution, sure.

If we rule out every scope that has failed we probably end up with no scopes.

Form says…

Taking the results is up to each person

I’ve already expressed how I look at them:

The tests are a data point. I wouldn’t discount them, but they’re not proof positive either.
But 1/1 testing isn’t statistically that telling.

I don’t know that I’ve been much of a contrarian with anything I’ve said…

To see an average relatively small difference between items, large sample sizes are needed. For instance, to see the difference in long-term zero retention between Nightforce’s SFP NX8 4-32x and FFP NX8 4-32x, you might need 20-30 samples (or more); however, to see if a scope design will hold zero through a 20” drop on a golf course, you only need a couple, and the very first one gives a decent indication of what you will find. 2x of the same model failing to hold zero makes it almost a certainty that there is an issue, and at 3x it IS an issue. Just like if a 30 mph rear-end collision causes the motor to be pushed through the front seats into the rear seats - that’s a design issue and testing 10 more is a waste of time.
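To put rough numbers behind that quote (mine, not Form’s, and purely illustrative): a gross failure mode surfaces within a couple of samples, while a small difference in failure rates between two otherwise solid designs takes far more samples to even notice.

```python
# Illustration with invented failure rates, not anyone's actual test data.
def p_at_least_one_failure(p_fail: float, n: int) -> float:
    """Chance that n independent samples show at least one failure."""
    return 1 - (1 - p_fail) ** n

# A design that fails the drop half the time is almost certainly exposed by 2-3 samples:
for n in (1, 2, 3):
    print(f"50% failure rate, n={n}: P(see a failure) = {p_at_least_one_failure(0.50, n):.0%}")

# A rare failure mode (say 3%) usually won't show up at all in a small test:
for n in (3, 30, 100):
    print(f" 3% failure rate, n={n}: P(see a failure) = {p_at_least_one_failure(0.03, n):.0%}")
```

Which is why a single pass can still hide the occasional lemon, but two or three failures of the same model point squarely at the design.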
 

Schmo

WKR
Classified Approved
Joined
Apr 29, 2023
Messages
953
I completely agree with the last comment of Form’s that you quoted.
 

SDHNTR

WKR
Joined
Aug 30, 2012
Messages
7,078
I’ve nothing at all against Nightforce. They have good QA and a great track record in tough environments. I just named them to reiterate that it’s a matter of maths.

Should I choose from a list of scopes based on results of single tests of individual scope models, and ignore a more robust information pool? I’m not inclined to.

I’m sure everyone involved with the scope testing would love to be able to do more, but time and money are barriers, so they are doing what they can. I’m not suggesting the work and the data are bad, but they can’t be considered all-encompassing either. They need a broader context to confirm, refute, or prompt further investigation of more broadly formed generalizations. Imo.

Are your conclusions about NF and Trijicon solely based on the Rokslide scope tests?
Where is there a “more robust information pool”?
 

Macintosh

WKR
Joined
Feb 17, 2018
Messages
2,748
X2... I am interested in a more robust information pool as well. I haven't seen one. Are you referring to the general consensus of people simply saying "my scope has held zero, I haven't had a problem"? If so, I'd say it's not "information", let alone robust, until it is documented that all those scopes actually had a verifiable zero and that zero was quantifiably tracked over time with no adjustments. I see virtually zero shooters doing either, so I discount anecdotal statements like this until it's shown visually. I want to SEE that it held zero, because I don't care who it is, I don't believe that 99% of the people saying so actually zero or track their zero well enough to say so. This is a big part of my personal reason why I give these evals the benefit of any doubt... sure, I see some flaws in the methodology, and the flaws may or may not be relevant, but what alternative is better?

FWIW, I'd point people to the Maven 1.2 threads... there are two "official" drop-eval'd scopes, plus a solid handful (5 or 6?) of other people performing their own drop evals based on the same parameters, all of which passed. Not all the evaluated scopes have this, but several have at least a handful of separate confirmations. With a second production run just landing, I bet we see a few more as well. With that scope and a couple others that have eval'd well falling into the size, price, and functionality requirements I have, I see no advantage to being a skeptic.
 

plebe

Lil-Rokslider
Joined
Jan 15, 2021
Messages
266
X2... I am interested in a more robust information pool as well. I haven't seen one. Are you referring to the general consensus of people simply saying "my scope has held zero, I haven't had a problem"? If so, I'd say it's not "information", let alone robust, until it is documented that all those scopes actually had a verifiable zero and that zero was quantifiably tracked over time with no adjustments. I see virtually zero shooters doing either, so I discount anecdotal statements like this until it's shown visually. I want to SEE that it held zero, because I don't care who it is, I don't believe that 99% of the people saying so actually zero or track their zero well enough to say so. This is a big part of my personal reason why I give these evals the benefit of any doubt... sure, I see some flaws in the methodology, and the flaws may or may not be relevant, but what alternative is better?

FWIW, I'd point people to the Maven 1.2 threads... there are two "official" drop-eval'd scopes, plus a solid handful (5 or 6?) of other people performing their own drop evals based on the same parameters, all of which passed. Not all the evaluated scopes have this, but several have at least a handful of separate confirmations. With a second production run just landing, I bet we see a few more as well. With that scope and a couple others that have eval'd well falling into the size, price, and functionality requirements I have, I see no advantage to being a skeptic.

It’s a fair point. It’s your vote of confidence, and if you only trust one candidate, that’s the scope of the matter. I mentioned some things probably a page ago. I forget that we all have different exposure.

Where scopes are being tested as more than one-offs and the results align, I think that level of testing is extremely telling. It’s when a single scope is judge and jury that I’ll look for more.

Nightforce/Trijicon have long-standing reputations for durability, and the basis for that isn’t single-scope trials at Rokslide.
 