More of a blog format with a running spreadsheet at the top, with a small synopsis of every scope dropped, searchable by brand with tags
Did this document ever get created, and is it being kept up to date somewhere it can be referenced?
Thanks!
This is not correct in at least one case: you have the Trijicon Credo 2.5-15 failing, when in fact it passed. I just went and re-checked this to verify. The initial shift was determined to be the scope shifting in the rings, and the second scope passed the full drop eval and the 3,000-round drive-around. The last target pictured says it's 3 MOA ammo on what appears to be a 1.5" or 2" dot.

Pass:
Trijicon Tenmile 3-18 ffp
Leupold Ultra M3A 10x (1980's)
Minox ZP5 5-25 THLR
SWFA SS 6x
Nightforce NX8 4-32
Schmidt & Bender Klassik 8x56mm
Nightforce ATACR 4-16
Fail:
Athlon Helos BTR 2-12
Tract Toric Ultra HD 3-15
Leupold Mark 5
SWFA SS UL 2.5-10
Riton 3-18
Zeiss LRP S5 3-18
Vortex Razor HD LHT 4.5-22
Trijicon Credo HX 2.5-15
Maven RS.5 4-24
Source: I scrolled to the bottom of each test thread
It was last year that I wrote this down and copy-pasted it, so there might be new threads since then.
The scope held zero through the entire drop portion without a hiccup.... (At the end of the test:) Not many updates, as there has really been nothing to report; the scope has worked fine. It is over 3,000 rounds. If my count is correct it is at 3,577 rounds, with a few odds and ends thrown in that weren't counted.
Agreed. Only providing one "pass" or "fail" also groups scopes together that literally broke into multiple pieces and failed at even the most basic parts of tracking before any drops, with scopes that passed all but the 3x36" drops; that's not the same thing. I think any synopsis has to include at least minimal detailed info such as "basic zero/track/RTZ: pass, 18" drops: pass, 1x36" drops: pass, 3x36" drops: pass, 3,000 rounds/3,000 miles: pass".
Thank you for the clear list!

This is what I was describing. If there is any interest in it then I can send it to whomever wants to post it and manage it.
It's not perfect. Some of the reviews were one and done while others described issues in subsequent use. "Partial pass" or "partial fail" is a fail on this sheet. I tried to pull out the key failure point, but translating pages of narrative to a spreadsheet isn't always the easiest thing to do. My thought was that it would serve as a quick reference that gets updated as more scopes are evaluated.
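For anyone who wants to keep that quick reference in a more structured form, here is a minimal sketch of what one row could look like, with a separate column for each stage along the lines suggested above. Everything in it is illustrative: the field names, the example brand, and the example results are made up for this sketch and are not taken from any actual test thread.

```python
# Minimal sketch of a per-scope record with per-stage results and simple
# brand/tag filtering. Field names, the brand, and all values are hypothetical.

from dataclasses import dataclass, field
from typing import List


@dataclass
class ScopeEval:
    brand: str
    model: str
    zero_track_rtz: str      # basic zero / tracking / return-to-zero check
    drops_18in: str          # 18" drop series
    drop_36in_x1: str        # single 36" drop
    drop_36in_x3: str        # 3x 36" drops
    drive_3000: str          # 3,000 rounds / 3,000 miles follow-up
    tags: List[str] = field(default_factory=list)
    notes: str = ""

    def overall(self) -> str:
        # A fail at any stage is a fail for the sheet as a whole.
        stages = [self.zero_track_rtz, self.drops_18in,
                  self.drop_36in_x1, self.drop_36in_x3, self.drive_3000]
        return "fail" if "fail" in stages else "pass"


def by_brand(records, brand):
    return [r for r in records if r.brand.lower() == brand.lower()]


def by_tag(records, tag):
    return [r for r in records if tag in r.tags]


# Example entry (every value is made up for illustration):
records = [
    ScopeEval("ExampleBrand", "Example 3-18x44",
              "pass", "pass", "pass", "fail", "n/a",
              tags=["ffp", "mil"], notes="lost zero on the 3x36\" drops"),
]

for r in by_brand(records, "examplebrand"):
    print(f"{r.brand} {r.model}: {r.overall()}")
```

The same idea works just as well in a plain shared spreadsheet; the only point is that each stage gets its own column instead of a single overall pass/fail.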
That answer will depend on the manufacturer in question, so it cannot be applied as a blanket statement. If it's Maven, no. Form will have to confirm on this one, but my feeling is that with the Tenmile lineup you'd be good.

I know this is a tough question to answer without specific testing, but how safe do you guys think it is to apply a successful test on a manufacturer and model to a different magnification? For example, Form had a passed test on the Trijicon Tenmile 3-18x44 and I am looking at the 4.5-30x56. This question applies to other passed scopes as well.
Sheeples much….? This is interesting data, but you can hardly draw any conclusions from this.
How many scopes were tested for each make and model? This is a biased sample size and nothing conclusive can be drawn from a spreadsheet with pass or fail.
Once again, this data is useless with such a small sample size. You can’t test 1, 2 or even 10 scopes and say that it is representative of the entire population….you need hundreds if not thousands of samples for each scope. People should definitely be testing this stuff for themselves and not believing everything they read just because someone tested a few scopes.
You don’t understand statistical relevance nor sample size for total failure, do you?
If Boeing’s next airplane prototype shatters into pieces and kills everyone inside on its first landing- do you need “thousands” of samples of that plane to say that something is wrong with the design?
While I agree with you that unequivocal proof of statistical failure is only seen through large sample sizes, a failure of one randomly selected sample is still very telling.

Wow, that's a little extreme and you're comparing apples to oranges. We aren't talking about planes crashing.
1. One catastrophic failure can be meaningful, but it depends on context.
- If the circumstances of failure are well understood and clearly linked to a design flaw, then yes, one failure can reveal a critical issue. Example: a structural wing failure on a test flight due to design, not pilot error or extreme weather.
- But if the cause is unclear, or the failure could be due to user error, outlier conditions, or even a fluke, then you do need more data before making strong claims about the overall design.

2. Catastrophic outcomes demand higher scrutiny, not necessarily conclusions.
You can absolutely justify immediate concern or action after one serious failure, but whether you declare the entire system flawed depends on whether you’ve isolated the root cause.
- In your Boeing example, even a single crash would ground the prototype pending investigation.
- But whether the entire plane design is scrapped would depend on the outcome of that investigation, not just the crash itself.

3. Practical vs. statistical significance.
You might be saying: “I don’t need a statistical study to trust my gut when something fails spectacularly.” That’s totally valid from a consumer or reviewer standpoint. If a piece of gear breaks badly on its first trip, especially under normal use, you’re justified in being skeptical and even in warning others.
But from an engineering or scientific standpoint, people would want repeatability and causal clarity before calling the design a failure across the board.

Just saying… from a data standpoint you can’t draw anything conclusive from this.
This argument has been made, and addressed, on many occasions.

Sheeples much….? This is interesting data, but you can hardly draw any conclusions from this.
How many scopes were tested for each make and model? This is a biased sample size and nothing conclusive can be drawn from a spreadsheet with pass or fail.
Once again, this data is useless with such a small sample size. You can’t test 1, 2 or even 10 scopes and say that it is representative of the entire population….you need hundreds if not thousands of samples for each scope. People should definitely be testing this stuff for themselves and not believing everything they read just because someone tested a few scopes.
This is how I feel about these tests. No one is saying that every scope will pass or fail based on the testing of one. But if the failure rate is low then it is unlikely that the one you test will fail. Probabilities.

While I agree with you that unequivocal proof of statistical failure is only seen through large sample sizes, a failure of one randomly selected sample is still very telling.
It is rather unlikely, given the odds, to select the .00001% of my favorite scope brand's failures on the first try. It is literally like winning the lottery. Now hitting the jackpot twice, or a sample size of 2, REALLY seems unlikely.
So which scope did you think warranted a 2nd or 3rd lottery "win"?
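To put rough numbers on the lottery analogy, here is a small back-of-the-envelope sketch. The failure rates in it are assumptions picked purely for illustration, not measured values for any scope; it only shows how often you would expect to see at least one failure in one or two randomly chosen samples under each assumed rate.

```python
# Back-of-the-envelope: chance of seeing at least one failure in n randomly
# selected samples, assuming each unit independently fails with probability p.
# The rates below are illustrative assumptions, not measured data for any scope.

def p_at_least_one_failure(p: float, n: int) -> float:
    return 1 - (1 - p) ** n


for p in (0.001, 0.01, 0.10, 0.50):      # assumed per-unit failure rate
    for n in (1, 2):                      # number of samples tested
        print(f"assumed rate {p:6.1%}  samples={n}: "
              f"P(at least one failure) = {p_at_least_one_failure(p, n):.3%}")

# Two failures out of two samples is p**2 under these assumptions:
# 0.01% at a 1% rate, but 25% at a 50% rate.
```

Whether that counts as conclusive is exactly the argument above; the arithmetic just says a single observed failure is far easier to reconcile with a high per-unit failure rate than with a very low one, and two independent failures even more so.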
*sigh* New guy joins and makes the same arguments as every other new guy.

Wow that's a little extreme and you're comparing apples to oranges. We aren't talking about planes crashing.