Drop Test Pass/Fail

Haven't there only been like 5 or 6 scopes that have passed?

 
I agree a quick reference would be helpful. I realized I missed a few scopes that passed from JFK’s spreadsheet on page 1.
 
Pass:
Trijicon Tenmile 3-18 ffp
Leupold Ultra M3A 10x (1980's)
Minox ZP5 5-25 THLR
SWFA SS 6x
Nightforce NX8 4-32
Schmidt & Bender Klassik 8x56mm
Nightforce ATACR 4-16
Edit: Trijicon Credo HX 2.5-15

Fail:
Athlon Helos BTR 2-12
Tract Toric Ultra HD 3-15
Leupold Mark 5
SWFA SS UL 2.5-10
Riton 3-18
Zeiss LRP S5 3-18
Vortex Razor HD LHT 4.5-22
Trijicon Credo HX 2.5-15 Oops
Maven RS.5 4-24

Source: I scrolled to the bottom of each test thread

It was last year that I wrote this down and copy-pasted it, so there might be new threads since then
Edit: Dang, there's a bunch of new ones
 
Pass:
Trijicon Tenmile 3-18 ffp
Leupold Ultra M3A 10x (1980's)
Minox ZP5 5-25 THLR
SWFA SS 6x
Nightforce NX8 4-32
Schmidt & Bender Klassik 8x56mm
Nightforce ATACR 4-16

Fail:
Athlon Helos BTR 2-12
Tract Toric Ultra HD 3-15
Leupold Mark 5
SWFA SS UL 2.5-10
Riton 3-18
Zeiss LRP S5 3-18
Vortex Razor HD LHT 4.5-22
Trijicon Credo HX 2.5-15
Maven RS.5 4-24

Source: I scrolled to the bottom of each test thread

It was last year that I wrote this down and copy-pasted it, so there might be new threads since then
This is not correct in at least one case--you have the Trij Credo 2.5-15 failing, when in fact it passed--I just went and re-checked this to verify. The initial shift was determined to be the scope shifting in the rings, and the second scope passed the full drop eval and the 3,000-round drive-around. The last target pictured says it's 3 MOA ammo on what appears to be a 1.5" or 2" dot.
The scope held zero through the entire drop portion without a hiccup....(at end of test) Not many updates as there has really been nothing to report- scope has worked fine. It is over 3,000 rounds. If my count is correct it is at 3,577 rounds- with a few odds and ends thrown in that weren’t counted.


I think the way these are written it's easy to make a mistake like this, which is part of why I think any "synopsis" of pass/fail models has to include info on exactly WHAT part of the eval it failed. Only providing one "pass" or "fail" also groups scopes together that literally broke into multiple pieces and failed at even the most basic parts of tracking before any drops, with scopes that passed all but the 3x36" drops--that's not the same thing. I think any synopsis has to include at least minimal detailed info such as "basic zero/track/RTZ: pass, 18" drops: pass, 1x36" drops: pass, 3x36" drops: pass, 3000rounds/3000miles: pass". Then as more samples are tested you start to have info to differentiate between outright junk, OK, pretty good, and PD solid.
It also hopefully cuts through some of the bs around unfinished reviews, like the ZCO that's apparently waiting for rings. It's a lot easier to misinterpret "the scope failed the drop eval; whether it is the scope or the rings remains to be seen" than it is "3x36 drops: ongoing".
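If it helps to picture that kind of synopsis, here's a rough sketch of what one stage-by-stage record could look like (Python just because it's easy to read; every field name and the example values are made up by me, not pulled from any actual test thread):

```python
# Hypothetical per-scope record for a stage-by-stage synopsis. Stage names
# follow the eval order described above (zero/track/RTZ, 18" drops,
# 1x36" drops, 3x36" drops, 3,000 rounds / 3,000 miles).
# All names and values here are illustrative only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DropTestRecord:
    scope: str
    zero_track_rtz: Optional[str]     # "pass" / "fail" / None if never reached
    drops_18in: Optional[str]
    drops_1x36in: Optional[str]
    drops_3x36in: Optional[str]
    rounds_miles_3000: Optional[str]
    notes: str = ""
    thread_url: str = ""

# Made-up example: a scope that tracked fine and survived everything
# except the 3x36" drops, so the round-count phase was never run.
example = DropTestRecord(
    scope="Example 3-15x44",
    zero_track_rtz="pass",
    drops_18in="pass",
    drops_1x36in="pass",
    drops_3x36in="fail",
    rounds_miles_3000=None,
)
```

Something like that separates "broke on the tracking box" from "only gave up on the 3x36" drops" at a glance, which is the whole point.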
 
Fixed
There's also a Trijicon Credo 3-9 that partially passed, which might be where I got mixed up.

Only providing one "pass" or "fail" also groups scopes together that literally broke into multiple pieces and failed at even the most basic parts of tracking before any drops, with scopes that passed all but the 3x36" drops--that's not the same thing. I think any synopsis has to include at least minimal detailed info such as "basic zero/track/RTZ: pass, 18" drops: pass, 1x36" drops: pass, 3x36" drops: pass, 3000rounds/3000miles: pass".
Agreed.
I did "The drop test" with an ACOG TA31 and had no shift on the 3x18" and 3x36" drops, then shifted 7" on the 9x36" drop.
That might count as a Fail, but I'm not exactly running to the classifieds to get rid of it.
 
Does the Maven RS1.2 need to be added to the pass list?
Please correct me if I'm wrong, but I thought the Meopta Optika6 3-18 passed the drops but had a tracking failure after extended use.
If someone has an updated list, please share
 
This is what I was describing. If there is any interest in it then I can send it to whomever wants to post it and manage it.

It’s not perfect. Some of the reviews were one and done while others described issues in subsequent use. “Partial pass” or “partial fail” is a fail on this sheet. Tried to pull out the key failure point but translating pages of narrative to a spreadsheet isn’t always the easiest thing to do. My thought was that it would serve as a quick reference that was updated as more scopes are evaluated.
Thank you for the clear list!
 
I know this is a tough question to answer without specific testing, but how safe do you guys think it is to apply a successful test on manufacturer and model to a different magnification? For example, Form had a passed test on the Trijicon Tenmile 3-18x44 and I am looking at the 4.5-30x56. This question applies to other passed scopes as well.
 
I know this is a tough question to answer without specific testing, but how safe do you guys think it is to apply a successful test on manufacturer and model to a different magnification? For example, Form had a passed test on the Trijicon Tenmile 3-18x44 and I am looking at the 4.5-30x56. This question applies to other passed scopes as well.
That answer will depend on the manufacturer in question, so it can't be applied as a blanket statement. If it's Maven, no. Form will have to confirm on this one, but my feeling is that with the Tenmile lineup you'd be good.
 
That said, I’d probably trust a manufacturer at least making one passing model before I tried one that didn’t have a pass on any model.
 
This is what I was describing. If there is any interest in it then I can send it to whomever wants to post it and manage it.

It’s not perfect. Some of the reviews were one and done while others described issues in subsequent use. “Partial pass” or “partial fail” is a fail on this sheet. Tried to pull out the key failure point but translating pages of narrative to a spreadsheet isn’t always the easiest thing to do. My thought was that it would serve as a quick reference that was updated as more scopes are evaluated.
Sheeples much….? This is interesting data, but you can hardly draw any conclusions from this.

How many scopes were tested for each make and model? This is a biased sample size and nothing conclusive can be drawn from a spreadsheet with pass or fail.

Once again, this data is useless with such a small sample size. You can’t test 1, 2 or even 10 scopes and say that it is representative of the entire population….you need hundreds if not thousands of samples for each scope. People should definitely be testing this stuff for themselves and not believing everything they read just because someone tested a few scopes.
 
Sheeples much….? This is interesting data, but you can hardly draw any conclusions from this.

How many scopes were tested for each make and model? This is a biased sample size and nothing conclusive can be drawn from a spreadsheet with pass or fail.

Once again, this data is useless with such a small sample size. You can’t test 1, 2 or even 10 scopes and say that it is representative of the entire population….you need hundreds if not thousands of samples for each scope. People should definitely be testing this stuff for themselves and not believing everything they read just because someone tested a few scopes.

You don’t understand statistical relevance nor sample size for total failure, do you?


If Boeing’s next airplane prototype shatters into pieces and kills everyone inside on its first landing- do you need “thousands” of samples of that plane to say that something is wrong with the design?
 
You don’t understand statistical relevance nor sample size for total failure, do you?


If Boeing’s next airplane prototype shatters into pieces and kills everyone inside on its first landing- do you need “thousands” of samples of that plane to say that something is wrong with the design?
Wow that’s a little extreme and you’re comparing apples to oranges. We aren’t talking about planes crashing

1. One catastrophic failure can be meaningful—but it depends on context.
  • If the circumstances of failure are well-understood and clearly linked to a design flaw, then yes—one failure can reveal a critical issue.
    Example: A structural wing failure on a test flight due to design, not pilot error or extreme weather.
  • But if the cause is unclear, or the failure could be due to user error, outlier conditions, or even a fluke, then you do need more data before making strong claims about the overall design.
2. Catastrophic outcomes demand higher scrutiny, not necessarily conclusions.

You can absolutely justify immediate concern or action after one serious failure, but whether you declare the entire system flawed depends on whether you’ve isolated the root cause.
  • In your Boeing example, even a single crash would ground the prototype pending investigation.
  • But whether the entire plane design is scrapped would depend on the outcome of that investigation, not just the crash itself.
3. Practical vs. statistical significance.

You might be saying: “I don’t need a statistical study to trust my gut when something fails spectacularly.” That’s totally valid from a consumer or reviewer standpoint. If a piece of gear breaks badly on its first trip—especially under normal use—you’re justified in being skeptical and even in warning others.

But from an engineering or scientific standpoint, people would want repeatability and causal clarity before calling the design a failure across the board.

Just saying…that from a data standpoint you can’t draw anything conclusive from this.
 
I have a couple suggestions that I would be willing to help with.

1. Enter the data in a google sheet and save to google drive. This would allow us to share a link to the spreadsheet so updates will be live.

2. Add a column that contains a link to the actual test(s) on the forum.

Forgive me if these suggestions have been made or already implemented.
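To make suggestions 1 and 2 concrete, here's a minimal sketch of dumping rows like the record above to a CSV that could be imported into a shared Google Sheet. The column names, the example row, and the link are placeholders I invented, not real results or thread URLs:

```python
# Minimal sketch: write the synopsis rows to a CSV for import into a shared
# Google Sheet. Column names and the example row are illustrative only.
import csv

columns = ["scope", "zero_track_rtz", "drops_18in", "drops_1x36in",
           "drops_3x36in", "rounds_miles_3000", "notes", "thread_url"]

rows = [
    # Made-up example; thread_url would point at the actual test thread.
    {"scope": "Example 3-15x44", "zero_track_rtz": "pass", "drops_18in": "pass",
     "drops_1x36in": "pass", "drops_3x36in": "fail", "rounds_miles_3000": "",
     "notes": "shifted after the third 36\" drop",
     "thread_url": "https://example.com/placeholder-thread"},
]

with open("drop_test_synopsis.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=columns)
    writer.writeheader()
    writer.writerows(rows)
```

A live Google Sheet would obviously skip the CSV step; this is just to show the column layout with the thread link included.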
 
Wow that’s a little extreme and you’re comparing apples to oranges. We aren’t talking about planes crashing

1. One catastrophic failure can be meaningful—but it depends on context.
  • If the circumstances of failure are well-understood and clearly linked to a design flaw, then yes—one failure can reveal a critical issue.
    Example: A structural wing failure on a test flight due to design, not pilot error or extreme weather.
  • But if the cause is unclear, or the failure could be due to user error, outlier conditions, or even a fluke, then you do need more data before making strong claims about the overall design.
2. Catastrophic outcomes demand higher scrutiny, not necessarily conclusions.

You can absolutely justify immediate concern or action after one serious failure, but whether you declare the entire system flawed depends on whether you’ve isolated the root cause.
  • In your Boeing example, even a single crash would ground the prototype pending investigation.
  • But whether the entire plane design is scrapped would depend on the outcome of that investigation, not just the crash itself.
3. Practical vs. statistical significance.

You might be saying: “I don’t need a statistical study to trust my gut when something fails spectacularly.” That’s totally valid from a consumer or reviewer standpoint. If a piece of gear breaks badly on its first trip—especially under normal use—you’re justified in being skeptical and even in warning others.

But from an engineering or scientific standpoint, people would want repeatability and causal clarity before calling the design a failure across the board.

Just saying…that from a data standpoint you can’t draw anything conclusive from this.
While I agree with you that unequivocal proof of statistical failure is only seen through large sample sizes, a failure of one randomly selected sample is still very telling.

It is rather unlikely, given the odds, to select the .00001% of my favorite scope brand's failures on the first try. It is literally like winning the lottery. Now hitting the jackpot twice, or a sample size of 2, REALLY seems unlikely.

So which scope did you think warranted a 2nd or 3rd lottery "win"?
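To put rough numbers on the lottery analogy (all of these failure rates are hypothetical, just to show the arithmetic, not claims about any brand):

```python
# Back-of-the-envelope check on the "lottery" argument: if a model's true
# failure rate under this eval were really tiny, seeing a failure on the very
# first randomly selected sample would be correspondingly unlikely.
# The rates below are made up for illustration.
for p in (0.0001, 0.01, 0.10, 0.30):
    one_fails = p            # chance the single tested sample fails
    two_fail = p * p         # chance two independently chosen samples both fail
    print(f"true failure rate {p:>7.2%}: "
          f"P(first sample fails) = {one_fails:.4f}, "
          f"P(both of two samples fail) = {two_fail:.8f}")
```

One failed sample can't tell you the exact failure rate, but it does make a very low one hard to square with what was observed, which is the point above.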
 
Sheeples much….? This is interesting data, but you can hardly draw any conclusions from this.

How many scopes were tested for each make and model? This is a biased sample size and nothing conclusive can be drawn from a spreadsheet with pass or fail.

Once again, this data is useless with such a small sample size. You can’t test 1, 2 or even 10 scopes and say that it is representative of the entire population….you need hundreds if not thousands of samples for each scope. People should definitely be testing this stuff for themselves and not believing everything they read just because someone tested a few scopes.
This argument has been made, and addressed, on many occasions.
 
While I agree with you that unequivocal proof of statistical failure is only seen through large sample sizes, a failure of one randomly selected sample is still very telling.

It is rather unlikely, given the odds, to select the .00001% of my favorite scope brand's failures on the first try. It is literally like winning the lottery. Now hitting the jackpot twice, or a sample size of 2, REALLY seems unlikely.

So which scope did you think warranted a 2nd or 3rd lottery "win"?
This is how I feel about these tests. No one is saying that every scope will pass or fail based on the testing of one. But if the failure rate is low then it is unlikely that the one you test will fail. Probabilities.
 
Wow that’s a little extreme and you’re comparing apples to oranges. We aren’t talking about planes crashing

1. One catastrophic failure can be meaningful—but it depends on context.
  • If the circumstances of failure are well-understood and clearly linked to a design flaw, then yes—one failure can reveal a critical issue.
    Example: A structural wing failure on a test flight due to design, not pilot error or extreme weather.
  • But if the cause is unclear, or the failure could be due to user error, outlier conditions, or even a fluke, then you do need more data before making strong claims about the overall design.
2. Catastrophic outcomes demand higher scrutiny, not necessarily conclusions.

You can absolutely justify immediate concern or action after one serious failure, but whether you declare the entire system flawed depends on whether you’ve isolated the root cause.
  • In your Boeing example, even a single crash would ground the prototype pending investigation.
  • But whether the entire plane design is scrapped would depend on the outcome of that investigation, not just the crash itself.
3. Practical vs. statistical significance.

You might be saying: “I don’t need a statistical study to trust my gut when something fails spectacularly.” That’s totally valid from a consumer or reviewer standpoint. If a piece of gear breaks badly on its first trip—especially under normal use—you’re justified in being skeptical and even in warning others.

But from an engineering or scientific standpoint, people would want repeatability and causal clarity before calling the design a failure across the board.

Just saying…that from a data standpoint you can’t draw anything conclusive from this.
*sigh* new guy joins and makes the same arguments as every other new guy.
 