Statistical Analysis of the Rokslide Drop Tests

Bado20
I was curious to see whether there is a statistically significant difference in drop test performance between certain brands and the overall market. I conducted an analysis of the drop testing data and figured I would share my findings with everyone on here.

TLDR: Trijicon, SWFA, and Nightforce perform better than the broader market at statistically significant levels.

[Attached images: 1.jpg through 10.jpg]
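For anyone who wants to see the mechanics, here is a minimal sketch of the kind of brand-versus-market comparison described above, using Fisher's exact test on pass/fail counts. The counts are placeholders rather than the actual drop-test numbers, and the analysis in the attached images may have used a different test.

```python
# Minimal sketch of a brand-vs-market comparison on pass/fail drop-test
# results. The counts are PLACEHOLDERS, not the actual Rokslide data.
from scipy.stats import fisher_exact

brand_pass, brand_fail = 9, 1     # hypothetical record for one brand
rest_pass, rest_fail = 40, 50     # hypothetical record for every other scope

table = [[brand_pass, brand_fail],
         [rest_pass, rest_fail]]

# One-sided test: does this brand pass at a higher rate than the rest?
odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"one-sided p-value: {p_value:.4f}")
```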
 
I think the Maven RS1.2 results should be separated, as it's a known outlier.
You should only remove outliers if you suspect they come from poor testing or bad sampling. Unless you think Form didn't run the test the same way with the RS1.2, or that Maven sent a cherry-picked unit, the data is valid.

I was surprised to find Maven in the bottom 5, but I guess whatever they're doing with the RS1.2 hasn't translated across all their lines yet.

Realistically, Maven making a solid scope in the RS1.2 but struggling with drop test durability overall isn't any different from Leupold making a bunch of optics that perform poorly while the 10x Ultra M3A appears solid.
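One way to handle a disputed point like the RS1.2 without throwing data away is a simple sensitivity check: run the same comparison with and without that unit and see whether the conclusion actually moves. A rough sketch, with made-up counts standing in for Maven's record rather than the real figures:

```python
# Sensitivity check: does the brand-vs-market comparison change if one
# disputed unit is excluded? All counts are made up for illustration.
from scipy.stats import fisher_exact

def brand_vs_rest_p(brand_pass, brand_fail, rest_pass, rest_fail):
    """Two-sided Fisher p-value comparing a brand's pass rate to the rest."""
    _, p = fisher_exact([[brand_pass, brand_fail],
                         [rest_pass, rest_fail]])
    return p

# Hypothetical: the brand has 1 pass (the disputed unit) and 4 failures.
p_with = brand_vs_rest_p(1, 4, 45, 50)      # disputed unit included
p_without = brand_vs_rest_p(0, 4, 45, 50)   # disputed unit excluded

print(f"p with the unit included: {p_with:.3f}")
print(f"p with the unit excluded: {p_without:.3f}")
```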
 
For most of the companies, especially ones getting their scopes made by OEMs, you tend to need to view each one on its own and not as a "brand", because the brand is mostly just a label that has little to do with the actual scope in most instances.
 
This may be true for most companies, but it is clearly not true for all of them; the results show that at least SWFA and Trijicon, and likely NF as well, are doing something across their lines that causes them to do better on these drop tests.

As far as comparing a specific model, it would be a useful comparison but there would need to be test data for at least 4 examples of that model to have a chance at any meaningful inference.
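To put a rough number on that, here is a sketch of why one or two samples of a model usually can't tell you anything at conventional significance levels, even in the extreme case where every sample fails. The 70/30 market split below is an assumption, not the real tally from the tests.

```python
# How many all-failure samples of one model does it take before a one-sided
# Fisher test against a hypothetical market record clears p < 0.05?
# The 70-pass / 30-fail "market" below is an assumption, not the real data.
from scipy.stats import fisher_exact

market_pass, market_fail = 70, 30

for n in range(1, 7):  # the model is tested n times and fails every time
    _, p = fisher_exact([[0, n], [market_pass, market_fail]],
                        alternative="less")
    print(f"{n} sample(s), all failed: one-sided p = {p:.3f}")
```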
 
There's a stronger correlation with which OEM is actually producing the optic than with the brand stamped on it. This should apply to the RS1.2, RS6, Bushnell LRHS/LRTS/LRHS2, NF, etc. The car battery analogy applies to good rifle scopes.

Like this guy says:
For most of the companies, especially ones getting their scopes made by OEMs, you tend to need to view each one on its own and not as a "brand", because the brand is mostly just a label that has little to do with the actual scope in most instances.
 
This may be true for most companies, but it is clearly not true for all of them; the results show that at least SWFA and Trijicon, and likely NF as well, are doing something across their lines that causes them to do better on these drop tests.

As far as comparing a specific model, it would be a useful comparison but there would need to be test data for at least 4 examples of that model to have a chance at any meaningful inference.
No.

In general, you cannot base your data dump on "brands". They don't make anything, and saying this "brand" of scope is or is not reliable is not the way.

It must be model-specific within the design houses.
 
I think he gets that point.

His point is you can’t get statistical significance with one scope here and one scope there.
 
I understand that looking at specific models would be ideal, but at this point there isn't enough data to do that.

Also, some of the "brands" do perform better. I don't think any of us are stupid enough to think it is the logo on the turret that is causing them to do so. Obviously brand is a proxy for something else, be it a design team, an OEM, whatever. But you don't get a p-value of 0.0003 on something by random chance.
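For anyone who wants intuition for what a p-value like that means here, a permutation check makes it concrete: pool all the results, shuffle the brand labels, and count how often a random group of the same size looks as good as the real one did. Everything below is placeholder data, not the actual results, and this is not necessarily the test used in the original analysis.

```python
# Permutation check: how often does a randomly labeled group of the same size
# show a pass-rate gap as large as the one observed? Placeholder data only.
import numpy as np

rng = np.random.default_rng(0)

brand = np.array([1] * 9 + [0] * 1)        # hypothetical: 9 of 10 held zero
others = np.array([1] * 40 + [0] * 50)     # hypothetical: 40 of 90 held zero

observed_gap = brand.mean() - others.mean()

pooled = np.concatenate([brand, others])
n_brand = len(brand)

n_iter = 20_000
hits = 0
for _ in range(n_iter):
    rng.shuffle(pooled)
    gap = pooled[:n_brand].mean() - pooled[n_brand:].mean()
    if gap >= observed_gap:
        hits += 1

print(f"observed gap in pass rate: {observed_gap:.2f}")
print(f"approximate one-sided permutation p-value: {hits / n_iter:.4f}")
```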
 
At what point do you just use the actual manufacturing facility location and not the brand, then? There is already a high degree of correlation there…

How many different factories/manufacturers does Vortex use?
 
It would be interesting to look at. I don’t know where to find the manufacturing plants for all the specific models of scope though.
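If that information ever did surface, the regrouping itself is trivial; it's just a different key on the same rows. A sketch with entirely hypothetical brand, model, and factory names, since the real OEM assignments aren't public:

```python
# Same drop-test rows, summarized two ways: by the brand on the turret and by
# the factory that actually built the scope. All names here are hypothetical.
import pandas as pd

rows = [
    {"brand": "BrandA", "model": "ModelX", "factory": "FactoryJ", "passed": True},
    {"brand": "BrandA", "model": "ModelY", "factory": "FactoryP", "passed": False},
    {"brand": "BrandB", "model": "ModelZ", "factory": "FactoryJ", "passed": True},
    {"brand": "BrandC", "model": "ModelQ", "factory": "FactoryP", "passed": False},
]
df = pd.DataFrame(rows)

by_brand = df.groupby("brand")["passed"].agg(["mean", "count"])
by_factory = df.groupby("factory")["passed"].agg(["mean", "count"])

print(by_brand, by_factory, sep="\n\n")
```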
 
This may be true for most companies, but it is clearly not true for all of them; the results show that at least SWFA and Trijicon, and likely NF as well, are doing something across their lines that causes them to do better on these drop tests.

That is true. The difference is that these three all openly prioritize durability and reliability. Vortex uses the same OEM for some of their scopes, yet because of their choices in specifics they do not tend to be durable or reliable.

If just looking at a broad view of it, SWFA, Trijicon, and NF clearly offer the highest probability of good service from their scopes. But there are very good models from others as well. And when looking at those, you have to view them as a stand-alone option, with the Leupold Mark 4 fixed powers as an example.




As far as comparing a specific model, it would be a useful comparison but there would need to be test data for at least 4 examples of that model to have a chance at any meaningful inference.

Kind of. This is probably too deep for me to type on my phone, but think total failure of design intent. Using a truck for instance, you do not need to crash 30 of them (or even 4) if the first 1 or 2 has the motor getting pushed into the back seat from a front-end 15 mph impact. The design at that point is clearly and unequivocally too fragile to be safe for use.

However, to see the actual difference between two good options, say between a Leupold fixed-power Mark 4 and a Trijicon Tenmile, yes, lots of samples need to be tested, and rigorously so. BUT that isn't what the initial eval is looking for; it's functionally looking for which cars (scopes) are so unsafe (loss of zero) that they shouldn't even be considered.
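The truck analogy translates directly into a short calculation: if a design's true per-unit failure rate were actually low, having the first one or two samples both fail would already be a very unlikely draw, which is why early catastrophic results carry weight. The failure rates below are assumptions for illustration, not measured values.

```python
# If a scope design's true per-unit failure rate were this low, how likely is
# it that the first two samples tested would BOTH fail? Rates are assumptions.
for fail_rate in (0.05, 0.10, 0.20):
    p_first_two_fail = fail_rate ** 2
    print(f"true failure rate {fail_rate:.0%}: "
          f"chance the first two both fail = {p_first_two_fail:.2%}")
```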
 