@UpSideDown I agree with you that the scope evals are misused by some, but you're falling victim to that mistake yourself. The explanation spells out that the drop eval is meant to be predictive of a future failure from general use that DOES NOT necessarily have an identified "trigger" such as dropping the scope. It's easy to check zero when a fall or impact causes the shift, but that misses the whole point: the merit of the test is that it provides some small amount of objectivity, otherwise nonexistent, on how likely a specific scope model is to fail from general hunting use over time, in addition to how likely it is to be affected by relatively normal impacts.

My limited experience, where it overlaps with the evals, matches them, so I have multiple data points to draw a line and extrapolate from. It's literally the ONLY thing out there that attempts to evaluate this. Replace it with something more objective and more quantifiable, and I and everyone else will happily use that criterion instead. But until there is ANY other way to accomplish this, what objective reason does anyone concerned with reliability have to choose a scope from among those that have consistently done poorly, when there are good options that have done well?

To me, even though I see legitimate issues with the eval, the risk of ignoring the trends I see in the evals is FAR greater than the risk of taking them into account. There is zero additional risk to me if I take them into account.