Best Long Range Lightweight Scope

I started with Mk5HDs and got rid of them.
They aren't that great, especially for the price.

For FFP MOA in that price range I'd probably look at the Trijicon and Maven RS1.2 offerings.
Maybe an NX6 3-18 too(?)
 
You don't have to like the test, but no need to fire shots at it. It actually is scientific and the results seem to match what many here experience during long term use of the scopes.

If you're looking for some confirmation bias on Vortex or Leupold you'll probably have to switch your search to another site.
That's cute... your odd assumption of my need for confirmation bias. While it's an attempt at science, it isn't science. It is anecdotal (it's okay, a lot of people still confuse the two), with overwhelming opportunity for variability and a lack of repeatability. Now, bring a lot of anecdotes together from the same standardized, controlled testing, without glaring opportunity for variability, and we will be talking science! I'm excited for that.

It may reflect user experience, which I'm actually more interested in than an anecdote, but still, that is suspect, and the methods can be improved.
I haven’t noticed your posts of the scientific tests you have done on scopes…or maybe the scientific tests that prove the drop tests done in the scope evals here are no more valuable than a random dude saying X scope is tough as nails.

There is logic in saying that a single passed test doesn't necessarily mean all or most of those items will continue to pass. But what about failures? If the drop tests here are of no more value than a random dude in a gun store saying X scope is all you ever need, then what are the chances the one copy tested fails the drop test? Pretty small, if the assumption is that the tests don't hold true and tell us nothing about a scope's durability. Now, what if two of those scopes are tested and they both fail? Those odds must be getting quite low.

If you have/can/will improve on the drop tests done so far, please do so and share the results. I’d love to see someone test the durability of 20+ copies of each model scope. That would be a whole new level of useful data.
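To put rough numbers on the argument above (my own illustration, not anything from the thread): if a drop test really carried no information, a scope would only "fail" it by chance, at some small baseline probability p. Two independent failures would then be p squared, which gets unlikely fast. The value of p here is an assumption for the sake of the sketch.

```python
# Sketch of the "what if two copies both fail?" argument.
# Assumption: a meaningless drop test would only be "failed" by random
# chance, at some baseline probability p (p = 0.1 is a made-up number).
p = 0.1

one_fails = p            # chance the single tested copy fails by luck
both_fail = p ** 2       # chance two independently tested copies both fail

print(f"One tested copy fails by chance: {one_fails:.2%}")   # 10.00%
print(f"Two copies both fail by chance:  {both_fail:.2%}")   # 1.00%
```

So even under the "the test is worthless" assumption, repeated failures of the same model are hard to wave away, which is the point being made.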
That's what I'm talking about... increasing the N of the study, but also changing the drop test: make it a constant, accurate, calculated, repeatable force and point of impact. I'm not saying the concept of the study isn't extremely valuable. I'm saying we have a binary result (fails drop/passes drop) built on fallible, variable testing methods.

I would suggest: place the rifle in a bench vise and drop a hammer or some controlled weight onto several impact points... that is repeatable and accurate across each scope and rifle system. Then it doesn't matter whether the rifle landed on softer ground, dispersed the impact over a broader surface area, or whatever other variable entered the system.

It's the start of a very useful process, but needs refinement and a larger N.
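A quick sketch of why the larger N matters (my own back-of-the-envelope, not from the thread): with a binary pass/fail result, the uncertainty on the estimated pass rate shrinks roughly with the square root of N, so one sample tells you very little and twenty tells you a lot more. The function name and numbers here are mine.

```python
import math

def std_error(p: float, n: int) -> float:
    """Standard error of an estimated pass/fail proportion p from n samples."""
    return math.sqrt(p * (1 - p) / n)

# Worst-case uncertainty (p = 0.5) for different sample sizes:
for n in (1, 5, 20):
    print(f"N = {n:2d}: +/- {std_error(0.5, n):.3f}")
```

With N = 1 the uncertainty is as large as the quantity being estimated; at N = 20 it has dropped to roughly a tenth, which is why testing 20+ copies per model would be a whole new level of useful data.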
Feel free to perform this yourself for everyone's benefit, or point us to where this database exists.

Until then, the closest thing I've found to that is the bro-science drop evals.