How much zoom do you like on a 500 yard hunting shot?

There are frequently comments here about "spotting your shot" and managing magnification to achieve this.

There is of course a spectrum of what "spotting the shot" looks like, and achieving different points on that spectrum requires different approaches.

I shoot a lot (hundreds-1k+) of animals per year with a .223, usually from 0 to 500 metres. If I am shooting groups of animals and wish to be able to locate the next animal rapidly, I will frequently use lower magnification - 6-8x for a wider field of view and faster transitions. However, this doesn't allow me to see the actual bullet impact on the animals very well; I can see the effect of the impact but not the exact location. To see this with a .223 at longer ranges (400+ m) I usually need to be up in the 12-18x range. At longer ranges with the .223 I like to see where the bullet has gone in to know that it's good, as follow-ups on a poor shot are more difficult.

Seeing a miss in the dirt, or the general effect of the shot on the animal, usually requires less magnification.

A larger bullet makes a more obvious impact, but conversely is harder to watch through the recoil.
 
What research are you referring to? Was it done with scientific controls?

I've read a lot of opinions, conjecture, and individual accounts attributing POI problems to a Leupold scope, but I have not read any scientific research or the results of properly controlled experiments that back up that conclusion. Empirical data is a starting point for research, not the end result. As I have said before, Form's drop tests have value, but they should not be considered controlled scientific experiments. Nobody, to my knowledge, is doing any properly controlled experiments nor have testing methodologies with repeatable conditions been established.

It is so unpopular to deviate from the anti-Leupold opinion often expressed on Rokslide that it is rarely done, but at times the Leupold users who have never had problems do speak up, and they are not few in number. This alone makes me wary that groupthink may be behind much of the Leupold bashing, but I am not close-minded about it either. The bashing may be justifiable. With that in mind, as I said, if I were to buy another scope now, it would be a Trijicon.

Please define:

“scientific controls”.

“Properly controlled experiments”

“controlled scientific experiments”
 
Sure, at least what it means to me in regard to the subject at hand. All quoted terms refer to the controls applied in any experiment to ensure consistency in the testing methodology.

All variations in input forces, such as amount of force, angle of application, and speed of application, are controlled to be the same for each item exposed to the input. Equipment differences that can mitigate the effects of the input are eliminated, such as differences in mounting (both the rings used and where they are located on a scope), tightness of screws, and any other factor that can absorb the input forces. Post-input testing is done on equipment independent of that used for application of the input forces and is likewise controlled to ensure differences between tested items are eliminated.

Much of what I am describing on the input side for scopes can be done on a shaker table, and is done for seismic qualification testing of equipment for nuclear power plants. Other aspects can be done in a manner similar to how Charpy V-notch testing is done. Given the issues your testing has made obvious, I don't understand why the mfgs have not developed a standard testing methodology for scopes as has been done for so many other equipment items.
 
Without a spotter, whatever allows me to spot the shot. That’s usually 10x with my main rifle.

In magnums (where I’m not spotting the shot anyway) or with a spotter, I’ll zoom in a good bit more. Especially on smaller game.

Some people take economizing zoom as some proxy for manliness. Like liking spicy food or drinking shitty beer. It’s overrated.
 
2025 elk at 740 on 6X. On MOA-sized targets and larger out to about 1400 yards I stay between 6-9X (I would probably use a higher power past 1,000, but my scopes don't go that far), but it also has not been an issue. When I'm getting a 20-30 shot zero at 100 I max 'em out... I don't shoot well enough to spot my shot on really high power inside of 300ish with my Fast 6mm; I can get away with it at longer distances, or at any distance with a .223, but there really is no point. Aside from spotting shots, a couple of reasons I like a lower power are: I can see where the animal moves if it's not a bang/flop, and I can see what is in front of, behind, around, or may move into my shot, i.e. other animals.
 
I spent last season with a fixed 10x on my hunting rifle. Practiced on steel out to 1000 quite a bit leading up to hunting season. I have zero issues with that scope out to 1000, but keep in mind that I practice and hunt in pretty wide open country. Longest shot taken at an animal last season was 400 yards on a mule deer buck.
 
Sure, at least what it means to me in regard to the subject at hand. All quoted terms refer to the controls applied in any experiment to ensure consistency in the testing methodology.

All variations in input forces, such as amount of force, angle of application, and speed of application, are controlled to be the same for each item exposed to the input. Equipment differences that can mitigate the effects of the input are eliminated, such as differences in mounting (both the rings used and where they are located on a scope), tightness of screws, and any other factor that can absorb the input forces. Post-input testing is done on equipment independent of that used for application of the input forces and is likewise controlled to ensure differences between tested items are eliminated.

Much of what I am describing on the input side for scopes can be done on a shaker table, and is done for seismic qualification testing of equipment for nuclear power plants. Other aspects can be done in a manner similar to how Charpy V-notch testing is done. Given the issues your testing has made obvious, I don't understand why the mfgs have not developed a standard testing methodology for scopes as has been done for so many other equipment items.
Testing is extremely expensive. Manufacturers don't do things like that in a public/standardized way unless it addresses a significant liability concern across the industry. This is your nuclear facility, or any standardized test such as an ANSI or UIAA standard for safety equipment. These aren't done for consumer information; they are done to protect the company from lawsuits after someone gets hurt. The "standard" is developed so the industry and companies within it have something concrete to point to, to say "see, our product meets the safety standard, therefore it's not our fault". There is no liability concern with a rifle scope, so while each manufacturer may or may not test zero retention for their own QC or development, an industry-wide standard simply isn't going to happen.

As a consumer, you can rely on the "quick and dirty" evaluations here, which seem to have pretty repeatable results even given some understandable level of criticism. Or you can come up with your own methodology. Or you can try to sort through the conflicting claims online based on where they overlap with your personal experience. Or you can believe there's no issue and roll the dice. Those are your choices as far as I can tell, so pick one. Nowhere else have I seen any attempt whatsoever to test zero retention in an at all transparent way that's available to consumers (which is the only way for it to have value). If another option exists, I'm all ears.

Regardless, real engineers and QC people do "quick and dirty" tests all the time, many of which are no more controlled than the scope evals here, and they even make decisions based on them on a regular basis. Not every test needs to be done to the nines in order to have value. In fact, in many cases I'd argue there's MORE value to a "homegrown" test that can be performed anywhere by virtually anyone, when the alternative is nonexistent or unavailable.

This has all been argued ad nauseam here many times.

I don't know that this is super relevant to zoom level for 500 yard hunting shots, though.
 
Given the issues your testing has made obvious, I don't understand why the mfgs have not developed a standard testing methodology for scopes as has been done for so many other equipment items.
Most likely explanation: they don't develop better tests because they've made a living selling scopes that would fail and it would cost them a lot to change, much less admit their older stuff is flawed.

The drop tests done here are hands down the best thing in the hunting riflescope industry right now and I hope they sell a bazillion of the new S2H scopes and I hope other makers step up.

Also, I hope the drop testing of other scopes continues after the S2H scope comes to market. Yeah, sure, a test fixture to allow perfect consistency would be nice, and with proper design all testing could be done with a collimator instead of live fire - in theory - but until someone steps up, designs such a fixture, and makes it an industry standard, I'll continue to be wildly in favor of the drop testing we see here. It is truly something unique in the gun industry.
 
The drop tests done here are hands down the best thing in the hunting riflescope industry right now...
I absolutely agree and appreciate what has been done.

Testing is extremely expensive. Manufacturers dont do things like that in a public/standardized way unless it addresses a significant liability concern across the industry.
Although liability concerns do drive standardized testing, it isn't correct that standards are not also developed for marketing reasons. IP68 for consumer electronics (water/dust intrusion) is one example. ISO 23537 for thermal performance of sleeping bags is another.

I dont know that this is super relevant to zoom level for 500 yard hunting shots though.
True. We got here from Leupold bashing, which was also not relevant to the original query. My apologies to the OP.
 
Sure, at least what it means to me in regard to the subject at hand.


“What it means to you”- is the issue.


These are the steps of the scientific method:

Step 1: Make observations
Step 2: Propose a hypothesis to explain observations
Step 3: Test the hypothesis with further observations or experiments
Step 4: Analyze data
Step 5: State conclusions about hypothesis based on data analysis



Which of those do you see that the scope evals are not following?


All quoted terms refer to the controls applied in any experiment to ensure consistency in the testing methodology.


What controls do you see that the scope eval isn’t doing?

Don’t just say- “oh it’s a gun”, or “oh it’s a human”- legitimate tests are done with human input. To be legitimate, repeatable and reproducible does not require removing the human element.



All variations in input forces, such as amount of force, angle of application, and speed of application, are controlled to be the same for each item exposed to the input.

They are controlled. Nothing is 100%- not even machines. Tests must be repeatable and reproducible to be legitimate.

The standards and “test” procedures of the scope eval are given. The variables are controlled such that the outcomes/results are consistent, repeatable, and reproducible by others following the same procedures.



Equipment differences that can mitigate the effects of the input are eliminated such as differences in mounting (both rings used and where they are located on a scope),


Every scope has a different main tube length- therefore this requirement means no scope can be measured.


tightness of screws

You haven’t read, or you haven’t understood the standards or the evals if you are saying this.



and any other factor that can absorb the input forces.

You mean like controlling for variables? Such as bonded chassis and rail, proofing the system before testing, same shooter, same ammo, same location, and reproofing the system post test? And then stating items that can’t be controlled for and potential sources of errors- just like the evals do?


Post input testing is done on equipment independent of that used for application of the input forces and is likewise controlled to ensure differences between tested items are eliminated.

Again, you either haven't read or haven't understood the "test" procedures if you are saying this.



Much of what I am describing on the input side for scopes can be done on a shaker table,


No, it cannot. “Shaking” isn’t a problem with most scopes. Impacts are- they are not the same.



and is done for seismic qualification testing of equipment for nuclear power plants.

Is that supposed to mean something to an optical device with a reticle that aligns with a target?


Other aspects can be done in a manner similar to how Charpy V-notch testing is done.

Oh please explain how Charpy testing applies to scopes.


Given the issues your testing has made obvious, I don't understand why the mfgs have not developed a standard testing methodology for scopes as has been done for so many other equipment items.

Well, they generally don’t understand testing either, and they also have no desire for the truth to be known widely.
 
Much of what I am describing on the input side for scopes can be done on a shaker table,

Form:
No, it cannot. “Shaking” isn’t a problem with most scopes. Impacts are- they are not the same.

Of course vibration and impact are different, but folks complain about scopes losing zero from vibration, like riding in a SxS or riding in a truck travelling to a hunt location. A shaker table can simulate those conditions. Yes, it's not an impact test, but is it not a test that could be performed for factors affecting the ability to maintain zero under conditions normally experienced? Also, a shaker table can simulate the impulse to a scope mounted on a rifle when the rifle is what hits the ground and not the scope.

and is done for seismic qualification testing of equipment for nuclear power plants.

Form:
Is that supposed to mean something to an optical device with a reticle that aligns with a target?

Yes, absolutely. It is a test method that can demonstrate whether an item is robust, or determine what level of input causes failure.


Other aspects can be done in a manner similar to how Charpy V-notch testing is done.

Form:
Oh please explain how charpy testing applies to scopes.

I didn't say Charpy testing applies to scopes, but that a similar method could be used on scopes. Charpy testing is done by causing a specific impact force on a test specimen using a weighted swing arm. Essentially the same arrangement could be used to apply identical impact forces to a specific location on a scope. I didn't know it at the time I wrote what you quoted, but that method is used in MIL-S-901D to shock-test shipboard equipment.


My intent is not to criticize or diminish the value of the testing you have done but rather to lobby for a standard used by all scope mfgs. There is a place for something widely recognized as independent to dispense with any suggestions of favoritism or individual bias that consumers can use to evaluate purchase options. You and others have shown that scope designs and manufacturing processes have evolved to the point where scopes can withstand shocks and retain zero, if built to do so. It's time we had an accepted standard for comparison.
 
Although liability concerns do drive standardized testing, it isn't correct that standards are not also developed for marketing reasons. IP68 for consumer electronics (water/dust intrusion) is one example. ISO 23537 for thermal performance of sleeping bags is another.
True. However, in both of those cases other primary motivators for adopting an industry standard were retailer pressure, other global markets demanding better info, and a big one: protecting "honest" manufacturers from having to compete against similar (but BS) marketing claims from inferior products. I.e., there was a major upside for the industry. At this point it's patently clear the major manufacturers have no interest, which means they don't see the upside being enough to bother. Someone has to take on the project, which is very $$$$$, but they aren't going to unless it's financially worthwhile. Maybe ZeroTech? 😁
 
Of course vibration and impact are different but folks complain about scopes losing zero from vibration, like riding in a SxS, or riding in a truck travelling to a hunt location. A shaker table can simulate those conditions. Yes, not an impact test, but is it not a test that could be performed for factors affecting the ability to maintain zero under conditions normally experienced?

Scopes on a shaker table can’t be collimated. Or not easily. But, it would work for Zero retention from vibrations.



Also, a shaker table can simulate the impulse to a scope mounted on a rifle when the rifle is what hits the ground and not the scope.

What use is that? Often enough the scopes themselves bend or get out of whack. Removing the rifle does not make it more applicable or better - the opposite, in fact. The rifle and mounts are functionally one and the same with the scope, and how scopes react to being dropped while attached to a rifle is the whole issue.



Yes, absolutely. It is a test method that can demonstrate whether an item is robust, or determine what level of input causes failure.

Please explain.



I didn't say Charpy testing applies to scopes but that a similar method could be used on scopes. Charpy testing is done by causing a specific impact force on a test specimen using a weighted swing arm. Essentially the same arrangement could be used to apply identical impact forces to a specific location on a scope.


It’s been done. It wasn’t worth the time or effort, and the slight randomness is more like reality than “it will be hit at exactly this spot, only this spot, and only with ‘X’ force”. That’s a surefire way for manufacturers to design scopes that pass the “test” but still fail in reality on rifles.

If you care about your rifle holding zero when it is dropped or takes an impact, you WANT it tested for drops and impacts on a rifle.


I didn't know it at the time I wrote what you quoted, but that method is used in MIL-S-901D to shock test shipboard equipment.

Not for impact-induced zero shifts on optical sighting systems.


My intent is not to criticize or diminish the value of the testing you have done but rather to lobby for a standard used by all scope mfgs.


It’s fine if you are. The issue is like most others, you are stating things that are fundamentally not true.


The reality is that the drop evals do follow the scientific method, and they do follow proper testing procedures and processes. They aren’t random and they aren’t uncontrolled.


There is a place for something widely recognized as independent to dispense with any suggestions of favoritism or individual bias that consumers can use to evaluate purchase options. You and others have shown that scope designs and manufacturing processes have evolved to the point where scopes can withstand shocks and retain zero, if built to do so. It's time we had an accepted standard for comparison.

That would be great if it wasn’t a BS test. However, it isn’t going to happen. Literally no one wants that in the industry.
 
That would be great if it wasn’t a BS test. However, it isn’t going to happen. Literally no one wants that in the industry.

Not if we don't push for it and just accept the status quo, or take the approach that what you have done is all that is needed, so why bother doing anything else. I appreciate your work and agree with you on many points but disagree on others. That's where I will leave it.
 
I shot a bull elk at 640 yds this year and I had an NX8 4-32 on my rifle. When I got back to camp, I made a point to look at my magnification. I believe I was around 10x. Light was fading and I turned it up just enough to see the center of the reticle well and wind holds while still maintaining a good FOV. My eyes are horrible too; most anyone else who isn't blind could have done 8x.

Prior to taking the S2H class, that puppy would have been cranked to at least 25x for that shot.
 