There are lots of standardized and semi-standardized tests used across various industries to simulate vibration, movement cycles, impacts, and all sorts of other stresses. To be reproducible and quantitative it would require some sort of standard, as you suggested, but there is no reason at all you couldn't use an equally standardized impact test.

Form, thanks for the additional info. To answer your question about a shake table test: the table can be set to different magnitudes and frequencies of vibration, so scopes could be tested independent of a rifle and its mounting, each in exactly the same way. That would "normalize" some aspects of testing, although it could not reproduce certain real-world effects, such as an impact to the objective bell or another specific location on a scope that might be significant in causing a failure.
Shake table tests could be used to establish industry-standard robustness ratings, similar to the ingress protection (IP) ratings used for electronic gear. That would push manufacturers to publish their ratings and allow consumers to make better-informed decisions.
Regardless, I have a very hard time believing that an industry-wide standardized test demanding enough to achieve what you want would be adopted by the very companies you most need to participate. Leupold and Vortex have nothing to gain and the most to lose from participating. I think the best result you could hope for here is to get general consumers to look for, and buy based on, independent testing that incorporates impact, vibration, or whatever else into a real-world measure of reliability.
Also, from experience, be very wary of any "industry norm" that is lenient enough for all the companies to agree on it. Unless the problem you are trying to address suddenly stops existing (i.e., scopes suddenly stop losing zero), a standard that isn't failing a good percentage of the product out there isn't tough enough to actually tell you anything.
I saw this exact thing where I used to work. The industry tried to adopt a standardized test for climbing ropes meant to demonstrate resistance to cutting on sharp rocks. All the manufacturers had different ideas about how it should be done, and different internal tests they were already using to justify a marketing claim without exposing themselves to undue risk. At the end of the day, the only test they could agree on was the most lenient one, and lo and behold, virtually every product made by every manufacturer worldwide passed it. It was a functionally useless test that told consumers nothing and only served to give people a false sense of security. Rather than agree on a more rigorous test that might actually provide some consumer benefit, but might exclude some manufacturers' products until they invested in different technology, the industry chose to scrap the whole thing. That was probably better for consumers in this case, but it's a real-world example of a norm like this being adopted that did not provide any clarity or help consumers in any way.