Very interesting thread. I'm new to the forum, don't know anyone here (to my knowledge), and thus don't have any preconceived bias. Retired now, but I come from a scientific background, working in areas with lots of inherent uncertainty. That meant rigorous study plans and lots of statistical analyses, and even then the results often ended in (to be kind) a lack of agreement among all involved.
So now, boldly attempting to objectively evaluate scopes (why not pick vehicles and really make people mad, e.g., Ford vs. Chevy vs. Toyota?), Form is being savaged by some and hugged by others. Let's not make it personal.
Clearly, as stated, these tests are based on a sample size of one: n=1. That is a data point, but it's a single dot on a page. A larger sample size would fix that (measures of dispersion could then be calculated), but then, as pointed out, who's going to bankroll that for one scope model? Let alone for subsequent models, to ultimately allow comparisons.
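To illustrate the n=1 point concretely, here's a minimal sketch (the numbers are made up purely for illustration, not real test data): with a single scope you can report a value, but you can't compute any measure of spread, since sample dispersion needs at least two data points.

```python
import statistics

# Hypothetical zero-shift values (in MOA) after a drop test.
# These figures are invented to illustrate the sample-size point only.
one_scope = [0.3]                        # n = 1: a single data point
five_scopes = [0.3, 0.1, 0.6, 0.2, 0.4]  # n = 5: five samples of one model

mean_1 = statistics.mean(one_scope)      # a "mean" exists, but...
# statistics.stdev(one_scope)            # ...raises StatisticsError:
#                                        # dispersion needs >= 2 points

mean_5 = statistics.mean(five_scopes)
sd_5 = statistics.stdev(five_scopes)     # sample SD (n - 1 in denominator)

print(f"n=1: mean={mean_1} MOA, spread=undefined")
print(f"n=5: mean={mean_5:.2f} MOA, sd={sd_5:.2f} MOA")
```

With n=5 you get both a mean and a standard deviation, so two scope models could be compared statistically rather than anecdotally; with n=1 all you can say is "this one sample did X."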
I can't really support or criticize the testing methods. In a good test, you want to vary only one thing at a time, and I guess checking "design function" (RTZ, etc.) against the drops is just that. Can't recall if these functions were measured prior to the drop tests or not. Presume so; they should have been.
Sorry for rambling. Bottom line: I think these trials are interesting. I'd think, and hope, that if people see shortcomings that can reasonably be addressed, they ought to make those suggestions. And maybe invite some of the scope manufacturers' reps to chime in with suggestions of their own. Scopes are very precise optical-mechanical devices, and they lead much harder lives than most precision instruments: used in heat and cold, wet and dry, subject to vibration and impacts (from recoil, if nothing else). But that's the environment they are expected to live in.
There's still going to be the n=1 problem, but Form is already investing plenty of money and his time (also=$).