#41 · cytherian (Thread Starter)
Joined Nov 7, 2022 · Messages: 28
FWIW, to add... and this is my layman's opinion here...
Looking at a group of 10 binos in a given price bracket, it's unreliable to go sequentially through the whole list sizing up characteristics like this. Because of human subjectivity, the impressions left by the previous bino are going to influence how the next one is judged.
My take would be to cover each bino with the same material, all branding covered over with black tape, and even the focus wheel covered with some thin tape. Each bino gets a number. You go sequentially through the list and make your observations. THEN... the moderator of the shoot-out changes the numbers and keeps track of the assignments. Then the binos are repositioned into a new 1 to 10 order. Repeat the testing.
This way there's no inherent bias by the brand / model known. AND, you mitigate a bit of influence from sequential order of review. I think this way you'd really get a better sense of characteristics.
Lastly, once the second test completes, add up the responses and identify the "top 3." Now, each person looks at all three, back and forth. Make note of the characteristics and preferred ranking. The moderator collects the data and then ranks them according to consensus. Now, reveal all! After something like this, I think you'd get a really good "nearly objective" assessment of bino performance.
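For anyone curious how the bookkeeping above would work in practice, here's a minimal sketch of the moderator's side of the protocol in Python: blind code numbers, re-randomized between two rounds, with scores averaged per model before the top 3 are revealed. The model names, quality values, and noise term are all made up purely for illustration.

```python
import random
import statistics

# Hypothetical models and "true" quality values, purely for illustration.
TRUE_QUALITY = {f"bino_{i}": q for i, q in
                enumerate([7, 9, 6, 8, 5, 7.5, 8.5, 6.5, 9.5, 7.2], 1)}

def run_blind_round(binos, scores):
    """One round: shuffle the binos, assign fresh code numbers,
    and record each score under the code, never the model name."""
    order = binos[:]
    random.shuffle(order)
    # Moderator's private key: code number -> real model.
    key = {code: model for code, model in enumerate(order, start=1)}
    for code, model in key.items():
        # In a real shoot-out these scores come from the judges;
        # here we fake them as true quality plus subjective noise.
        scores.setdefault(model, []).append(
            TRUE_QUALITY[model] + random.gauss(0, 0.5))
    return key

random.seed(42)  # reproducible demo only
scores = {}
for _ in range(2):  # two rounds, re-randomized between them
    run_blind_round(list(TRUE_QUALITY), scores)

# Average the two rounds per model, then reveal the top 3.
ranked = sorted(scores, key=lambda m: statistics.mean(scores[m]),
                reverse=True)
print("Top 3:", ranked[:3])
```

The key point the sketch captures is that only the moderator's `key` dict ever links a code number to a model, so the judges' scores stay blind until the final reveal.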