I'm a machinist, I just hit the green buttons on the machines. Sometimes it's not even the right button.
OG 65
We shot it a lot more than this, but evidently Ken and I didn't write down the data...
Day 1
Ken
Maybe this is where the disconnect is. As a basic consumer looking for any help in comparing suppressors, a color-coded list of simple, comparable numbers is appealing. Until I got involved in reading this thread, I had no idea that SE dBA and dBA were different measurements.
Is there not a correlation between the free data and the more detailed SE dBA you mention? In other words, if the free report lists a set of numbers, can that data actually be skewed in a way that only shows up in the paid report?
This is an honest question, I really don’t understand how the two can be different.
I would also assume there are environmental influences regardless of what tools you use, right? (It's been mentioned before.) For example: outdoors vs. indoors, mic placement, acoustic test chamber, etc.
Well, thanks for clearing up my question. I don't want to know how a suppressor rates by Pew's color-coded formula. The simple, comparable numbers are the actual SE dBA numbers. Pew isn't simplifying things; he is complicating them to make money off the complexity he adds.
I want to look at the report and see “Manufacturer says this measures 124.6 SE dBA using 20” barrel and L118, but Pew measured it at 134.6 SE dBA using 20” barrel and L118.” That way I know not to trust that manufacturer.
Pew is charging the manufacturer for the service, showing the consumer ads on his site, and charging the consumer. He’s just an insinuating middleman at this point.
____________________
“Keep on keepin’ on…”
To me: this is why the TBAC Suppressor Summit is the most valuable source of sound ratings for suppressors. It is the most likely to produce consistent results.
So other than the cost passed on to the manufacturers, is there any substantial reason why more people decide not to use Pew for a testing standard? It sounds as if his experience in testing is what people would want, and the equipment set up is probably more robust than what anyone else has set up.
It seems reasonable for a person to charge for this service, for no other reason than to recover some of the equipment cost. Maybe your understanding of sound testing can shed a better light on how Pew tests/reports.
I appreciate the transition to more direct and concise answers and explanations. Initially your posts gave me the engineer "why didn't you do this, or that" vibe. Your recent responses are much more constructive dialogue.

The spirit and intent of the TBAC Summit are very good. However, there are possible issues, or at least setup questions, that make me hesitant to call them an authority. I'd imagine they will adopt the SAAMI procedure, though.
I don't know Pew Guy, but I haven't seen or heard anything that would cause me to question his test methods. He's trained in structural dynamics, which is not the same as vibroacoustics, but there is crossover especially with advanced signal processing and understanding of the equipment and limitations. And of course the vibration content.
Both fields require advanced degrees and highly specialized training in the frequency domain - something that your run-of-the-mill engineer or technician has little exposure to.
In fact, in my work experience, Mechanical Engineers with BS degrees seem to be the worst offenders in terms of fooling themselves into thinking they understand noise. They have minimal background unless they went through a grad program specific to acoustics and/or vibe.
Electrical engineers have tended to be much more cautious about saying stupid things about noise, and they have the advantage of advanced signal acquisition and processing, which gives them a better understanding of the limits of their knowledge.
That's painting with a broad brush, but I think you understand what I am saying.
Back to Pew Guy - I know some find him annoying to listen to, and he has a proprietary rating method that some may see as a gimmick. I don't know if the rating method is open to the public, so that it can be verified by other parties. I just don't have much interest in his rating, but am not saying it's worthless either.
Thank you. So there is a way to "manipulate" or "report" the data in a way that actually skews the results. I hope the industry comes up with a better solution for everyone to follow.

Yes, the muzzle (mil-spec left) measurement does not always correlate well with volume at the shooter's ear. The disparity between the two is typically most pronounced with end-cap brakes.
For illustration, go to the silencer summit results for bolt guns and filter by SE dBA, then filter by ML dBA. The order of performance for some cans changes significantly.
Example of a basic can without an end-cap brake - the Nomad Ti XC. It ranked 22nd quietest at ML dBA and 7th quietest at SE dBA. That is a large discrepancy for a standard flat-cap can.
Magnus RR - 6th quietest at ML dBA, 28th quietest at SE dBA.
Both above examples are out of 75 cans tested on that config.
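To see why sorting by the two metrics reorders the list like that, here's a minimal sketch. The can names and dB values below are made up for illustration (not Summit data); the point is only that ranking the same set of cans by ML dBA and by SE dBA can produce very different orders.

```python
# Hypothetical illustration: the same cans, ranked two ways.
# All numbers are invented; only the ranking behavior matters.
cans = {
    "Can A": {"ml_dba": 134.0, "se_dba": 131.0},
    "Can B": {"ml_dba": 136.5, "se_dba": 128.5},  # e.g. an end-cap brake: loud at muzzle, quiet at ear
    "Can C": {"ml_dba": 133.0, "se_dba": 133.5},
}

def ranks(metric):
    # Sort quietest-first by the chosen metric; return name -> 1-based rank.
    ordered = sorted(cans, key=lambda name: cans[name][metric])
    return {name: i + 1 for i, name in enumerate(ordered)}

ml = ranks("ml_dba")
se = ranks("se_dba")
for name in cans:
    print(f"{name}: #{ml[name]} at muzzle-left, #{se[name]} at shooter's ear")
# Can B comes in last at the muzzle but first at the ear.
```

Same data, two honest orderings - which is why a single "quietest" list without the metric attached can mislead.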
If you think the only thing that matters is dBA while also acknowledging that dBA varies significantly based on testing variables, then I don't know what to tell you.
Edit - the free numbers cannot be easily extrapolated into a useful tool for the consumer.
If you think the only thing that matters is dBA while also acknowledging that dBA varies significantly based on testing variables, then I don't know what to tell you.
I suggest you dive a little deeper into how and why Jay does his ratings the way he does.