Unknown suppressors OG testing

Maybe this is where the disconnect is. As a basic consumer looking for any help in comparing suppressors, a color-coded list of simple, comparable numbers is appealing. Until I got involved reading this thread, I had no idea that SE dBA and dBA were different measurements.

Is there not a correlation between the free data and the more detailed SE dBA you mention? If the free data lists one set of numbers, can those numbers actually be skewed in a way that only shows up in the paid report?

This is an honest question; I really don’t understand how the two can be different.

I don’t want to know how a suppressor rates by Pew’s color-coded formula. The simple, comparable numbers are the actual SE dBA numbers. Pew isn’t simplifying things; he is complicating them to make money off the complexity he adds.

I want to look at the report and see “Manufacturer says this measures 124.6 SE dBA using a 20” barrel and L118, but Pew measured it at 134.6 SE dBA using a 20” barrel and L118.” That way I know not to trust that manufacturer.

Pew is charging the manufacturer for the service, showing the consumer ads on his site, and charging the consumer. At this point he’s just a middleman who has insinuated himself into the process.

Edit - the free numbers cannot easily be turned into a useful comparison tool for the consumer.


____________________
“Keep on keepin’ on…”
 
I would also assume there are environmental influences regardless of what tools you use, right? (It's mentioned earlier in the thread.) E.g., outdoors vs. indoors, mic placement, acoustic test chamber, etc.

Yes. Which is why the TBAC summit using the same setup in the same location is the best way to test this stuff.


 
I don’t want to know how a suppressor rates by Pew’s color-coded formula. The simple, comparable numbers are the actual SE dBA numbers. Pew isn’t simplifying things; he is complicating them to make money off the complexity he adds.

I want to look at the report and see “Manufacturer says this measures 124.6 SE dBA using a 20” barrel and L118, but Pew measured it at 134.6 SE dBA using a 20” barrel and L118.” That way I know not to trust that manufacturer.

Pew is charging the manufacturer for the service, showing the consumer ads on his site, and charging the consumer. At this point he’s just a middleman who has insinuated himself into the process.


Well, thanks for clearing up my question.
 
To me: this is why the TBAC Suppressor Summit is the most valuable source of sound ratings for suppressors. It is the most likely to produce consistent results.

The spirit and intent of the TBAC Summit are very good. However, there are possible issues, or at least setup questions, that make me hesitant to say that they are an authority. I'd imagine that they will adopt the SAAMI procedure though.


So other than the cost passed on to the manufacturers, is there any substantial reason why more people decide not to use Pew as a testing standard? It sounds as if his testing experience is what people would want, and his equipment setup is probably more robust than what anyone else has put together.
It seems reasonable for a person to charge for this service, if for no other reason than to recover some of the equipment cost. Maybe your understanding of sound testing can shed better light on how Pew tests and reports.

I don't know Pew Guy, but I haven't seen or heard anything that would cause me to question his test methods. He's trained in structural dynamics, which is not the same as vibroacoustics, but there is crossover, especially with advanced signal processing and understanding of the equipment and its limitations. And of course the vibration content.

Both fields require advanced degrees and highly specialized training in the frequency domain - something that your run-of-the-mill engineer or technician has little exposure to.

In fact, in my work experience, mechanical engineers with BS degrees seem to be the worst offenders in terms of fooling themselves into thinking they understand noise. They have minimal background unless they were involved with a grad program specific to acoustics and/or vibration.

Electrical engineers have tended to be much more cautious about saying stupid things about noise, and they have the advantage of advanced signal acquisition and processing, which gives them a better understanding of the limits of their knowledge.

That's painting with a broad brush, but I think you understand what I am saying.

Back to Pew Guy - I know some find him annoying to listen to, and he has a proprietary rating method that some may see as a gimmick. I don't know if the rating method is open to the public so that it can be verified by other parties. I just don't have much interest in his rating, but I'm not saying it's worthless either.
 
The spirit and intent of the TBAC Summit are very good. However, there are possible issues, or at least setup questions, that make me hesitant to say that they are an authority. I'd imagine that they will adopt the SAAMI procedure though.




I don't know Pew Guy, but I haven't seen or heard anything that would cause me to question his test methods. He's trained in structural dynamics, which is not the same as vibroacoustics, but there is crossover, especially with advanced signal processing and understanding of the equipment and its limitations. And of course the vibration content.

Both fields require advanced degrees and highly specialized training in the frequency domain - something that your run-of-the-mill engineer or technician has little exposure to.

In fact, in my work experience, mechanical engineers with BS degrees seem to be the worst offenders in terms of fooling themselves into thinking they understand noise. They have minimal background unless they were involved with a grad program specific to acoustics and/or vibration.

Electrical engineers have tended to be much more cautious about saying stupid things about noise, and they have the advantage of advanced signal acquisition and processing, which gives them a better understanding of the limits of their knowledge.

That's painting with a broad brush, but I think you understand what I am saying.

Back to Pew Guy - I know some find him annoying to listen to, and he has a proprietary rating method that some may see as a gimmick. I don't know if the rating method is open to the public so that it can be verified by other parties. I just don't have much interest in his rating, but I'm not saying it's worthless either.
I appreciate the transition to more direct and concise answers and explanations. Initially your posts gave off the engineer "why didn't you do this, or that" vibe; your recent responses read as much more constructive dialogue.
 
Maybe this is where the disconnect is. As a basic consumer looking for any help in comparing suppressors, a color-coded list of simple, comparable numbers is appealing. Until I got involved reading this thread, I had no idea that SE dBA and dBA were different measurements.

Is there not a correlation between the free data and the more detailed SE dBA you mention? If the free data lists one set of numbers, can those numbers actually be skewed in a way that only shows up in the paid report?

This is an honest question; I really don’t understand how the two can be different.

Yes, the muzzle (mil-spec left) reading does not always correlate well with volume at the shooter's ear. The disparity between the two is typically larger with end-cap brakes (they usually hurt performance more at the shooter's ear than at the mil-spec left location).

For illustration, go to the Silencer Summit results for bolt guns and sort by SE dBA, then by ML dBA. The order of performance for some cans changes significantly.
Example of a basic can without an end-cap brake - the Nomad Ti XC: it ranked 22nd quietest at ML dBA but 7th quietest at SE dBA. That is a large discrepancy for a standard flat-cap can.
The Magnus RR - 6th quietest at ML dBA, 28th quietest at SE dBA.
Both examples above are out of 75 cans tested on that config.

Edit - one more example: Magnus K vs. Magnus K RR. The Magnus K RR (the version with a brake at the end) metered 0.2 dBA QUIETER at mil-spec left than the standard Magnus K, but 10.4 dBA LOUDER at the shooter's ear.
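To make the ranking point concrete, here is a quick illustrative sketch. The absolute levels are made up; only the relative behavior (0.2 dBA quieter at mil-spec left, 10.4 dBA louder at the shooter's ear) comes from the example above. The can names are placeholders, not real Summit entries:

```python
# Hypothetical cans with (ML dBA, SE dBA) readings. Sorting the same data
# by each metric produces opposite "quietest" winners.
cans = {
    "Flat-cap can": (137.0, 138.0),
    "Brake-cap can": (136.8, 148.4),  # 0.2 quieter at ML, 10.4 louder at SE
}

by_ml = sorted(cans, key=lambda c: cans[c][0])  # quietest first, mil-spec left
by_se = sorted(cans, key=lambda c: cans[c][1])  # quietest first, shooter's ear

print(by_ml)  # ['Brake-cap can', 'Flat-cap can']
print(by_se)  # ['Flat-cap can', 'Brake-cap can']
```

The point being: a single-number ranking hides which mic position it was sorted on, and for brake-cap cans those two orderings can flip.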
 
Yes, the muzzle (mil-spec left) reading does not always correlate well with volume at the shooter's ear. The disparity between the two is typically larger with end-cap brakes.

For illustration, go to the Silencer Summit results for bolt guns and sort by SE dBA, then by ML dBA. The order of performance for some cans changes significantly.
Example of a basic can without an end-cap brake - the Nomad Ti XC: it ranked 22nd quietest at ML dBA but 7th quietest at SE dBA. That is a large discrepancy for a standard flat-cap can.
The Magnus RR - 6th quietest at ML dBA, 28th quietest at SE dBA.
Both examples above are out of 75 cans tested on that config.
Thank you. So there is a way to “manipulate” or “report” the data in a way that actually skews the results. I hope the industry comes up with a better solution for everyone to follow.
 
I don’t want to know how a suppressor rates by Pew’s color-coded formula. The simple, comparable numbers are the actual SE dBA numbers. Pew isn’t simplifying things; he is complicating them to make money off the complexity he adds.

I want to look at the report and see “Manufacturer says this measures 124.6 SE dBA using a 20” barrel and L118, but Pew measured it at 134.6 SE dBA using a 20” barrel and L118.” That way I know not to trust that manufacturer.

Pew is charging the manufacturer for the service, showing the consumer ads on his site, and charging the consumer. At this point he’s just a middleman who has insinuated himself into the process.

Edit - the free numbers cannot easily be turned into a useful comparison tool for the consumer.


If you think the only thing that matters is dBA while also acknowledging that dBA varies significantly based on testing variables, then I don't know what to tell you.

I suggest you dive a little deeper into how and why Jay does his ratings the way he does.
 
I appreciate the transition to more direct and concise answers and explanations. Initially your posts gave off the engineer "why didn't you do this, or that" vibe; your recent responses read as much more constructive dialogue.

Thanks for the feedback.

Part of my reluctance to provide direct feedback or guidance is due to the simple fact that the situation here is a bit weird. We have Rokslide, which takes advertising dollars but also publishes product reviews. Robby actually made a podcast to address this potential conflict of interest, but I don't know how many people are aware of it.

For the OG, obvious questions arise regarding conflicts of interest: Ryan seems to be involved in making suppressors, but is also a co-owner of Rokslide? Or is he not involved with Unknown Suppressors and Unknown Munitions at all? Maybe I missed it.

Plus you have Shoot2Hunt advertising here (maybe as a sponsor?), a shooting school with forum badges given out (S2H badges, not Rokslide badges), and product development (stocks, rifles, scopes) potentially competing against advertisers, etc.

I'm not saying that Ryan, Unknown Guys, or Formi are being deceitful but it's all a bit weird. And as I stated much earlier in one of these threads, I want to remain impartial to all of this and not be viewed as consulting for a product that may get preferential treatment by the forum.

I think Rokslide would benefit by disclosing the relationships more clearly to the consumer. Ryan, Unknown Guys, and Formi are probably busy with new ideas, hunting, shooting, family, etc and don't realize how it looks from the outside.

So my earlier questions and comments were geared toward simply determining the technical competence of the people involved, not providing suggestions for improvement within this grey area of interests. For what it's worth, I assess competence processes monthly, if not weekly, for various industries. Not intelligence - which seems to piss people off when I mention competence - but the qualifications to perform technical work.

What changed? Another manufacturer posted here, so I eased up a bit, as they are not co-owners of Rokslide. The company has a very interesting design and has been gracious in the way questions have been handled. I actually think cans/silencers/suppressors are pretty boring, even though I have an acoustics background and DAQ gear for testing, but that design caught my eye immediately. And I haven't seen anything exciting in a while.

Anyway, I reached out to that company and have realized that US/UM would benefit from understanding the basics of log units, meters, settings, etc.

And maybe more importantly, showing forum members what questions to ask. There will always be a newer, better, less expensive design around the corner!
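As a small taste of what I mean by log-unit basics: dB values are logarithmic, so you cannot arithmetically average per-shot readings; you have to convert back to the energy domain, average there, and convert back. A quick sketch with hypothetical per-shot dBA readings (the numbers are made up for illustration):

```python
import math

def energy_average_db(readings):
    """Average sound levels in the energy domain, then convert back to dB.
    Arithmetic-averaging dB values directly understates the true level."""
    mean_energy = sum(10 ** (r / 10) for r in readings) / len(readings)
    return 10 * math.log10(mean_energy)

shots = [134.0, 136.0, 141.0]        # hypothetical per-shot dBA readings
naive = sum(shots) / len(shots)      # 137.0 dBA (arithmetic mean)
correct = energy_average_db(shots)   # ~138.0 dBA (energy mean)
print(f"{naive:.1f} vs {correct:.1f}")
```

Note how the loud shot dominates the correct average - which is exactly why outliers, meter settings, and averaging method all matter when comparing published numbers.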
 
If you think the only thing that matters is dBA while also acknowledging that dBA varies significantly based on testing variables, then I don't know what to tell you.

I suggest you dive a little deeper into how and why Jay does his ratings the way he does.

What matters more than SE dBA if I am comparing the sound of two suppressors and want to understand the benefit to the shooter from using the device?

All suppressors are a compromise of suppression, weight, length, diameter, construction material, price, etc.

Manufacturers try to put each of these factors in the best light. Some try to pick the testing conditions or cherry-pick the data. Some don’t list the necessary adapters as part of the length or weight.

What does Pew add that isn’t covered by the TBAC Summit results?


 
We wanted to send our suppressors to the TBAC Silencer Summit this year, but our company was too new for them to do it. Understandably - if every new company sent a suppressor there, they would be doing testing for a month straight. That said, we are planning on sending any can they will take for the 2026 Summit.
 