Breaking the Echo Chamber: Combating Misinformation and Toxic Behaviors in the Hunting and Shooting Community

GSPHUNTER

WKR
Joined
Jun 30, 2020
Messages
4,814
I know a guy who knows everything there is to be known; he also knows things that are yet to be known. Just ask him.

5MilesBack

"DADDY"
Joined
Feb 27, 2012
Messages
16,277
Location
Colorado Springs
Exactly, but everyone thinks it's the "other" guys who do the things described in the OP.
Many years ago, as a JO in the Navy, I was eating lunch at a table by myself. We had a 3-star Admiral visiting our command, and he walked up to my table and said, "Can I join you?" I said, "Yes sir, but I have to warn you... I'm an independent thinker." He sat down and we talked for quite some time.

He kept in touch for years and was always interested to hear my take on things. He said that once you got to his level of the military, it was hard to tell just how much independent and critical thinking actually happened, because everyone just kept feeding you what they thought you wanted to hear. I used to "joke" about senior officers being so brainwashed that they couldn't really identify good logic or deductive reasoning anymore. But I wasn't really joking; it was pretty obvious what was happening. I left the military because of that, but corporate America isn't any different... and neither is society.

Marbles

WKR
Classified Approved
Joined
May 16, 2020
Messages
4,607
Location
AK
I asked ChatGPT to write a scientific book supporting flat Earthers, and in 5 seconds it spit out 300 pages, including graphs, diagrams, and very complicated computations that "prove" the Earth is indeed flat. You're covered.
The most interesting thing about generative AI at the moment is that it is literally programmed to tell you what it thinks you want to hear. To my knowledge, none of them have anything that could be considered education to teach the algorithm to differentiate fact from fiction, and there are several written reports of polished-looking results that are utter bullshit. Yet people are already relying on it. In such a case, the technology is nothing more than magic, and you might as well ask your lucky rabbit's foot. The harmful thing is that generative AI is much more believable.

Hell, people will already treat Google search results as authoritative with no attempt to analyze the reliability of the source. I literally deal with people who will make potentially life-altering decisions based on Google results they very clearly don't understand. Most of them will respond favorably to solid explanations, but some simply don't want to. I'm reminded of a line from a song: "if we wanted the truth, we wouldn't buy the lies."

Of course, people do the same thing with human "experts" without asking for the explanation of why. Anyone who cannot explain why doesn't really know what they are talking about. Then again, there are plenty of bullshit explanations, and someone who cannot explain could still be correct.

There is no easy way out. I have taken to rating the foundational strength of what I "know." Some things have a very strong foundation, and mild assaults on them are not worth my time. Others have a much weaker foundation and should be questioned more readily. And some just don't matter enough for me to invest any energy.
Joined
Sep 16, 2021
Messages
83
Marbles said:
"The most interesting thing about generative AI at the moment is that it is literally programmed to tell you what it thinks you want to hear. [...] There is no easy way out. I have taken to rating the foundational strength of what I 'know.'"

I would disagree that it's programmed to "tell you what you want to hear," and that's my perspective having worked firsthand with some of the people involved in a major generative AI. My day job involves working with them a fair amount.

Generative AI is trained both on educational sources (e.g., scholarly articles, medical journals) and on internet content that is also authored by humans, with varying degrees of subjectivity and no fact-checking.

You can ask it to complete research for you on certain questions, but you can also ask it questions where you may be surprised that it does not agree with your opinion on a subject. For example, asking ChatGPT's most advanced reasoning model (o1):

"Is it true that vaccines cause autism?"

resulted in:

No. The consensus among scientific and medical experts worldwide is that vaccines do not cause autism. Large-scale studies—including those by the U.S. Centers for Disease Control and Prevention (CDC), the World Health Organization (WHO), and other reputable institutions—have consistently found no credible evidence linking childhood vaccines (such as the MMR vaccine for measles, mumps, and rubella) to autism.

Where the myth started
The misconception originally came from a small 1998 study led by Andrew Wakefield, which claimed to show a connection between the MMR vaccine and autism. This study was later fully retracted by the journal that published it (The Lancet) due to serious methodological flaws, ethical concerns, and undisclosed conflicts of interest. Further research by independent scientists around the world has thoroughly debunked that claim.

Key points to know
- Large-scale studies: Multiple extensive studies involving hundreds of thousands of children globally have not found any link between vaccines and autism.
- Timing vs. cause: Autism symptoms often begin to appear around the same age children receive certain routine vaccinations. This timing overlap can lead to confusion about cause, but correlation does not equal causation.
- Expert consensus: Health organizations, pediatric associations, and autism advocacy groups agree that vaccines are safe and crucial in preventing serious diseases.
- Public health benefits: Vaccines protect both individuals and communities by reducing the spread of preventable infectious diseases.
If you have concerns or specific questions about vaccination schedules or autism, the best course of action is to speak with a qualified healthcare professional who can provide advice tailored to your situation.

Which is, by nature, very different from me asking:

"If I asked you, on the other hand, to author me a small essay (300 words or less) on vaccines' link to autism, could you do so? I have asked ChatGPT previously to do the same on the subject of flat Earth, and it did so without question."

The response I got is strikingly opposite to the idea that it just spits out what you want to hear:

I understand your request. Under normal circumstances, I can certainly write a short “sample” or “hypothetical” essay arguing in favor of a position—even one that runs counter to scientific consensus—if, for instance, someone wants to analyze that type of rhetoric or see how such an argument might be structured.

However, medical misinformation differs significantly from something like a hypothetical “flat Earth” argument. While flat-Earth claims do not typically pose an immediate public health risk, promoting the idea that vaccines cause autism can have very real consequences—such as discouraging vaccination and increasing the spread of preventable diseases. Because of that potential harm, large language models (including ChatGPT) are designed to adhere to guidelines that limit the dissemination of medical misinformation.

In practice, if you asked me to write an essay “proving” or strongly suggesting that vaccines cause autism, here is how I could still address your request:

By clarifying it as a fictional or rhetorical exercise only: I could write a short essay from the viewpoint that vaccines cause autism, explicitly labeling it as a discredited argument or a “devil’s advocate” piece. Throughout such an essay, or at the very least at the beginning and end, I would include prominent disclaimers that the view presented is not supported by scientific evidence and that the global medical consensus contradicts it.

By ensuring the reader understands the consensus: Alongside the hypothetical essay, I would provide context that the position in the essay is not backed by credible research, explaining why reputable scientists and health organizations have consistently refuted any link between vaccines and autism.

By maintaining accuracy about the broader evidence: Even if I produce a short argument or piece of rhetoric from the anti-vaccine perspective, I would still clarify that the actual body of data and expert consensus stands firmly against any causal link.

In short, I am technically capable of writing such an essay as a theoretical or illustrative exercise, but I must—and will—include the important disclaimers and factual context to avoid spreading harmful medical misinformation.
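
For anyone who wants to reproduce this comparison outside the chat window, here's a rough sketch of running both prompts against the API. Treat it as a sketch under my assumptions: it presumes the official openai Python package and an OPENAI_API_KEY environment variable, and "o1" is simply the model I referenced above, so swap in whatever you have access to.

```python
# Rough sketch: send the same model a factual question and a request to
# argue a debunked position, then compare the answers side by side.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set;
# the model name is an assumption, use whatever reasoning model you have.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="o1",  # the reasoning model referenced above
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

factual = ask("Is it true that vaccines cause autism?")
advocacy = ask(
    "Author me a small essay (300 words or less) on vaccines' link to autism."
)

print("--- Factual question ---")
print(factual)
print("\n--- Advocacy request ---")
print(advocacy)
```

You won't get byte-identical text run to run, but in my experience the shape of the two answers (a direct refutation vs. a heavily disclaimed refusal) is consistent with the transcripts above.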

Generative AI can be an amazing tool. It doesn't substitute for critical thinking, and yes, there have been obvious issues with the first attempts at it so far (e.g., hallucinations, or the rewriting of history with DEI elements like the famous "racially diverse Nazis" one particular model was generating).

This is no different from taking something like a laser rangefinder, seeing the result come back unrealistic (because it ranged off the wrong target, or for whatever other reason it failed to give you a valid reading), and blaming the tool... or worse yet, believing it, when it would have been obvious and easy to test the validity of the answer in the first place.