Smell Tests: How the Internet Broke Everyone's Bullshit Detectors
In the pre-internet age, misinformation had a natural enemy: friction. If a neighbor complained that the local council was building a waste dump, you could verify the claim. You would attend a meeting, check a physical noticeboard, or read a printed newspaper byline. The process wasn't perfect, but it imposed a smell test: an informal, intuitive sniff for consistency, source credibility, and basic logic.
The internet was supposed to make us better informed. Instead, it delivers so much bad information that we struggle to know what is true. We are not more wrong than people were in the past; we are more confident in the wrong things we believe. Our bullshit detector, the intuitive faculty that flags when someone is trying to trick us or make things sound better than they are, does not work well online.

This is not a personal failing. The internet was simply not built to help us tell truth from falsehood. If we want to fix things, we need to understand how the internet eroded that ability and what harm it caused around the world in 2026. That understanding is the first step to making it right.
The Architecture of Distrust
To understand why our detectors fail, we must look at how they evolved. Human beings are not natural logicians; we are natural storytellers. Our default mode is "truth bias": we tend to believe information is honest unless given a clear reason to doubt it. In a small tribe or a slow-moving media environment, this works well. Liars get caught. Reputations matter.
The internet has reversed this dynamic. On social platforms, reputation is decoupled from identity.
A public account with 10,000 reposts can assert that a celebrity is dead, that a vaccine contains microchips, or that a financial crash is imminent, and the algorithm treats that assertion the same as a Reuters report. More perversely, if the assertion is outrageous enough, the algorithm favors it. Why? Because platforms optimize for engagement, not accuracy.
The Mechanics of Broken Detection
Let's break down exactly how the internet has kicked the legs out from under our bullshit detectors.
1. The Flattened Credibility Ladder

Before the web, information came with a built-in credibility ladder. A peer-reviewed study sat at the top. A wire service report sat in the middle. The internet flattened that ladder. In a feed, everything is just a scroll away:

* A tweet from an anonymous person
* A TikTok video from a comedian
* A PDF document from a government agency

All of these look the same as you scroll, presented with no clear order or ranking, regardless of who made them or where they came from. Without contextual cues, the brain treats them as equals. This is not stupidity; it is mental efficiency gone wrong.
2. The Speed Premium

Bullshit detection requires time.
You need to pause, check the original source, ask "Who benefits from me believing this?" and cross-reference against known facts. The internet punishes that pause. In a feed where the story changes every three seconds, the person who stops to verify misses the conversation, along with the status and social reward that come from sharing first. Worse, speed gets mistaken for credibility: people assume that if something is spreading fast, it must be real. In reality, speed only means a claim is easy to share, not that it is accurate.
When the Detector Fails: 2026 Disasters
The collapse of our bullshit detectors is not an abstract problem; it produced real, often tragic consequences around the world in 2026. Here are the most viral and serious incidents from the past four months.
* United States: Louisville UPS Plane Crash (December 2025 – January 2026)
In a grim example, a UPS cargo plane crashed in Louisville, killing 15 people and injuring many more. As families waited for news, social media filled with fake, AI-generated content.
Fake pictures showed a burning plane with the UPS logo. People falsely said that famous people like Morgan Freeman, Kid Rock, Keith Urban and Bob Dylan died in the crash. A fake video, shared over a thousand times, showed made-up firefighters trying to put out a fire next to a fake plane. The person who posted the video said it was for "awareness and educational purposes" only. Pilot and YouTube creator Trevor Smith, who runs Pilot Debrief, was shocked by the comments: "It's ridiculous. Then you see people in the comments actually believing it."
The Louisville crash illustrates the pattern: when our bullshit detectors fail, disinformation rushes into the vacuum around a breaking tragedy faster than facts can, and real people, including grieving families, are harmed. Fixing that failure is the only way to blunt the next one.
* United States: Shooting by ICE Agent in Minneapolis (January 2026)
Just hours after a shooting involving an Immigration and Customs Enforcement (ICE) agent in Minneapolis, deepfake images of both the victim and the suspected shooter flooded online platforms. The victim, 37-year-old Renee Nicole Good, was shot as she sat in her car in the path of the rapidly approaching agent.
Within hours, AI-generated photos claiming to reveal the identity of the ICE agent were posted and widely shared on social media platforms such as X.
Claude Taylor, who is the head of the anti-Trump Mad Dog political action group, also shared the photos on X with the caption, "We need his name." As of the time of this writing, this post has been viewed over 1.3 million times.
Some X users used Grok, an AI chatbot, to digitally "undress" images of Good, altering a smiling photo taken before the attack and an image of her body after it to show her in a bikini. Another woman who was incorrectly identified as the victim was digitally altered in the same fashion. Walter Scheirer, a professor at the University of Notre Dame, told AFP: "The innovative method has been the use of AI to fill in the gaps of a story. The images pushed out to identify the ICE officer, for example, are hallucinated information."
* International: War Between Israel and Iran, an AI Apocalypse (February – March 2026)
Iran's war with Israel triggered a flood of AI-generated fake content online. Fabricated videos and images of missile strikes destroying Tel Aviv and of American troops captured by Iranian soldiers were shared tens of thousands of times across TikTok accounts, mostly run by pro-Iran users, amassing roughly 145 million views within the first two weeks of the war. All of these claims have been proven false.
These false tales proliferated on social media to the point where Israeli Prime Minister Benjamin Netanyahu posted a video of himself ordering coffee in a Jerusalem coffee shop to counter stories about his death and to prove that he still has five fingers. Jacki Alexander, CEO of the Jerusalem-based media watchdog HonestReporting, said that misinformation and fake images seep into the minds of those who don't know better. In an era of AI-created news cycles, people are so confused about what to believe that they end up believing either nothing or everything.
"Fake" Attacks on NASA Artemis II Moon Mission April 2026
Fueled by the widespread popularity of conspiracy theories, the Artemis II mission, which sent astronauts farther from Earth than any humans before them, was engulfed in an avalanche of false information. Many false claims about footage of the crew floating in zero gravity spread under hashtags like "fake space" and "fake NASA."
An AI-generated image, shared over a million times on X, purported to show the Artemis II crew floating in front of a green screen, suggesting the mission was staged in a film studio; third-party digital forensics experts confirmed the image was fabricated. Unfounded claims about evidence the mission supposedly found on the Moon's surface have also circulated heavily around the internet.
Disinformation researcher Mike Rothschild told AFP, "There are many people who will automatically label any major event as fake or staged regardless of the specifics." The development reflects a larger, unregulated Wild West internet where false narratives erode trust in the digital world.
Conclusion
Your internal bullshit detector has problems. It isn't entirely broken, but it has been deliberately dulled by a digital environment that profits from confusion. In 2026 we have seen disasters exploited for views, apocalyptic AI imagery that earned its creators millions in revenue, and a resurgence of conspiracy theories about moon landings.
The first step to fixing your bullshit detector is acknowledging that you are not immune to bullshit. The second is understanding how the internet exploits your cognitive biases. The third is forming new habits: treating all incoming information with caution, taking time to analyze rather than accepting claims at face value, and weighting your belief by the probability of the claim and the credibility of its source.
The internet broke our bullshit detectors by design. The only way to begin repairing them is through intentional, thoughtful action on our part.
Common Questions
1. What does the term “bullshit detector” mean when referring to information provided online?
Your "bullshit detector" is your ability to evaluate an individual's credibility (source), whether the information they provide is designed to elicit an emotional response, and whether the information provided meets basic logic in order to determine what information is true and what information is false.
2. How much did influencers earn by spreading false information during the 2026 war between Iran and Israel?
Analysts estimate that the influencers profited between $42 million and $47 million from 28 days of AI-generated doomsday videos and other fabricated sources of panic during that time.
3. What was the most viral disinformation campaign of January and February 2026?
The Spanish train crash conspiracy theory was the most viral disinformation campaign of January and February 2026. It was discussed in multiple languages on X, and one Polish post received 1.4 million views within the first 48 hours after it was posted.
4. How did artificial intelligence (AI) magnify misinformation about the Immigration and Customs Enforcement (ICE) shooting in Minneapolis, Minnesota?
Users used Grok, an AI-powered chatbot on the social networking site X, to create realistic-looking images of the unidentified ICE agent and to digitally undress images of the victim.