The Overhype Of AI In The News Today
- Christy Mackenzie

- Nov 16
- 9 min read

Why we need less magic-show coverage and more fact-checking on what AI can really do.
You can feel it before the anchor even says the word. The dramatic soundtrack, futuristic b-roll clips, neon lines wrapping around a spinning globe.
Then the script:
AI is about to replace your job, rewrite democracy, cure cancer, invent art, and possibly end civilization by Tuesday.
For the last few years, artificial intelligence has been treated as the main character in almost every news cycle. It is framed as an unstoppable force that will either save us or destroy us, with very little room for the messy middle where most real technology lives. The incentives are obvious: AI stories rate well, attract investment, and keep everyone hooked on the narrative that something world-historic is happening every week.
But beneath the cinematic hype, the numbers tell a different story. Many AI projects are failing quietly. Real-world deployments are hitting walls. Models still hallucinate confident nonsense. Public trust is fragile and trending downward. And the media often repeats bold claims from companies and consultants without applying the same basic skepticism used for political polling or pharma trials.
This piece is not an argument that AI is fake or useless. It is an argument that AI is overhyped in the news today, and that we urgently need more scrutiny on the claims that keep that hype alive.
1. How AI became the hero of every storyline
Every major technology wave goes through a hype cycle, but generative AI has been sprinting through it at record speed. Gartner now places generative AI past the Peak of Inflated Expectations and heading into the Trough of Disillusionment, after a period where it was promoted as the answer to almost every business and societal problem.
At the same time, media coverage leaned into three familiar frames:
The miracle story - AI as magic productivity engine: instant content, instant insights, instant profits.
The doom story - AI as existential threat: mass unemployment, runaway superintelligence, democracy collapse.
The arms-race story - AI as geopolitical weapon: whoever wins the AI race runs the future.
These frames are dramatic and clickable. They are also very convenient for the companies selling AI. If AI is framed as inevitable and revolutionary, it is easier to justify massive spending, sky-high valuations, and rushed deployments.
But when you zoom out and ask a simple, boring question - what is actually working, at scale, right now - the picture looks very different.
2. The productivity miracle that mostly exists on slides
Consider the business side. AI is often presented as a once-in-a-generation productivity shock. Yet a recent MIT study on the state of AI in business found that around 95% of generative AI projects are failing to produce meaningful outcomes, despite tens of billions of dollars invested in tools and startups.
In other words, most of the promised transformation is not showing up in measurable revenue or cost savings.
It is showing up in:
Pilot projects that never leave the lab.
Demos built for conferences, not customers.
Internal slide decks and executive talking points.
A separate survey highlighted by business media found that 62% of employees believe AI is overhyped. For many workers, AI rollouts have meant more tools to manage, more dashboards to check, and more pressure to perform, without the promised reduction in workload or stress.
"Hype is not just optimism. It is a business strategy that trades on our inability to check the math."
These numbers do not mean AI is useless. They mean that news segments showing heroic gains from a single chatbot deployment are often cherry-picked edge cases, not the norm. When the media repeats those case studies without asking how many failed projects sat behind them, it amplifies a distorted picture of progress.
3. The hallucination problem that headlines keep flattening
The other side of overhype is underplaying the risks. Large language models are very good at producing fluent text. They are not built to know whether that text is true.
Researchers and reporters have documented a long list of real-world harms from AI hallucinations:
An AI summary falsely accused a real person of embezzlement in a legal context, leading to a lawsuit against the model provider.
AI-generated travel content recommended a food bank as a tourist attraction, turning poverty infrastructure into vacation fodder.
AI-generated images of an explosion near the Pentagon circulated widely in 2023, briefly moving the stock market before being debunked.
Stanford researchers studying AI in legal settings found that general-purpose chatbots hallucinated in more than half of legal queries in one benchmark, and that even specialized legal models still invented facts in roughly one out of six answers.
Other work has shown that hallucinations and bias are not rare glitches but structural features of how these models are trained. They remix patterns from training data and optimize for plausible text, not verified truth.
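Rates like "more than half" and "one in six" can sound abstract, so here is a back-of-envelope sketch of what they imply at scale. The hallucination rates are the ones from the Stanford benchmark cited above; the daily query volume is a purely hypothetical assumption for illustration.

```python
# Back-of-envelope: what the cited hallucination rates imply at scale.
# The two rates reflect the Stanford legal-AI benchmark discussed above;
# the query volume is a made-up number, used only for illustration.

def expected_fabrications(queries_per_day, hallucination_rate):
    """Expected number of answers per day containing invented facts."""
    return queries_per_day * hallucination_rate

daily_queries = 10_000  # hypothetical volume for one legal research tool

general_purpose = expected_fabrications(daily_queries, 0.5)    # "more than half"
specialized = expected_fabrications(daily_queries, 1 / 6)      # "one in six"

print(f"General-purpose chatbot: ~{general_purpose:,.0f} flawed answers/day")
print(f"Specialized legal model: ~{specialized:,.0f} flawed answers/day")
```

Even at the better rate, a tool answering ten thousand queries a day would be expected to fabricate facts well over a thousand times daily, which is the kind of arithmetic a headline about "AI passing the bar" rarely includes.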
Yet news coverage often slides from "AI wrote an article" to "AI knows this," as if a polished paragraph implies accuracy. When failures are covered, they are treated as funny bloopers rather than serious warnings about deploying unverified systems into law, healthcare, finance, or public information.
If any other actor - a bank, a hospital, a political candidate - made up facts at anything close to these rates, reporters would not shrug it off as growing pains. They would ask hard questions about accountability.
4. The trust gap that AI hype is quietly widening
Public opinion research paints a complicated picture. Many people are curious about AI, and some are excited about its potential benefits. At the same time, trust is not keeping up with the hype.
Several large surveys show:
A global study on trust in AI found that more than half of people are unwilling to trust AI systems, even as use spreads.
Stanford's 2025 AI Index reported that confidence in AI companies to protect personal data and behave ethically has declined since 2023.
News research from the Reuters Institute shows that trust in news remains low in many countries, and that people are especially wary of AI-generated content in journalism.
Industry and policy surveys in the United States show that many people find AI-produced information "not trustworthy" and want stronger guardrails on its use.
So while AI is hyped as game changing, large parts of the public do not fully trust it, and they increasingly do not trust the information environment around it.
That is a dangerous mix. Overhyped claims, uncritical reporting, and opaque deployments can erode trust not only in AI, but in newsrooms, institutions, and democratic processes. Once trust is lost, it is slow to rebuild.
5. Why newsrooms keep amplifying AI overhype
It is easy to blame "the media" in the abstract. The reality is more mundane and more systemic. A few forces push coverage toward hype:
Speed and competition. Tech announcements drop on tight PR schedules, often with pre-packaged talking points and limited time to verify them before publishing.
Access incentives. Outlets that play nice may get early demos, exclusive interviews, or ad campaigns from AI companies. Tougher scrutiny can mean fewer invitations.
Expertise gaps. Many newsrooms do not have enough specialized reporters who understand how models work, how benchmarks are constructed, or how to read technical papers.
Narrative gravity. Stories about "the end of work" or "AI that passes the bar exam" are easier to frame than nuanced pieces on training data curation or evaluation methodology.
Click economics. Fear and awe both sell. "AI might mildly improve your workflow after six months of change management" does not.
These pressures do not excuse lazy reporting, but they help explain why headlines often tilt toward the most dramatic interpretation of an AI announcement.
"As one researcher put it to me, 'The models are impressive, but they are still improv actors, not witnesses under oath.'"
Treating improv actors like expert witnesses is exactly what overhyped coverage does.
6. The cost of letting false or unproven claims slide
Overhype is not just a vibes problem. It has real-world costs. When unproven claims go unchallenged, they create pressure in multiple directions:
Policy and regulation. Legislators may rush to regulate imagined future scenarios while underestimating present harms like labor exploitation, data extraction, and disinformation.
Workplace decisions. Companies may lean on AI to justify layoffs or reorganization before tools are ready, using hype as political cover.
Public services. Schools, hospitals, and government agencies may buy into AI products that sound transformative but lack solid evaluation, diverting limited budgets.
Information integrity. If AI-produced content floods news feeds and search results, and newsrooms treat it as authoritative, the line between reporting and synthetic narrative blurs.
Communication scholars have warned that generative AI, used without clear guidelines, can accelerate misleading information, erode public trust, and normalize synthetic content in official messages.
Allowing AI companies, vendors, and evangelists to define the story with minimal pushback effectively outsources the editorial role. That is the opposite of what independent journalism is supposed to do.
7. How to scrutinize AI claims like an investigator, not a fan
So what would more rigorous coverage - and more skeptical reading - look like? Whether you are a journalist, a policymaker, or just a curious reader, you can apply a simple set of questions when you see a big AI headline.
1. Follow the money
Who is making the claim? A vendor, a consulting firm, a think tank funded by the industry?
What do they stand to gain if the claim is believed? Investment, contracts, regulation that favors incumbents?
If all the quotes in a story come from people with a financial stake in AI hype, something is missing.
2. Ask for the denominator
When you see "AI boosts productivity by 30%," ask:
30% compared to what baseline?
In how many trials or teams?
What happened in the cases that were not included in the press release?
The MIT findings on the failure rate of generative AI projects are a reminder that cherry-picked success stories sit atop a pile of quiet misses.
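The denominator question above can be made concrete with a toy calculation. Every number in this sketch is hypothetical, chosen only to show how a true per-winner gain and a portfolio-wide gain can diverge when the failures are left out of the press release.

```python
# Toy illustration of survivorship bias in "AI boosts productivity by 30%" claims.
# All numbers are hypothetical; the point is the denominator, not the data.

def portfolio_gain(n_pilots, n_successes, gain_per_success):
    """Average productivity gain across ALL pilots, not just the winners."""
    assert n_successes <= n_pilots
    total_gain = n_successes * gain_per_success
    return total_gain / n_pilots

# A press release reports 5 successful pilots, each with a 30% gain...
headline_gain = 0.30

# ...but suppose 100 pilots were run and 95 produced no measurable gain,
# roughly in line with the failure rate the MIT study describes.
actual_gain = portfolio_gain(n_pilots=100, n_successes=5, gain_per_success=0.30)

print(f"Headline claim:  {headline_gain:.0%}")
print(f"Portfolio-wide:  {actual_gain:.1%}")
```

Under those assumptions the honest portfolio-wide figure is 1.5%, not 30%. The headline is not exactly false, but without the denominator it is not informative either.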
3. Separate demos from deployments
Is the claim based on a live, in-production system, or a controlled demo under ideal conditions?
How long has it been running?
How many real users have touched it?
A model that aces a staged test may behave very differently when exposed to messy, adversarial, or high-stakes environments.
4. Look for benchmarks and independent evaluation
Has the system been evaluated by independent researchers, not only by the company?
Are the evaluation datasets public and well described?
Are there comparisons to non-AI alternatives?
If the story cannot point to specific metrics beyond "better" or "smarter," it is not ready to be treated as evidence.
5. Watch for synthetic sources
As AI-generated content spreads, reporters and readers need to ask:
Are the quotes human or AI-generated?
Are images authentic or synthetic?
Is AI being used to generate reviews, testimonials, or even "expert commentary"?
Without clear labeling, the public can end up making decisions based on a synthetic chorus that looks like consensus.
8. What better AI reporting could look like
Stronger AI coverage does not mean every story must be anti-AI. It means newsrooms take on a clearer mission:
Explain what AI actually is. Not a mind, not a person, but a set of statistical models trained on data, with specific capabilities and limits.
Distinguish between current reality and speculative futures. Use precise language about what is deployed today versus what might be possible in five or ten years.
Center affected communities, not just CEOs and investors. Include workers who must adapt to new tools, students whose work is being scanned, patients whose data is being processed.
Bring in independent experts. Not only company scientists, but critical researchers, social scientists, and practitioners who see the impact on the ground.
Disclose when AI helps produce the story itself. If a newsroom uses AI for transcription or drafting, that should be transparent.
Communication scholars argue that clear guidelines for how AI is used in public messaging are essential if we want to avoid a collapse of trust.
In practice, this might look like prominent AI disclaimers on generated visuals, sidebars explaining hallucination rates when chatbots are quoted, and routine questions about training data, evaluation, and failure modes in every tech interview.
9. The story after the hype
AI is not going away. The hype bubble might deflate, but the underlying tools will keep improving and embedding themselves into workflows, infrastructures, and institutions. Analysts already note that we are moving from a stage of wild expectations to a slower, more difficult phase where return on investment and real risk management matter more than dramatic demos.
The question is whether the narratives that surround AI will mature at the same pace.
If AI continues to be framed primarily as a mythic hero or villain, we will keep making bad decisions: rushing into deployments that are not ready, ignoring real harms in favor of speculative ones, and letting powerful actors write their own report cards.
If instead we insist on treating AI like any other powerful technology - with independent scrutiny, rigorous evidence, and honest reporting about failures and limits - we have a better chance of getting the benefits without losing our ability to tell truth from story.
The overhype of AI in the news today is not just about annoying headlines. It is about who gets to define reality. That is far too important to leave to marketing copy and uncritical coverage.
So the next time you see "AI is about to change everything," consider replying with a quieter, more demanding question:
Show the data. Show the failures. Show the tradeoffs. Then we can talk about the future.


