
We all need to be better informed about AI.

Learn about AI and how our research is done.
The AI Industry in Numbers
2022
Year LLMs Went Mainstream
$760b
2025 Total AI Market Size
$2.1t
2030 Projected Total AI Market Size
$392b
2025 Investments in AI Companies
29%
AI Growth Rate (CAGR)

No industry in history has ever grown this fast!
(and that is probably not a good thing)
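One way to sanity-check headline figures like the ones above is to compute the compound annual growth rate (CAGR) implied by two dated numbers. A minimal sketch, using the stats above; note that published CAGR figures often use different baselines or market definitions, so an implied rate may not match a quoted one:

```python
# How to check a CAGR claim from two dated market-size figures.
# The inputs come from the stats above; published CAGR figures often
# use different baselines, so implied and quoted rates can differ.

def implied_cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate implied by two values `years` apart."""
    return (end_value / start_value) ** (1 / years) - 1

# $760b in 2025 -> $2.1t projected in 2030 (5 years)
rate = implied_cagr(760, 2100, 5)
print(f"{rate:.1%}")  # roughly 22.5%
```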
TL;DR
- AI = software that imitates parts of human intelligence (perception, language, prediction, decisions).
- Today’s AI is Narrow: great at specific tasks; unreliable outside its lane.
- AGI (human-level across most tasks) and Superintelligence (beyond the best humans) do not exist today.
- What counts as “AI”? Systems that either learn from data (machine learning, deep learning) or reason over explicit knowledge/rules (expert systems, planners). Many real products mix AI with regular software (databases, APIs, UI).
The Three Types of AI
1) Narrow AI (ANI, “weak AI”) — What exists today
- What it is: Models specialized for a task or a small family of tasks.
- Everyday examples: chat & writing assistants, image/video generators, translation, recommendation feeds, fraud/spam filters, speech recognition, medical image triage (regulated), warehouse robots, driver-assist in limited areas.
- Strengths: Fast, scalable, often superhuman pattern speed.
- Limits: Can make confident mistakes; brittle outside training; weak on multi-step, real-world reasoning; needs oversight.
2) AGI (Artificial General Intelligence) — Not here yet
- What it is: A system that can perform any cognitive task a typical human can, adapt across domains, and learn new things quickly with reliable reasoning.
- Status: No agreed-upon benchmark has been passed. Today’s systems still fail on long-horizon reasoning, consistent reliability, and open-ended learning.
3) Superintelligence (ASI) — Hypothetical
- What it is: Intelligence far beyond top human experts across science, strategy, and engineering.
- Why it’s discussed: Enormous potential impact → major safety, governance, and misuse questions.
- Status: Speculative; not achieved.
What is AI today vs. what is not AI
Common things that are AI
- Learning-based models: large language models (LLMs), multimodal models, diffusion/transformer media models, classical ML (logistic regression, random forests, gradient boosting), reinforcement learning.
- Rule/knowledge systems: expert systems, planners, knowledge graphs with reasoning.
- Hybrids: AI that uses tools (search, code execution, databases), retrieval-augmented generation (RAG), and safety filters.
Things that aren’t AI (on their own)
- Plain automation: macros, mail merges, if-this-then-that workflows, cron jobs, RPA scripts that never learn.
- Deterministic plumbing: SQL databases, caches, message queues, web servers, regexes, sorting algorithms.
- Basic analytics/viz: dashboards, pivot tables, static reports (useful, but not “intelligent” without a model making inferences).
- These “non-AI” parts still matter: they’re the scaffolding around AI inside real products.
How to spot real AI vs. marketing spin
Use this quick test when you read claims on product pages or press releases:
Call it AI if the component:
- Learns from data or uses an explicit reasoning engine, and
- Makes non-trivial predictions/inferences/decisions (not just fixed scripts), and
- Generalizes to new inputs within its domain.
Be skeptical if the component:
- Runs only a fixed set of rules with no learning,
- Never updates its behavior with new data,
- Merely formats, stores, or routes information.
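The quick test above can be sketched as a small function. The field names below are our own illustrative assumptions, not a formal standard:

```python
# Toy sketch of the "real AI vs. marketing spin" quick test above.
# The dictionary keys are illustrative assumptions, not a formal schema.

def looks_like_real_ai(component: dict) -> bool:
    """Return True if a product component plausibly counts as AI."""
    # Must learn from data OR use an explicit reasoning engine...
    has_model = component.get("learns_from_data") or component.get("reasoning_engine")
    # ...AND make non-trivial predictions/inferences/decisions...
    makes_inferences = component.get("makes_inferences", False)
    # ...AND generalize to new inputs within its domain.
    generalizes = component.get("generalizes", False)
    return bool(has_model and makes_inferences and generalizes)

# A scripted bot fails the test; a learned spam filter passes it.
scripted_chatbot = {"learns_from_data": False, "reasoning_engine": False,
                    "makes_inferences": False, "generalizes": False}
spam_filter = {"learns_from_data": True, "reasoning_engine": False,
               "makes_inferences": True, "generalizes": True}

print(looks_like_real_ai(scripted_chatbot))  # False
print(looks_like_real_ai(spam_filter))       # True
```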
Common misconceptions (fast facts)
- “Fluent text = understanding.” Not necessarily: current models can sound right while being wrong (hallucinations).
- “Any chatbot is AI.” Many are scripted; true AI chat uses learned models or reasoning engines.
- “Rules aren’t AI.” They are a classic branch (“symbolic AI”), just not learning-based.
- “AGI is here now.” As of November 3, 2025, no agreed-upon human-level, all-domain benchmark has been passed.
Practical guide: when should you use AI?
- Good fit: pattern recognition at scale (classify, summarize, translate), content generation, forecasting with lots of historical data, ranking/recommendations, anomaly detection.
- Handle with care: safety-critical decisions, long-horizon planning, tasks requiring reliable factual accuracy; keep a human in the loop and log outputs.
- Not a fit: places where simple rules or queries solve it cheaper, faster, and more reliably.
Mini-glossary
- Machine Learning (ML): Systems that learn patterns from data.
- Deep Learning: ML with multi-layer neural networks (e.g., transformers).
- LLM (Large Language Model): A deep model trained on text to predict words; can chat, write, and code.
- Multimodal: Handles more than one input type (text + images + audio + video).
- RAG (Retrieval-Augmented Generation): A model that looks up trusted sources or your documents while generating answers.
- Hallucination: A confident but incorrect output from a model.
- Agent: An AI system that can call tools, browse, write code, or take multi-step actions under constraints.
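To make the RAG entry concrete, here is a toy sketch of the pattern: retrieve the most relevant source, then put it into the prompt. A real system would use vector embeddings and a hosted LLM; the keyword-overlap retriever, document set, and prompt template below are purely illustrative assumptions:

```python
# Toy sketch of Retrieval-Augmented Generation (RAG).
# Real systems use embeddings and an LLM; this keyword-overlap
# retriever and prompt template are illustrative assumptions only.

DOCS = [
    "Refunds are issued within 14 days of purchase.",
    "Orders ship within 2 business days.",
]

def retrieve(question: str) -> str:
    """Pick the document sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(DOCS, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(question: str) -> str:
    """Ground the model's answer in a retrieved source (the RAG step)."""
    context = retrieve(question)
    return f"Answer using only this source:\n{context}\n\nQuestion: {question}"

print(build_prompt("How long do refunds take?"))
```

The point of the pattern is that the model answers from retrieved text rather than from memory alone, which reduces (but does not eliminate) hallucinations.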
Claims Evaluation Framework
We assess transparency, credibility, risk, and BS within the AI industry, using the framework described below.
We track claims made by AI and tech-adjacent companies, their partners, and influencers. Our goal is simple: help people tell the difference between confident marketing and verifiable reality.
How we look at a company
We review each company through eight easy-to-understand lenses. Think of these as the questions we ask before we add something to the Watchlist.
Strategic Relevance
- Does this company clearly fit our mission (e.g., heavy hype, influencer over-reach, or a business model that deserves a closer look)?
Business Model Transparency
- Can regular people understand how the company makes money? Are pricing and revenue sources explained clearly?
Product / Service Credibility
- Does the product do what it says on the tin? We look for independent tests, customer experiences, and any documented issues.
Growth & Market Position
- How big is the company today, and how fast is it growing? We prefer dated facts (not vibes).
Governance, Ethics & Safety
- Are there visible policies for safety, privacy, or responsible AI? Any audits or transparency reports?
Risk & Exposure
- What could go wrong? Regulation, reputation, operations, or finances: if there’s a real-world risk, we call it out.
Media / Public Perception
- How are they talked about across major publications and social platforms? We care about tone, recency, and balance.
Claims & Contradictions
- We track what a company says over time. If two statements don’t line up, and both are dated and verifiable, we highlight that.
Evidence, not guesswork
- Dates matter. Every claim or contradiction we publish is tied to a date that appears on the source itself.
- Multiple sources. We prefer primary documents and at least two independent references.
- Receipts kept. We store quotes and snapshots so readers can see exactly what we saw.
Our Red-Flag Meter
To make things easy, we roll our findings into a simple signal:
- Green (Monitor Lightly): Mostly consistent, minor concerns.
- Amber (Watch Closely): Several questions or mixed signals, worth attention.
- Red (High Risk): Serious red flags or repeat contradictions on core claims.
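For illustration, the rollup can be sketched as a simple decision rule. The inputs and thresholds below are assumptions chosen for the example, not our actual scoring weights:

```python
# Illustrative sketch of the Red-Flag Meter rollup. The inputs and
# thresholds are assumptions for demonstration, not real scoring weights.

def red_flag_meter(concerns: int, contradictions: int,
                   confirmed_fraud_or_regulatory_action: bool = False) -> str:
    """Roll findings into a Green / Amber / Red signal."""
    # A confirmed regulatory action or credible fraud case is a hard red flag.
    if confirmed_fraud_or_regulatory_action:
        return "Red"
    # Repeat contradictions on core claims, or many open concerns: high risk.
    if contradictions >= 3 or concerns >= 5:
        return "Red"
    # Several questions or mixed signals: watch closely.
    if contradictions >= 1 or concerns >= 2:
        return "Amber"
    # Mostly consistent, minor concerns.
    return "Green"

print(red_flag_meter(concerns=1, contradictions=0))  # Green
print(red_flag_meter(concerns=3, contradictions=1))  # Amber
print(red_flag_meter(0, 0, confirmed_fraud_or_regulatory_action=True))  # Red
```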
If we uncover a confirmed regulatory action or credible fraud case, we treat that as a hard red flag.
What we don’t do
- We don’t publish rumor, hearsay, or undated screenshots.
- We don’t make personal attacks. Our focus is on documented statements and actions.
- We don’t accept “trust us” as evidence.
How we update
Companies change. When new, dated information appears, we revisit our notes and update the record. If we get something wrong, we correct it—clearly and quickly.
How you can help
If you spot something we should review, send us:
- A short description of the claim,
- A link to the original source,
- The date visible on that page or document,
- Any additional sources that support or contradict it.
Why this matters
Clear information helps everyone (customers, employees, creators, and investors) make better choices. Our job is to slow things down just enough to check the facts, so the loudest voice isn’t always the one people follow.
