
Why Social Feeds Are Now Filling Up With AI Influence

  • Writer: The fyi Lab Team
  • Nov 4
  • 5 min read

Updated: Nov 15

[Image: robot casting light onto social media logos]

If your scroll feels weird lately, you are not imagining it. The internet did not suddenly get more creative; it just got cheaper to produce. AI now drafts the text, paints the photos, and even spits out full videos. Some accounts are 100% synthetic people who never existed. Many others are real creators, but their posts, captions, thumbnails, and even faces are heavily AI-assisted. The fact is, social feeds are filling up with fakes.


This is not a niche tech story. It matters to anyone who buys things, votes, or shares posts with family. Below is what is actually happening, what the best numbers say, and how to stop getting fooled.

The Instagram “people” who are not people


You have seen them: flawless trips, flawless skin, flawless everything. In mid-2025, an AI influencer named Mia Zelu drew mainstream attention after “attending” Wimbledon. She did not. She is a synthetic persona. Even so, she grew quickly into the six-figure follower range over the summer and has continued to climb. The bigger point: these accounts no longer look like cartoons. They pass at a glance, and many users do not notice or do not care.


Why this matters: a synthetic face can move real money and opinions. When a not-real person says “this mascara changed my life” or “this candidate is the only honest one,” there is no actual experience behind it. There is a prompt, an editing pass, and a publishing schedule.


LinkedIn and Reddit: the “ordinary” internet is machine-written too


On LinkedIn, a large 2024 analysis estimated that about 54% of longer posts (100+ words) were likely AI-generated. That tracks with the product itself, which ships writing helpers that make posting faster. A lot of “wisdom” on LinkedIn starts as a machine draft. Treat it like a polished brochure, not a diary.


On Reddit, detector studies show a steady rise in AI-written posts. One rollup found roughly 13% of posts in 2024 were likely AI-generated, with higher spikes in writing-heavy communities. Follow-up analysis through 2025 shows the direction is still up. Detectors are not perfect, but the trend is clear.


What this means for you: a smooth paragraph from a throwaway account should not auto-earn your trust, especially if it is giving health, finance, or political advice. Read it like an ad unless it cites something solid.


YouTube’s new normal: AI-only channels can still go viral


In July 2025, an analysis of channel growth found that 9 of the 100 fastest-growing YouTube channels that month were posting only AI-generated videos. That is not fringe. It proves the fully synthetic format can grow fast if it hits the right topics and thumbnails. YouTube says it is working to reduce repetitive, low-quality spam, but growth stories keep popping up.


What it means: when a channel uploads daily with the same cadence, same voice, and the same uncanny edits, assume automation. If you are about to share it as “proof,” find a better source first.


The flood behind the feed: AI “news” sites


There is also a background problem: a growing swamp of low-effort “news” sites that are mostly AI-written. Independent tracking shows more than two thousand AI-generated news and information sites operating with little or no human oversight. Those posts get scraped and amplified by social algorithms like anything else, which makes your feed look legit when it is not.


Why this matters: when a friend shares a slick-looking “local news” link you have never heard of, there is a real chance it is a content farm. If the source layer is polluted, the feed is polluted.


“But platforms label AI now, right?” Yes, and it is not enough


Major platforms rolled out wider AI labeling in 2024. Labels rely on metadata from creative tools, watermarks, and some platform detection. The issue is visibility and design. Small, quiet labels rarely change behavior. Bigger, obvious labels help, but platforms use them less often because they add friction. That is the tension: clarity versus clicks.


Academic work in 2025 points to the same conclusion. Labels can help, but small badges do not meaningfully change what people believe or share. If labels are going to work, they need to be more obvious and appear earlier in the experience. Until then, do not outsource your judgment to a tiny gray tag under a username.


A fast-spreading rumor that needs verification


You have probably seen this line: “71% of images on social media are AI-generated.” It appears in slides and posts across the web. There is no primary, platform-level source behind it. It is a catchy number, not a verified measurement. Treat it as unverified until a platform or independent audit publishes real share-of-content data.


So how fake is it, really? The short, honest answer


  • LinkedIn: around 54% of longer posts were likely AI-generated as of late 2024.

  • Reddit: about 13% of posts in 2024 were estimated as AI-generated; some communities are much higher; trend rising through 2025.

  • YouTube: 9 of the top-100 fastest-growing channels in July 2025 were AI-only.

  • The wider web behind social: more than two thousand AI “news” sites are already live and growing.

  • Labels: real but often small; impact depends on how prominent they are and when they appear.

  • Everything else, like global “most images are AI” claims, is guesswork.


How not to get fooled (simple, real-world checklist)


  1. Pause on perfect. If a person, product, or place looks flawless in every post, assume heavy editing or that the “person” is not a person. Check the bio. Many synthetic accounts disclose it. If you see an “AI-generated” label, tap to read what it actually means.

  2. Click through the source. If a post makes a strong claim (health, finance, politics), follow the link. If the site is unknown, search for basic credibility checks or look for coverage by outlets you recognize. If it is a farm, you will usually find out fast.

  3. Scan the profile. New account, no tagged friends, same lighting and vibe across dozens of photos, and captions that read like a template: assume automation.

  4. Watch the pace. Daily uploads with the same formula and voice are a red flag on YouTube or TikTok. It does not make the content evil, just unreliable as “evidence.”

  5. Treat detectors as a hint, not a verdict. Detection tools give probabilities and can be wrong. Use them to guide healthy skepticism, not to accuse people.

  6. Fact before virality. If a post hits your emotions hard (fear, outrage, miracle cures), that is your cue to slow down. Viral does not mean verified.

For brands and creators (because you are part of this too)


  1. If you use AI, say so. Audiences do not like being tricked. Clear disclosure is better than a call-out later.

  2. Do not let the model write everything. Use AI for drafts or visuals, then edit like a human who cares. That is how you keep trust.

  3. If you hire AI “talent,” set rules. Require a label on the creative, avoid unrealistic body norms, and do not use AI “reviews” for products people need to actually test. That is the line between clever and gross.
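The "hint, not a verdict" idea from the checklist can be made concrete: instead of acting on a detector's probability alone, weigh it alongside other weak signals like account age, posting cadence, and whether claims cite sources. This is a minimal illustrative sketch; every threshold and weight here is an assumption for demonstration, not a value from any real detection tool.

```python
# Sketch: combine a hypothetical detector probability with other profile
# signals instead of trusting it alone. All weights and cutoffs below are
# illustrative assumptions, not values from any real product.

def suspicion_score(detector_prob, account_age_days, posts_per_day, cites_sources):
    """Return a 0-1 suspicion score built from several weak signals."""
    score = 0.0
    score += 0.4 * detector_prob                      # detector is only part of the picture
    score += 0.2 if account_age_days < 30 else 0.0    # brand-new account
    score += 0.2 if posts_per_day > 5 else 0.0        # relentless, automated cadence
    score += 0.2 if not cites_sources else 0.0        # strong claims, no links
    return min(score, 1.0)

def verdict(score):
    if score >= 0.7:
        return "treat as likely automated; verify before sharing"
    if score >= 0.4:
        return "be skeptical; check the source"
    return "no strong red flags"

if __name__ == "__main__":
    # A fresh account posting constantly, high detector score, no sources:
    s = suspicion_score(0.9, account_age_days=10, posts_per_day=12, cites_sources=False)
    print(round(s, 2), "->", verdict(s))  # prints 0.96 -> treat as likely automated; verify before sharing
```

The point of the design is that no single signal, including the detector, can push the score past the "likely automated" line by itself; it takes several red flags agreeing before you escalate from skepticism to distrust.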


Bottom line


Social media did not become more honest. It became automated. Real people still make great stuff, but now they sit next to AI personas and AI-only channels designed to crank out content on a schedule. Labels help a little, but not enough. Your best defense is to slow down, check sources, and remember that the most polished post is often the least reliable.
