I used AI to analyze YouTube’s algorithm... and here’s why your news content might be getting "shadowbanned" 👇
Before we get into it: I post a lot of AI x YouTube threads. Bookmark this one so you can come back and re-read it later 👇
1. Researchers ran the most detailed YouTube audit ever

They built over 100,000 fake users (sock puppets): bots trained to simulate real people across 5 ideologies:
- very-left
- left
- center
- right
- very-right

Each bot watched 100+ politically aligned videos, then followed YouTube's recommendations - homepage, autoplay, everything (rough sketch of the loop below)
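For intuition, here's roughly what that audit loop looks like in code. This is a minimal Python sketch, not the papers' actual harness: `watch_video` and `get_homepage_recs` are hypothetical stand-ins for the instrumented browsers the researchers drove, and all names here are invented for illustration.

```python
import random

IDEOLOGIES = ["very-left", "left", "center", "right", "very-right"]

def watch_video(bot_id: str, video_id: str) -> None:
    """Hypothetical stand-in: in the real audits, an instrumented
    browser logged into this bot's account and played the video."""
    pass

def get_homepage_recs(bot_id: str) -> list[str]:
    """Hypothetical stand-in: scrape the homepage recommendations
    currently shown to this bot's account."""
    return [f"rec_{random.randint(0, 999)}" for _ in range(20)]

def run_sock_puppet(bot_id: str, training_videos: list[str]) -> list[str]:
    # Phase 1: build a watch history of 100+ ideologically aligned videos
    for vid in training_videos:
        watch_video(bot_id, vid)
    # Phase 2: record what the algorithm now serves this persona
    return get_homepage_recs(bot_id)

# One bot per ideology shown here; the real studies ran many per bucket
for ideology in IDEOLOGIES:
    training = [f"{ideology}_video_{i}" for i in range(100)]
    recs = run_sock_puppet(f"bot_{ideology}", training)
    print(ideology, len(recs), "homepage recs collected")
```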
2. Brand-new accounts see mostly center & left-leaning news

Researchers created fresh, clean accounts with no watch history

The result?
→ Homepage recommendations were mostly Center or Mainstream Left
→ Far Right or Far Left content was nearly invisible

Even without clicking anything, YouTube already nudges you Left
3. Once you train the algorithm, bias shows up - fast

Trained right-leaning bots saw more ideologically consistent recommendations
Trained left-leaning bots did too - but there's a catch:

📊 It took 27 videos to shift a bot into Far Left territory
⚠️ But just 2 videos to push it out of Far Right

YouTube actively resists right-leaning content... and clings longer to left-leaning patterns (sketch of the measurement below)
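How would you even measure "videos until the feed flips"? A rough sketch, assuming each homepage snapshot has already been classified into ideology labels. The snapshot data below is made up, not from the papers:

```python
from collections import Counter

def majority_label(rec_labels: list[str]) -> str:
    """Most common ideology label among one page of recommendations."""
    return Counter(rec_labels).most_common(1)[0][0]

def videos_until_shift(homepage_snapshots: list[list[str]], target: str) -> int | None:
    """homepage_snapshots[i] = ideology labels of the homepage recs
    observed AFTER the bot watched its (i+1)-th training video.
    Returns how many videos it took before `target` became the
    majority label, or None if it never did."""
    for i, labels in enumerate(homepage_snapshots, start=1):
        if majority_label(labels) == target:
            return i
    return None

# Toy data (invented): here the feed flips after 3 watched videos
snapshots = [
    ["center"] * 12 + ["left"] * 8,
    ["center"] * 10 + ["left"] * 10,
    ["left"] * 14 + ["center"] * 6,
]
print(videos_until_shift(snapshots, "left"))  # -> 3
```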
4. Fake news is NOT what's driving this bias

To test the "misinformation excuse," the researchers ran a fake-news detection model on 11.5 million videos

What they found:
- Fake/misleading content was negligible
- It was evenly distributed across Left and Right

So the suppression isn't about trust or truth; it's about algorithmic preference (sketch of the check below)
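The misinformation check boils down to: classify every recommended video, then compare flag rates across ideologies. A hedged sketch - `is_fake_news` is a hypothetical stand-in for the papers' trained detection model, and the sample rows are invented:

```python
from collections import defaultdict

def is_fake_news(video_meta: dict) -> bool:
    # Hypothetical stand-in: the real model scored video/channel signals
    return video_meta.get("flagged", False)

def flag_rate_by_ideology(videos: list[dict]) -> dict[str, float]:
    """Fraction of videos flagged as fake news, per ideology bucket."""
    total, flagged = defaultdict(int), defaultdict(int)
    for v in videos:
        total[v["ideology"]] += 1
        flagged[v["ideology"]] += is_fake_news(v)
    return {k: flagged[k] / total[k] for k in total}

sample = [
    {"ideology": "left", "flagged": False},
    {"ideology": "left", "flagged": False},
    {"ideology": "right", "flagged": True},
    {"ideology": "right", "flagged": False},
]
print(flag_rate_by_ideology(sample))  # {'left': 0.0, 'right': 0.5}
```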
5. Homepage recommendations are ideologically "congenial"

Once bots were trained, the homepage became an echo chamber:
- Right bots got mostly right content
- Left bots got mostly left content
- Far-right bots saw the most ideological consistency

BUT: the algorithm still injected occasional cross-cutting content
→ Left bots saw some Right videos
→ Right bots saw almost none from the Left

(a toy congeniality metric is sketched below)
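A simple way to quantify "congenial": the share of a bot's recommendations that match its own ideology. Toy sketch - the feeds below are invented, only the metric mirrors the papers' idea:

```python
def congeniality(bot_ideology: str, rec_labels: list[str]) -> float:
    """Share of recommendations matching the bot's own ideology."""
    return sum(label == bot_ideology for label in rec_labels) / len(rec_labels)

# Invented toy feeds illustrating the asymmetry described above:
left_bot_feed = ["left"] * 14 + ["center"] * 4 + ["right"] * 2
right_bot_feed = ["right"] * 17 + ["center"] * 3   # zero "left" recs

print(congeniality("left", left_bot_feed))    # 0.7
print(congeniality("right", right_bot_feed))  # 0.85
```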
6. Autoplay pushes right-leaning users deeper

Researchers followed YouTube's "up-next" trail - autoplay, no user clicks

For right-leaning bots:
- Recommendations got more ideologically extreme as autoplay continued
- Exposure to "very-right" videos increased by 37% over 20 recs

For left-leaning bots:
- Exposure stayed flat
- No meaningful rise in Far Left content

Autoplay = passive radicalization path… but only in one direction (sketch below)
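Following the autoplay trail is just "take the up-next video" repeated N times, then comparing extreme exposure early vs. late in the trail. A toy sketch - `UP_NEXT` and `LABEL` are invented stand-ins for data the audit pulled live from the player:

```python
UP_NEXT = {"a": "b", "b": "c", "c": "d", "d": "d"}     # toy up-next chain
LABEL = {"b": "right", "c": "right", "d": "very-right"}

def autoplay_trail(start: str, hops: int) -> list[str]:
    """Follow 'up next' for `hops` steps with no user clicks,
    returning the ideology label of each video along the way."""
    trail, vid = [], start
    for _ in range(hops):
        vid = UP_NEXT[vid]
        trail.append(LABEL[vid])
    return trail

def extreme_share(labels: list[str]) -> float:
    return labels.count("very-right") / len(labels)

trail = autoplay_trail("a", 20)
early, late = trail[:10], trail[10:]
print(f"early: {extreme_share(early):.0%} -> late: {extreme_share(late):.0%}")
# early: 80% -> late: 100% (toy numbers; the paper reports a 37% rise)
```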
7. Problematic channels do show up - but not in huge numbers

Researchers used existing lists of "problematic" channels (alt-right, conspiracies, QAnon, etc.)

Results:
- These made up only ~2.5% of total recommendations
- BUT: over 36% of users encountered at least one

Very-right bots saw them the most, especially deeper in the autoplay trail (how those two stats fit together is sketched below)
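Those two numbers (~2.5% of recs, 36%+ of users) aren't contradictory: a rare item spread thinly still touches most accounts. A toy sketch of both statistics - `PROBLEMATIC` and the rec logs are invented stand-ins for the published channel lists and the audit data:

```python
PROBLEMATIC = {"chan_x", "chan_y"}  # toy stand-in for published lists

# bot id -> channel of every recommendation that bot received (toy data)
recs_by_bot = {
    "bot_1": ["chan_a", "chan_b", "chan_x", "chan_a"],
    "bot_2": ["chan_c", "chan_c", "chan_d", "chan_d"],
    "bot_3": ["chan_y", "chan_a", "chan_b", "chan_b"],
}

# Stat 1: share of ALL recommendations coming from flagged channels
all_recs = [c for recs in recs_by_bot.values() for c in recs]
rec_share = sum(c in PROBLEMATIC for c in all_recs) / len(all_recs)

# Stat 2: share of bots that saw at least one flagged channel
bot_share = sum(
    any(c in PROBLEMATIC for c in recs) for recs in recs_by_bot.values()
) / len(recs_by_bot)

print(f"{rec_share:.1%} of recs, {bot_share:.0%} of bots saw >= 1")
# -> 16.7% of recs, 67% of bots saw >= 1
```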
8. Creators don't need a strike to lose reach

YouTube doesn't need to ban you to bury your content

The papers show 3 ways your reach can vanish:
- You're not shown on the homepage
- You're excluded from autoplay trails
- New users never discover your niche

No warnings. No flags. Just algorithmic invisibility
9. Even mainstream news channels weren't safe

They trained bots using only Fox News or MSNBC videos

Guess what?
- Fox-trained bots saw recommendations from alt-lite (hard right) and conspiracy channels
- MSNBC-trained bots mostly saw mainstream or neutral content

Same starting point. Different algorithmic future
10. Bottom line: YouTube isn't banning political content - it's nudging the narrative

Its algorithm:
- favors Center & Left news by default
- suppresses Far Left & Far Right
- pulls users away from right-leaning niches faster and more aggressively

Your content isn't broken. The playing field is
tl;dr: YouTube isn't outright banning political content. It's shaping what people see - subtly, consistently, and at scale

→ Left and Center content is algorithmically favored
→ Right-leaning creators face faster decay and limited discovery
→ The deeper you go, the more filtered your feed becomes

No strikes or bans, just "silence"
If you want to dig deeper, here are the research papers:

1. "Auditing YouTube's Recommendation System for Ideologically Extreme Content" (2022) - Magdalena Wojcieszak, et al.
2. "YouTube's Recommendation Algorithm Is Left-Leaning in the United States" (2023) - Manoel Horta Ribeiro, et al.
If you made it this far and still haven't followed, you're about to experience 7 years of extreme bad luck

See you in tomorrow's thread
And as always, join the newsletter for more AI x YouTube content - http://www.ytdojo.io

