The Blanket Fort Series Ch. 3

Not All Streaming Is Created Equal: YouTube & the AI Problem

What Every Parent Needs to Know About YouTube Right Now

Many parents have a memory that goes something like this: we needed twenty minutes. Dinner wasn't going to make itself, or we were on a work call that ran long, or we just needed to sit down for a second. So we handed over a tablet with YouTube pulled up, found a Bluey playlist, and thought: fine, this is fine.

And for a while, it was.

YouTube had a moment as the new Saturday-morning cartoons. It was a place where kids could find the shows they loved, craft tutorials, funny animal videos, and Minecraft walkthroughs. We understood the basic deal. We didn't love the ads; we kept half an eye on them, and mostly it was okay.

That deal has changed. And most of us haven't gotten the update.

What's Actually Happening on YouTube Right Now

Here's the part nobody put in a push notification:

The internet is being flooded with AI-generated content at a scale that no moderation system was built to handle. Cheap, algorithmically optimized, and sometimes genuinely disturbing videos are pouring onto platforms faster than human moderators can review them. YouTube is not exempt. In fact, because of its scale, with over 500 hours of content uploaded every single minute, it may be the most affected platform our kids are on.

This isn't a glitch. It's a structural problem, and it has two layers:

Layer one is volume. The sheer quantity of AI-generated content has overwhelmed content moderation teams. Videos that would have been flagged and removed a few years ago are slipping through because the system isn't designed for this pace. "Kid-friendly" as a filter no longer means what it used to.

Layer two is the algorithm. YouTube's recommendation engine isn't optimizing for what's good for our kids. It's optimizing for watch time and "engagement". Those are not the same thing. The algorithm doesn't know or care that we'd prefer our seven-year-old stay on Bluey. It knows that slightly more stimulating content keeps kids watching slightly longer. And so it nudges. And nudges. And nudges.

Jonathan Haidt has spent years documenting what happens to children when they're handed over to algorithmically driven platforms without guardrails. The core finding isn't complicated: the algorithm is not on our side, and it was never designed to be. It is a business tool optimized for "engagement". Our children's well-being is not part of the equation.

A Jury Just Confirmed What We Suspected

In March 2026, a California jury found Meta and YouTube negligent, concluding that their platforms were deliberately engineered to be addictive, that executives knew it, and that the companies failed to protect children anyway.

Internal documents shown during the trial revealed that Meta executives described a strategy to attract kids as young as 11, noting that those users were four times as likely to keep returning to Instagram as users of competing apps. One memo read simply: "If we wanna win big with teens, we must bring them in as tweens."

The plaintiff's lead attorney put it plainly: "How do you make a child never put down the phone? That's called the engineering of addiction."

This isn't a fringe argument anymore. A jury of twelve people heard five weeks of testimony from therapists, engineers, and tech executives, including Mark Zuckerberg himself, and said: yes, this was deliberate. The case is already being compared to the Big Tobacco litigation of the 1990s. Thousands of similar lawsuits are waiting in the pipeline.

The Three-Click Problem

Here's what this looks like in practice.

A child opens YouTube and finds a Paw Patrol video. That's fine. Then autoplay rolls to a Paw Patrol fan animation, which is still mostly fine, if a little weird. Then comes a video of someone screaming while playing a Paw Patrol game full of disturbing distortions, and now we're lost.

Three clicks. Sometimes fewer.

This isn't a theoretical risk. Parents in online groups describe it constantly. Pediatricians are seeing it in their offices. The pathway from something innocuous to something a young child should never see can be shockingly short on a platform this large.

And here's the part that makes it harder: our kids often don't come and tell us. Sometimes they don't have the words for what they saw. Sometimes they sense they'd get in trouble. Sometimes the compulsion loop already has them and they want to keep watching. They process it alone. And that's when the real damage happens.

Dr Becky Kennedy talks about the importance of staying in our kids' inner world and being the person they come to when something feels confusing or scary. But we can only be that person if they feel safe enough to come to us. A child who watched something disturbing on YouTube and felt ashamed about it is a child who's processing that alone. The platform created the exposure. The shame seals it in.

YouTube Kids: Better, But Not a Solution

A lot of us moved to YouTube Kids when we started feeling uneasy about regular YouTube. And YouTube Kids is genuinely better. The content pool is smaller, there's no comment section, and the ads are more restricted.

But "better" and "safe" aren't the same thing.

The same AI content problem affects YouTube Kids. It operates at a smaller scale, but it's there. The moderation is more active, but the amount of problematic content that slips through remains nontrivial. And the algorithmic logic is the same: optimize for watch time, not for children's development.

YouTube Kids is a harm-reduction tool, not a solution. It requires the same active supervision as regular YouTube. If we're handing kids YouTube Kids and walking away, we're getting less risk, but we're not getting safety.

This Isn't About Fear. It's About a Better System.

Here's where we want to be clear: this isn't a call to terrify ourselves or institute a screen-time crackdown that makes everyone miserable. The goal isn't fear. The goal is a better system.

The good news is that genuinely good alternatives exist: platforms that were actually built for children, that don't use engagement-maximizing algorithms, and that don't depend on us to vigilantly supervise every second.

PBS Kids Video App: Free, high-quality, no ads, no algorithm. The content is curated by humans whose goal is children's development. This is the closest thing available to Saturday-morning cartoons: a finite, trustworthy menu. It doesn't try to keep kids watching as long as possible. It just shows them good shows.

Khan Academy Kids: Educational, curated, no rabbit holes. Genuinely engaging for younger kids without the compulsion-loop design.

Toca Boca Apps: Open-ended creative play. No ads, no algorithm, no "up next." Kids build, create, and stop when they're done.

Disney+, Netflix, Apple TV Kids Profiles: Curated libraries with a much smaller and more controlled content pool. They still benefit from supervision, but the algorithmic nudge problem is significantly reduced. A show ends. The next one loads. There's no stranger's video lurking three clicks away.

Downloaded Content: Buy or rent specific shows and keep them offline. No algorithm involved whatsoever. This is genuinely the most controlled option for young kids: the digital equivalent of a DVD shelf.

Practical Steps for Right Now

Turn off autoplay. This is the single highest-leverage change. Autoplay is how the algorithm does its work. Without it, each video is a choice, not a slide.

Build a playlist in advance. If YouTube is happening in your house, spend fifteen minutes building a playlist of approved content before screen time starts. Give them the playlist, not the platform.

Stay in the room. Not hovering, but present. The whole session goes differently when a parent is watching, too.

Set the done point before they start. "We're watching two videos from this playlist, then we close it." The done point needs to exist before the loop starts, because it will never come from inside the app.

Have the conversation without the screen in front of them.

"I want to tell you something about YouTube. It's designed to keep you watching by always suggesting something a little more exciting than what you just watched. That's not your fault, it's how it's built. That's why we have rules about it. It's not because I don't trust you. It's because I don't fully trust the app."

Even young kids can understand this at a basic level. And naming it removes the shame from the equation.

If They've Already Seen Something

It happens. Despite our best systems, kids see things they shouldn't.

Dr Becky's framework here is straightforward and worth keeping close: our job isn't to make the moment not have happened. Our job is to stay regulated ourselves, stay connected to our kids, and create the conditions where they feel safe enough to tell us.

If our child comes to us with something disturbing they saw:

  • Lead with curiosity, not alarm. Our reaction teaches them whether it's safe to come to us next time.

  • Don't interrogate. "Tell me what you saw" is less useful than "How did that make you feel?"

  • Pin what happened on the platform, not on them. "That video shouldn't have been on there. That's a problem with the app, not with you."

  • Close the loop. "I'm really glad you told me. That's exactly what I want you to do."

The goal isn't to prevent every hard thing. The goal is to be the person they come to when hard things happen.

A Note on Other Caregivers

We're not the only adults in our kids' lives, and we can't control every screen in every house. A grandparent, a babysitter, a friend's parent: YouTube gets opened. It happens.

A simple script that doesn't require a debate:

"We're taking a break from YouTube right now because of some safety concerns with AI content. Here are the shows and apps we're comfortable with: [list]. Thanks so much for understanding."

No lecture needed. No convincing required. We're just giving the information and moving on.

The Bottom Line

YouTube was built for adults. Its moderation was built for a pre-AI content world. Its algorithm was built to maximize engagement, not to protect children.

None of that makes us bad parents for having used it. We were working with the information we had. But we have new information now, and that changes what the responsible move looks like.

We don't have to throw out screens. We just have to be pickier about which screens, the same way we're picky about playgrounds, friend groups, and which movies get watched on a Friday night. Curation is parenting. It always has been.

We just have a few more platforms to curate than our parents did.

Next up: Co-Viewing, and how watching together turns passive screen time into one of the most powerful connection tools we have.

