Hawley Destroys YouTube Exec on AI Deepfakes: 'Teenage Girl's Face in AI Porn -- What's Her Recourse?' Exec: 'Request for Removal'; Hawley: 'YouTube Making Billions Off This. Artists, Creators, Teenagers Whose Lives Are Upended -- They're Losing'

By HYGO News

In a May 2025 Senate hearing, Senator Josh Hawley conducted a devastating cross-examination of a YouTube executive about AI deepfake content on the platform. Hawley: “If a teenage girl’s face ends up in an AI porn video on your platform, what does YouTube do about it? What’s her recourse?” The exec’s response: “We updated our privacy policy so that anybody who believes that their voice or likeness is being used without their authorization can submit a request for removal.” Hawley pushed harder: “Is there some policy for getting reimbursement for profits the company may have made? Does the victim get a share of anything?” The exec couldn’t answer. “Why is it that enforcement of YouTube’s own policy here seems to only happen after videos go viral?” “I do not have the answer to that question.” On Google training data: “You’ve basically clicked the button that wraps in allowing YouTube to give your content to AI and train it without further consent.” Hawley closed: “YouTube is making billions of dollars off of this. The people who are losing are the artists, creators, and teenagers whose lives are upended.”

The Teenage Girl Question

Hawley’s opening question was specific and chilling.

“If a teenage girl’s face ends up in an AI porn video on your platform, what does YouTube do about it?” Hawley asked.

He pressed for specifics: “What’s her recourse right now? What can she do to have to get some recompense, get some restitution?”

The YouTube executive’s response was carefully procedural: “After over a year ago, we updated our privacy policy so that anybody who believes that their voice or likeness is being used without their authorization on our platform can submit a request for removal.”

Hawley’s dissatisfaction was immediate: “A request for removal.”

The exchange captured the core problem. A teenage girl whose face had been manipulated into AI-generated explicit imagery and distributed on YouTube had, according to YouTube’s policy, one recourse: filing a removal request. This implied:

Victim burden: The teenage girl (or her parents) had to discover the content, figure out the process, submit the request, and follow up.

Delayed response: Removal would occur only after YouTube processed the request, with no specified timeline.

No compensation: Removal addressed the ongoing distribution but did nothing about the harm already caused.

No prevention: The policy addressed existing content, not the systematic problem of how such content appeared in the first place.

No accountability: YouTube faced no specific consequences for hosting the content until the moment a removal request was processed.

The teenage victim’s position was unenviable. She had:

  • Been victimized by someone creating the AI content
  • Suffered social and psychological harm from distribution
  • Had to self-advocate to have content removed
  • Received no financial compensation for YouTube’s profits from ads run on the content
  • Had no recourse against YouTube beyond the removal request process

The Profit Question

Hawley pushed on the financial angle.

“Is there some policy in getting reimbursement for any profits the company may have made?” Hawley asked. “Again, if these videos are monetized, I mean, does the victim get a share of anything?”

The executive’s response was evasive: “I’m not aware of those policies. I would have to follow up with you, Senator.”

The question was substantive. YouTube monetizes content through:

  • Advertising revenue shared with content creators
  • Direct subscription revenue (YouTube Premium)
  • Platform advertising not shared with creators
  • Data monetization through Google’s broader advertising systems

When AI deepfake content involving real victims appeared on YouTube:

  • The creator might receive advertising revenue
  • YouTube received its share of advertising revenue
  • Google broadly received data value from viewing patterns
  • The victims received nothing

Hawley’s question implied a policy critique: if content violated users’ rights, should the platform that profited from it have a financial obligation to the victims? Defamation law had developed partial answers for traditional media. Platform law had not caught up to the reality of AI-generated harmful content.

The executive’s “I would have to follow up” response was characteristic of congressional testimony evasions. By claiming unfamiliarity, the executive avoided making either a commitment (which could be used against YouTube) or an admission (which would confirm the unfairness of the current arrangement).

“Only After Videos Go Viral”

Hawley identified a pattern.

“Why is it that the enforcement of YouTube’s own policy here seems to only happen after videos go viral?” Hawley asked. “Is there a reason for that?”

The executive: “I do not have the answer to that question to you.”

Hawley pressed further: “Do you know how many AI-generated deepfake videos or deepfake content is removed before a victim complains? Does the victim have to complain before YouTube does anything?”

The pattern Hawley identified was real. Platforms like YouTube had systematic challenges with harmful content:

  • Massive upload volumes made proactive review impractical
  • AI detection was imperfect and evolving
  • Business incentives rewarded high engagement content (including sometimes harmful content)
  • Reactive moderation (responding to reports) was cheaper than proactive review
  • Viral content created reputational pressure for removal in ways mundane harmful content did not

The result was a systematic bias toward reactive rather than proactive moderation. Harmful content that didn’t achieve viral status could persist for months or years. Only content that generated enough attention to create reputational or legal risk received serious moderation attention.

For victims of AI deepfake content, this pattern meant:

  • Low-viewed content might persist indefinitely
  • Victims had to drive attention to get removal
  • The harm was proportional to visibility, but so was enforcement
  • Obscure victims had less recourse than famous ones

The AI Training Question

Hawley pivoted to a different dimension.

“YouTube training data,” Hawley said. “Has YouTube provided data for you since Google’s Gemini or other AI training programs?”

The executive: “YouTube does provide data in Google training data in accordance with our agreements.”

Hawley followed up specifically: “If an artist uploads music to YouTube, does the company use that music to train AI models?”

The exec: “As I mentioned, we do share data in accordance with our agreements. I can’t speak to the specifics of any individual agreement.”

Hawley raised the protected class: “How are people like Ms. McBride protected? If you’re an artist and you put any content on YouTube, does that mean that it’s just free range? I mean, you can do whatever you want with it?”

The exec: “Again, it goes down to the terms of our agreement. I will say that we have forged deep partnerships with the music industry. We came out of the gate with forming AI music principles with the music industry and are continuing to experiment with them to see how AI can best benefit their creative process.”

Hawley was incredulous: “So, they’re privacy protections? You’re telling me YouTube hasn’t placed privacy protections for artists?”

The exec: “They apply to all individuals on our platform.”

The “Click-Wrap” Scam

Hawley identified the legal mechanism.

“This is the click-wrap scenario,” Hawley said. “This is in order to watch cute dog videos or whatever. You’ve got to click the ‘I consent’ and that wraps in. You basically give consent for your stuff to be used.”

The exec: “There are all different types of various agreements, but our terms of service are included in that batch.”

Hawley pressed: “I guess my question is, where are users told about their privacy protections if they have any and where do they explicitly consent?”

The exec: “They agree to our terms of service, and we also have our privacy policy available on the web.”

Hawley laid out the problem: “Okay, so that’s the click-wrap. So, in other words, if you come onto YouTube, you want to use it, you got to click through. So, you click it and there you basically agreed to allow YouTube to give your content to AI and allow them to train it without any further consent. Is that basically it?”

The exec confirmed: “Again, we implement our policies in terms of our agreement are what govern, what goes into our training.”

Hawley made the artist case: “Well, and I’m asking you the content of that agreement. So, in other words, if I’m an artist and I upload something to YouTube, and yeah, sure, I’ve clicked a button that says, yeah, I want to be able to use YouTube. Are you telling me that I don’t have any further recourse if YouTube then goes and gives the information to AI models and systems, there’s nothing further I can do?”

The exec’s response was devastating: “If it is in accordance with our agreements, we will share that data.”

The Artist’s Trap

The “click-wrap” dynamic Hawley identified was legally significant but morally problematic:

Technical consent: By clicking “I agree” to Terms of Service, users technically consented to data use.

Practical reality: Terms of Service agreements typically ran to tens of thousands of words that users did not read. The “consent” was formal rather than informed.

No alternative: Major platforms had become essential for many activities. Refusing to consent meant being excluded from significant social and economic activity.

One-sided modification: Platforms could modify terms unilaterally with brief notice, and users had to accept or leave.

Scope creep: What platforms said they would do with data at sign-up had evolved dramatically. AI training was not specified in most pre-2020 consent agreements.

For content creators and artists, this created a specific trap. They needed YouTube and similar platforms for distribution. Their terms of service consent included AI training rights. But AI training potentially threatened their artistic livelihood by creating AI systems that could replicate their styles. They had effectively consented, through click-wrap, to funding the systems that would compete with them.

Hawley’s Closing

Hawley delivered his conclusion.

“Yeah, that seems like a big problem to me,” Hawley said. “That seems like a huge, huge problem to me.”

He made the monetization charge: “And the fact that YouTube is monetizing these kind of videos seems like a huge, huge problem to me.”

He acknowledged the exec’s appearance: “I’m glad you’re here today. I wish there were more tech companies here today.”

He issued the call to action: “But we’ve got to do more. I mean, YouTube, I’m sure, is making billions of dollars off of this. The people who are losing are the artists and the creators and the teenagers whose lives are upended.”

He articulated the policy goal: “We’ve got to give individuals powerful and forcible rights in their images, in their property, in their lives back again. Or this is just never going to stop.”

The Policy Landscape

Hawley’s testimony highlighted the policy gap that the TAKE IT DOWN Act (signed by Trump the previous week) partially addressed:

What TAKE IT DOWN Act did: Created federal criminal penalties for non-consensual intimate imagery and platform notice-and-removal obligations.

What TAKE IT DOWN Act did not do: Address underlying AI training data permissions, monetization of harmful content, or proactive moderation requirements.

What remained needed:

  • Compensation frameworks for victims
  • Proactive moderation requirements
  • AI training data transparency
  • Artist rights protections
  • User-controllable data policies
  • Enhanced platform liability frameworks

The fundamental issue was that technology had advanced faster than legal frameworks. AI could generate harmful content at scale; platforms could distribute it globally; monetization could reward creators and platforms; victims had minimal recourse. Legal frameworks designed for analog-era harms could not effectively address digital-era capabilities.

Republican legislators like Hawley and Democratic legislators concerned about technology had found common ground on these issues. Unlike many political topics, AI harms and platform accountability created opportunities for bipartisan legislation. Whether such legislation could pass against technology industry opposition was a different question.

The Broader Context

The YouTube executive’s testimony occurred against the backdrop of:

AI deepfake proliferation: Tools for generating realistic fake imagery had become widely accessible.

Teen suicide cases: Multiple documented cases of teenagers taking their own lives after being victimized by deepfake content distributed at school or online.

Celebrity targeting: High-profile victims, including Taylor Swift, whose deepfakes had earlier drawn massive traffic, had brought attention to the issue.

AI music generation: Generative AI music models trained on copyrighted songs threatened the music industry.

News organization concerns: News organizations had sued various AI companies for using their content for training without compensation.

Artist licensing disputes: Visual artists, writers, and musicians had begun coordinating legal action against AI training without consent.

The YouTube executive’s defensive posture reflected the broader tech industry approach: acknowledge problems exist, cite existing policies (however inadequate), claim collaborative improvement, resist specific accountability measures.

Key Takeaways

  • Hawley: “If a teenage girl’s face ends up in AI porn on YouTube, what does YouTube do?” Exec: “Submit a request for removal.”
  • Enforcement pattern: “Only happens after videos go viral.” Exec: “I do not have the answer to that question.”
  • AI training via click-wrap: “You click the consent to watch dog videos and YouTube gives your content to AI without further consent.”
  • Artist trap: Click-wrap agreements included AI training rights; artists couldn’t access platforms without agreeing.
  • Hawley closing: “YouTube is making billions. The artists, creators, and teenagers whose lives are upended — they’re losing.”