YouTube has announced it will soon start removing AI-generated videos that imitate identifiable individuals, including their voice or likeness. The update, shared in a new policy statement, is expected to take effect in the coming months, likely starting in 2024.
The decision comes amid rising concerns about deepfake videos and AI recreations, especially in the music industry, where artists’ voices are increasingly being replicated without consent. However, YouTube clarified that the removal of such content will not happen automatically; affected individuals or artists will need to submit formal takedown requests.
According to YouTube, the policy change was informed by feedback from creators, viewers, and artists who raised concerns about how new technologies are being used to mimic people without permission or to misrepresent their views. The company acknowledged that synthetic content, especially when it involves a person’s face or voice, can cause serious harm if used irresponsibly.
Policy Criteria and Limitations
YouTube emphasized that not every AI-generated video will be removed. Each takedown request will be reviewed against specific factors. For instance, content that clearly qualifies as satire or parody, or that features public figures or officials, may be held to a higher bar before removal is approved.
The platform also revealed plans to allow music partners, such as labels or distributors, to request the removal of AI-generated music that mimics an artist’s distinctive singing or rapping voice. These requests will initially be open to participants in YouTube’s early AI music programs, with broader access to be rolled out gradually.
When evaluating removal requests related to music, YouTube will consider whether the content is part of news coverage, analysis, or critique of AI-generated vocals.
Mandatory AI Disclosure Coming Soon
In addition to the takedown policy, YouTube will soon require creators to disclose when their videos include synthetic or manipulated content made with AI that appears realistic. This includes fabricated events or altered depictions of individuals saying or doing things they never actually did.
YouTube says this rule is particularly important when videos touch on sensitive issues, such as elections, wars, public health emergencies, or political figures.
According to YouTube executives Jennifer Flannery O’Connor and Emily Moxley, this move is intended to promote transparency and prevent the misuse of AI in misleading or potentially harmful ways. The disclosure requirement, like the removal policy, is expected to be implemented sometime next year.