YouTube on Monday announced it will give creators more choice over whether third parties can use their content to train AI models. Starting today, creators and rights holders will be able to flag for YouTube that they're permitting specific third-party AI companies to train models on their content.
From a new setting within the creator dashboard, YouTube Studio, creators will be able to opt into this feature if they choose. There, they'll see a list of 18 companies they can authorize to train on their videos.
The companies on the initial list include AI21 Labs, Adobe, Amazon, Anthropic, Apple, ByteDance, Cohere, IBM, Meta, Microsoft, Nvidia, OpenAI, Perplexity, Pika Labs, Runway, Stability AI, and xAI. YouTube notes these companies were chosen because they're building generative AI models and are likely sensible partners for creators. However, creators will also be able to select an "All third-party companies" option, which allows any third-party company, even one not on the list, to train on their content.
The company also notes that eligible creators are those with administrator access to the YouTube Studio Content Manager. They'll be able to view or change their third-party training settings within their YouTube Channel settings at any time.
Following the rise of AI technology, and particularly AI video generators like OpenAI's Sora, YouTube creators complained that companies including Apple, Nvidia, Anthropic, OpenAI, and even Google itself had trained AI models on their material without their consent or compensation. This fall, YouTube said it would address the issue in the near future.
But while the new setting controls access by third parties, the company tells TechCrunch that Google will continue to train its own AI models on some YouTube content, in accordance with its existing agreement with creators. The setting also doesn't otherwise change YouTube's Terms of Service, which prohibit third parties from accessing creator content in unauthorized ways, such as scraping.
Instead, YouTube sees this feature as a first step toward making it easier for creators to permit companies to train AI on their videos, and perhaps to be compensated for that training. In the future, YouTube will likely tackle the next step of this process by allowing the companies creators have authorized to access direct downloads of their videos.
With the feature's introduction, the default setting for all creators will not allow third parties to train on their videos. That makes it more explicit that any company that has already trained on creators' content did so against their wishes.
YouTube was unable to say whether the new setting could have any retroactive impact on third-party AI model training that has already taken place. But the company says its Terms of Service indicate that third parties cannot access creator content without authorization.
The company first unveiled its plans to offer creator controls for AI training in September, when it also announced new AI detection tools designed to protect creators, artists, musicians, actors, and athletes from having their likenesses, including their faces and voices, copied and used in other videos. The detection technology would expand on YouTube's existing Content ID system, which previously focused only on copyright-protected material, the company explained at the time.
Creators globally will be alerted to the new feature via banner notifications in YouTube Studio on desktop and mobile over the next few days.
Separately, Google’s AI research lab DeepMind announced a new video-generating AI model, Veo 2, on Monday, which aims to rival OpenAI’s Sora.