Nov 15

YouTube to Require Disclosure When Videos Include Generative AI

YouTube, the video platform owned by Alphabet Inc.'s Google, will soon require creators to disclose when they upload manipulated or synthetic content that looks realistic, including videos made with artificial intelligence tools.

The policy update, set to take effect in the new year, will apply to videos that use generative artificial intelligence tools to realistically depict events that never happened or to show people saying or doing things they didn't actually do.

"This is especially important in cases where content discusses sensitive topics such as elections, ongoing conflicts, crises in public health, or government officials," the company said.
When content is digitally altered or generated, creators must select an option that displays a new warning label in the video's description panel.

For content on certain sensitive topics, such as elections, ongoing conflicts, and public health crises, YouTube will display the label more prominently on the video player itself.

The company stated that it will work with creators before the rules come into effect to ensure they understand the new requirements, and is developing its own tools to detect rule violations.

YouTube also says it will automatically label content created with its own generative artificial intelligence tools.

Google, which both builds tools capable of generating AI content and owns platforms that can distribute that content globally, faces mounting pressure to deploy the technology responsibly.

Kent Walker, the company's chief legal officer, published a blog post outlining its "AI Opportunity Agenda," a white paper with policy recommendations aimed at helping governments around the world navigate developments in artificial intelligence.

"Responsibility and opportunity are two sides of the same coin," Walker said in an interview. "It's important that even as we focus on the responsible side of the narrative, we don't lose sight of the excitement and optimism about what this technology can do for people around the world."

Like other user-generated media services, Google and YouTube are under pressure to curb the spread of misinformation on their platforms, including falsehoods about elections and global crises like the COVID-19 pandemic.

Google has already begun addressing concerns that generative AI could fuel a new wave of misinformation, announcing in September that it would require "prominent" disclosures for AI-generated political ads.

The company also noted that YouTube's community guidelines, which prohibit digitally manipulated content posing a serious risk of public harm, already apply to all videos uploaded to the platform.

Venture Firms Commit to Voluntary Principles for AI Startups
The new guiding principles are part of an effort to establish guardrails for the potentially thousands of startups in the artificial intelligence industry.

About three dozen venture capital firms have signed on to a set of voluntary commitments, developed with input from the Biden administration, governing how the startups they back should responsibly develop artificial intelligence.

Responsible Innovation Labs (RIL), a nonprofit coalition of investors and technology leaders, plans to release details of the new guidelines, which include commitments to build organizational support for responsible AI within startups, to forecast AI risks and benefits, and to conduct audits and testing to ensure product safety.

In accordance with President Joe Biden's recent executive order on artificial intelligence, the Department of Commerce was tasked with establishing assessment standards for AI companies. However, these mandatory assessments will apply only to firms creating the most powerful AI models.

The new voluntary guidelines, while open-ended, extend some of those guardrails to the potentially thousands of startups that fall outside the executive order's scope.

Among the 35 firms that initially signed the commitments are General Catalyst, Felicis Ventures, Bain Capital, IVP, Insight Partners, and Lux Capital.

In addition to the recommendations, RIL is also releasing a more detailed protocol for startups and investors seeking to implement the principles, including suggestions on how to structure responsible AI teams, conduct safety testing, and engage with external stakeholders.
"It adds a kind of additional level of hygiene to how you manage investments," said Ganesh Bell, managing director at Insight Partners. "Given our investments in infrastructure and AI companies, we know that it's really important to build trust. It's important to ensure safety."

Commerce Secretary Gina Raimondo said the announcement reflects President Biden's comprehensive approach of leaving no stone unturned in seizing AI's opportunities while protecting people from its risks. She added that the voluntary agreements demonstrate important leadership from the private sector.

Because the standards are not mandatory, compliance will rest with the venture firms and startups themselves.

Gaurav Bansal, executive director of RIL, expressed hope that companies will adhere to the new principles. "We think founders will keep an eye on making sure that venture-capital signatories live up to their commitments, and vice versa," he said.