Inside Audio Marketing

Proposed Creation Of Federal Transparency Guidelines For AI Has NAB Backing.


It is not just politicians who face the prospect of deepfakes. So do other high-profile personalities, including broadcasters, journalists, actors and artists. That is one reason the National Association of Broadcasters is one of the supporters of an effort in Congress to create new federal transparency guidelines for marking, authenticating and detecting AI-generated content.


Supporters say it would protect creators against AI-driven theft and hold violators accountable for abuses.


The proposed Content Origin Protection and Integrity from Edited and Deepfaked Media Act, or COPIED Act, directs the National Institute of Standards and Technology, in consultation with the U.S. Patent and Trademark Office and the U.S. Copyright Office, to develop voluntary standards for watermarking, content provenance information and the detection of synthetic content, including evaluation, testing and cybersecurity protections. Federal agencies would also help develop technologies to label and detect deepfakes, as well as launch a public education campaign.


The bill would also require developers and deployers of AI systems and applications used to generate synthetic content to give users, within two years, the option to attach content provenance information showing ownership of the data and how it is used.
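
The bill leaves the technical definition of “content provenance information” to NIST, but existing industry approaches, such as the C2PA specification, pair the content with a manifest recording who generated it, with what tool, and a cryptographic hash binding the record to the file. The sketch below illustrates the general idea only; the field names, the tool name and the sidecar-style output are all assumptions, not anything prescribed by the COPIED Act.

```python
import hashlib
import json
from datetime import datetime, timezone

def attach_provenance(content: bytes, generator: str, owner: str) -> dict:
    """Build a minimal provenance manifest for AI-generated content.
    All field names are illustrative; a real standard (e.g., whatever
    NIST specifies under the COPIED Act) would define its own schema
    and would cryptographically sign the manifest."""
    return {
        "claim_generator": generator,  # tool that produced the content
        "content_owner": owner,        # party asserting ownership
        "created": datetime.now(timezone.utc).isoformat(),
        # The hash binds the manifest to these exact bytes, so any later
        # edit to the content invalidates the provenance record.
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "ai_generated": True,          # the disclosure the bill targets
    }

# Hypothetical usage: 1 KB of stand-in bytes in place of synthetic audio.
audio = b"\x00" * 1024
manifest = attach_provenance(audio, "ExampleTTS v1", "WXYZ-FM")
print(json.dumps(manifest, indent=2))
```

In deployed systems the manifest is typically signed and embedded in the asset itself rather than stored alongside it, so the provenance travels with the content.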


The bill would also give broadcasters, newspapers, artists and other content owners the right to sue platforms or others who use their content without permission. It would prohibit the use of digital representations of copyrighted works either to train an AI- or algorithm-based system or to create synthetic content without the creators’ consent, including compensation.


Sen. Maria Cantwell (D-WA), who introduced the bill with Sens. Martin Heinrich (D-NM) and Marsha Blackburn (R-TN), said the bill will provide much-needed transparency around AI-generated content.


“The COPIED Act will also put creators, including local journalists, artists and musicians, back in control of their content with a provenance and watermark process that I think is very much needed,” Cantwell said.


National Association of Broadcasters President Curtis LeGeyt said the bill will help to protect the authenticity of the vital local and national news that radio and television stations provide.


“Deepfakes pose a significant threat to the integrity of broadcasters’ trusted journalism, especially during an election year when accurate information is paramount,” he said. “We also applaud the prohibition on the use of our news content to train generative AI systems or to create competing content without express consent and compensation to the news creator.”


Newspaper trade groups, as well as the News/Media Alliance, echoed the NAB’s concerns.


The bill also has the support of SAG-AFTRA, which says deepfakes present a “real and present threat to the economic and reputational well-being” of its union members. “We need a fully transparent and accountable supply chain for generative Artificial Intelligence and the content it creates in order to protect everyone’s basic right to control the use of their face, voice and persona,” said Duncan Crabtree-Ireland, National Executive Director of SAG-AFTRA.


Several music industry organizations are also backing the bill, including the Recording Industry Association of America, Recording Academy and National Music Publishers’ Association, among others.


“Protecting the life’s work and legacy of artists has never been more important as AI platforms copy and use recordings scraped off the internet at industrial scale and AI-generated deepfakes keep multiplying at rapid pace,” RIAA Chairman Mitch Glazier said. “Leading tech companies refuse to share basic data about the creation and training of their models as they profit from copying and using unlicensed copyrighted material to generate synthetic recordings that unfairly compete with original works.”


During a Senate Commerce Committee hearing Thursday (July 11), Amba Kak, co-executive director of the AI Now Institute, said AI models are trained on large amounts of data, and those models can set in motion some of the “most harmful” and far-reaching data practices.


“We’re also seeing Big Tech firms store and use data collected in one context for other unanticipated purposes, using AI as a catchall justification,” Kak said. “Companies haven’t given clear answers to the question of whether or not they’re using internal data to train new AI models.”
