California Adopts New Law Targeting AI Misuse In Ads.
- Inside Audio Marketing

- Oct 15

Even before the federal government shutdown, there was little progress on federal legislation that would protect someone's voice, image and likeness from being used in the creation of advertisements or other goods and services without their consent. States continue to fill that void. California has become the latest, as Gov. Gavin Newsom has signed a bill giving someone whose voice or image is used without their permission the ability to take legal action for damages. The bill also allows them to seek an injunction or temporary restraining order, which, if approved by a court, would require the respondent to comply within two business days. Supporters say that will allow courts to order rapid takedowns while a case proceeds, so victims aren't left exposed to ongoing harm.
Silicon Valley Sen. Dave Cortese (D) sponsored the bill (SB 683). He has said it is designed to strengthen existing privacy protections and prevent exploitation, particularly in situations where individuals' personal information or images are used without their consent.
The legislation exempts the use of a voice, image, or name in connection with any news, public affairs, or sports broadcast, or any political campaign. It also explicitly carves out broadcasters from legal liability: the bill says the ad-supported media in which the ads are placed, including radio, television, newspapers, magazines, billboards and transit ads, will not face liability unless it is established that their owners or employees had knowledge of the unauthorized use of the person's name, voice, signature, photograph, or likeness in the advertisement.
SAG-AFTRA is among the groups that have advocated for the protections, with the union's staff and legal experts helping to draft, promote and lobby for the bill's passage.
“SB 683 will help remove unlawful content quickly, protecting the individuals whose voice or likeness is being misappropriated and the consumers who may be targeted by fraud,” said National Executive Director & Chief Negotiator Duncan Crabtree-Ireland.
SAG-AFTRA says the new law builds on last year's momentum, when California adopted two laws to curb the use of AI-generated voices and faces. One, AB 2355, restricted the use of deepfakes in campaign ads and made California the first state to include AI under its campaign transparency rules, mandating disclaimers when AI is used in political ads, including those on radio. The other, AB 1836, prohibited commercial use of digital replicas of deceased performers in films, TV shows, video games, audiobooks, sound recordings and more without first obtaining the consent of those performers' estates.
“This is a clear statement that California stands with performers against AI abuse,” said SAG-AFTRA President Sean Astin.
Several other states, including New York, New Mexico and Michigan, have also put limits on AI's use in political ads on radio and other media. In July, Pennsylvania adopted new protections against deepfakes.
The National Association of Broadcasters says it doesn’t oppose the state laws. However, the group has told Inside Radio that radio and TV stations should not be dragged into an enforcement role. A spokeswoman said that forcing stations to pre-screen programming or place AI disclosures on ads or other content would be a “near-impossible task.” NAB instead believes the duty should fall on the content’s creator or sponsor.



