Inside Audio Marketing

WaitWhat Proposes Podcasters Sign Pledge Committing To Disclose When AI Is Used.


Artificial intelligence is still in its early stages, but over the longer term AI is sure to play a larger role in what listeners hear on their favorite podcasts. That has led WaitWhat, the media company behind the Masters of Scale podcast, to draft what it is calling a Podcast Listener’s Bill of Rights, a set of guidelines for ethical podcast and audio consumption in the age of AI.


“We are committed to transparency, but those standards are hardly codified in our industry,” WaitWhat CEO Jeff Berman explained in a May 23 episode of Masters of Scale. In the episode, the team unveiled a full vocal clone of founding Masters of Scale host Reid Hoffman, a synthetic voice dubbed “Reid-ish” whose debut has opened a bigger discussion of AI’s use inside WaitWhat.


“Even for the positive use cases, the deployment of a synthetic voice clone raises important questions around disclosure, whether and when people should know they’re hearing AI-generated audio. This entire behind-the-scenes episode exists because we are committed to transparency, but those standards are hardly codified in our industry,” Berman said. “One of the big questions that we are wrestling with – and hopefully everyone in this industry is wrestling with – is what do the listeners have a right to know?”


Producers who sign the Podcast Listener’s Bill of Rights drafted by WaitWhat agree to inform listeners when a host’s or guest’s voice has been synthesized or cloned using AI tools, including when the cloned voice is used for pick-ups to correct errors and stumbles, for a promo or ad, or for the narration of an entire scripted episode. Those who sign also pledge to tell listeners when any voice heard in the content of a podcast does not come from the human associated with it but has instead been generated using a text-to-voice or voice-to-voice AI platform. They also pledge to alert listeners when AI is used to alter words, whether for clarity or accuracy or because a producer wants to clean up a phrase or erroneous word used during a recording. Listeners will also be alerted when a large language model (LLM) such as ChatGPT has been used to generate a significant portion of a podcast script, although the pledge leaves what qualifies as “significant” to each signer’s judgment.


Some examples of disclosures that an episode could use include, “Some of the voices featured in this episode were created and/or modified using AI. We have full permission and consent from all parties involved,” or “The script for this podcast was written by generative AI tools” or “This episode contains vocal audio that does not belong to a specific person, but is entirely generated by AI.”


WaitWhat says the guidelines will naturally change over time as technology and consumers evolve, but it sees its initial Bill of Rights as a way to create a dialogue, promote transparency, and avert pre-emptive government regulation that may stifle future innovation. “Given that there are currently no industry-wide standards for using AI in audio, it is our goal to establish those standards and create an ecosystem in which the public is well-protected,” it says.


To date, 15 podcasters have signed the pledge.
