OpenAI Faces Backlash Over ChatGPT Voice Resembling Scarlett Johansson

OpenAI is facing significant scrutiny after the release of its new AI model, GPT-4o, whose voice mode includes an option called “Sky” that many listeners found strikingly similar to the voice of actress Scarlett Johansson. The controversy erupted after Johansson publicly objected to the similarity, prompting OpenAI to pause use of the Sky voice while it addresses the issue.

The issue began when OpenAI debuted its latest AI capabilities, including a set of five distinct voices for ChatGPT, during a live demonstration. One of these voices, Sky, immediately drew comparisons to Johansson’s portrayal of an AI assistant in the 2013 film “Her.” Users and media quickly noted the uncanny resemblance, leading to widespread speculation about whether Johansson’s voice had been used without her consent.

OpenAI’s Chief Technology Officer, Mira Murati, and CEO Sam Altman have both denied that Sky’s voice was designed to mimic Johansson. In a blog post, OpenAI stated, “Sky’s voice is not an imitation of Scarlett Johansson but belongs to a different professional actress using her own natural speaking voice.” The company emphasized that the voices were chosen from over 400 submissions through a rigorous casting process, aiming to create diverse and natural-sounding AI voices.

Johansson’s own account adds a significant layer to the controversy. According to her statement, Altman first approached her in September 2023 with an offer to voice the system, which she declined for personal reasons, and then contacted her agent just two days before the GPT-4o demonstration to ask her to reconsider. Despite her refusal, Johansson says OpenAI went ahead with a voice remarkably similar to her own.

Johansson expressed her frustration and disbelief in a public statement, noting that even her closest friends and family could not tell Sky’s voice apart from her own. She also criticized Altman for seemingly inviting the comparison with his one-word “her” tweet posted during the launch, an apparent nod to her role in the 2013 film.

In response, Johansson has hired legal counsel and demanded that OpenAI detail exactly how Sky’s voice was created. She emphasized the broader stakes of the issue, particularly in an era of deepfakes, for protecting personal likenesses and identities.

OpenAI has maintained that any resemblance to Johansson’s voice is coincidental and unintentional. The company reiterated that the voices, including Sky, were developed by professional actors and were not intended to mimic any celebrity. OpenAI’s selection process involved significant input from casting and directing professionals to ensure authenticity and diversity.

Despite these assurances, the backlash has been substantial. Critics argue that the incident raises important ethical questions about the use of AI-generated voices and the potential for misuse of personal likenesses. The situation has also reignited discussions about the need for clearer regulations and protections in the rapidly evolving field of artificial intelligence.

This controversy comes at a critical time for OpenAI, which has been under pressure to lead the generative AI market while balancing ethical considerations and regulatory compliance. The company’s rapid advancements and ambitious projects have positioned it at the forefront of AI innovation, but also under intense scrutiny from both industry peers and the public.

The fallout from the Sky voice issue has highlighted the challenges AI companies face in navigating intellectual property, consent, and ethical use. Johansson’s case is particularly notable given her high profile and her willingness to litigate, as in her lawsuit against Disney over the simultaneous streaming release of “Black Widow.”

OpenAI’s handling of the situation and its subsequent actions will likely influence broader industry practices and regulatory approaches. As AI technologies become more integrated into everyday life, ensuring that these systems are developed and deployed responsibly will be crucial.