Scarlett Johansson turns down OpenAI role, cites ‘strange’ feelings for her children and personal values

Scarlett Johansson has declined an offer from OpenAI to voice its chatbot, citing personal values and concerns for her children. The actress expressed her discomfort with the evolving tone of AI technology and its implications for her family’s future.

Short Summary:

  • Scarlett Johansson rejected OpenAI’s offer to voice its chatbot due to personal concerns.
  • She raised alarms about AI technology’s implications, especially deepfakes.
  • OpenAI CEO Sam Altman clarified the company’s intentions behind the chatbot’s voice.

Scarlett Johansson, the acclaimed actress known for voicing the AI assistant Samantha in the 2013 film Her, recently made headlines when she announced her decision to turn down an offer from OpenAI. The offer involved voicing the company’s latest AI chatbot, but Johansson had reservations, notably about how such a role might affect her children.

“I felt I did not want to be at the forefront of that,” Johansson stated in an interview with The New York Times. “I just felt it went against my core values.”

This story gained traction when Johansson publicly expressed her outrage over OpenAI’s new voice, dubbed “Sky,” which bore a striking resemblance to her natural voice. She described her feelings as a mix of anger and disbelief when she first heard the voice demo. Addressing the matter, she said:

“When I heard the released demo, I was shocked, angered and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine.”

OpenAI’s CEO, Sam Altman, was keen to clarify that the voice used for Sky was not intended to mimic Johansson’s voice. He stated:

“The voice of Sky is not Scarlett Johansson’s, and it was never intended to resemble hers… Out of respect for Ms. Johansson, we have paused using Sky’s voice in our products.”

This incident raises a broader concern regarding the implications of artificial intelligence, especially as it interacts with the realm of human identity and personal relationships. Johansson is particularly worried about the ramifications of advanced technologies like deepfakes, which she warned represent a “dark wormhole you can never climb your way out of.”

Deepfake Technology: A Threat to Individual Autonomy

Johansson’s apprehensions are rooted in the fears many share about deepfake technology. She indicated how the digital age has rendered personal security extraordinarily precarious, particularly regarding the misuse of images and likenesses for malicious purposes:

“If your ex-partner is putting out revenge, deepfake porn, your whole life can be completely ruined.”

Her comments underscore a critical issue faced by the entertainment industry and society alike: how to navigate a rapidly evolving technological landscape in which individuals can easily be misrepresented. This is part of the broader discussion about AI ethics and the responsibilities of those who create and use these technologies.

Johansson’s Personal Sentiments

It’s significant to note that Johansson’s decision was also driven by a deeply personal rationale. As a mother, she is particularly cognizant of how her career choices might impact her children:

“I also felt for my children it would be strange. I try to be mindful of them.”

This sentiment resonates with many parents in the industry who constantly grapple with the dual demands of career ambitions and family values. Johansson has a daughter, Rose, with her ex-husband Romain Dauriac, and a son, Cosmo, with her husband Colin Jost.

Legal and Regulatory Concerns

The incident has sparked a conversation not only about individual comfort with AI but also about the legal frameworks that should be in place to protect public figures and their likenesses. Johansson’s team has sought clarity from OpenAI on how the Sky voice was developed:

“I look forward to resolution in the form of transparency and the passage of appropriate legislation to help ensure that individual rights are protected.”

As AI technologies proliferate, the need for policies governing their development and use becomes critical. Johansson’s public stance is not merely a personal grievance; it reflects a larger industry-wide concern. Many in entertainment are now advocating for regulatory frameworks that explicitly address AI-related issues, including voice replication and likeness rights.

OpenAI’s Position

In the wake of Johansson’s outcry, OpenAI appears to have pivoted its approach to AI-generated voices. Initially, the company promoted the Sky voice vigorously; following the wave of backlash, it paused the voice’s use:

“Out of respect for Ms. Johansson, we have paused using Sky’s voice in our products. We are sorry to Ms. Johansson that we didn’t communicate better.”

Altman maintained that the voice was recorded by a different actress, emphasizing the complexities involved in voice generation and AI. He said the company aimed to create a comforting experience for users who may be uneasy about AI technology, and that its intent was not to imitate anyone but to explore innovative applications of AI in communication.

Implications for the Future

Experts have expressed concern over how human-like voices in AI systems may shape users’ relationships with them. Voice-based communication fosters intense connections, which can blur the line between human interaction and artificial engagement:

“Communication by voice is really intimate, really impactful. It allows the AI to express subtleties.”

The nuances of how AI technologies participate in our lives, particularly regarding emotional responses, highlight the need for deeper discussions about their societal implications. It mirrors themes presented in films like Her, where human emotions intertwine with AI in unexpected ways.

Conclusion: A Call for Caution

Johansson’s refusal to engage with OpenAI’s chatbot initiative marks a pivotal moment in the dialogue surrounding celebrity involvement with AI technologies. Her transparent refusal, driven by personal concerns, is a reminder that the intersection of AI and personal identity is fraught with ethical and emotional complexities. As AI continues to evolve, we must prioritize privacy, security, and ethical considerations in technological advancement. In a landscape where AI can rapidly replicate voices and likenesses, it is imperative to listen to the voices that caution us toward responsible use and regulation.