
Whistleblowers Urge SEC to Examine OpenAI’s Allegedly Illegal Non-Disclosure Practices

Raising fresh concerns about transparency in the AI sector, whistleblowers have called on the SEC to investigate OpenAI’s allegedly illegal non-disclosure agreements.

Short Summary:

  • Whistleblowers filed a complaint against OpenAI for restrictive NDAs.
  • SEC urged to investigate and possibly penalize OpenAI.
  • Concerns over inadequate whistleblower protections in the AI industry.

Whistleblowing can be a risky endeavor, especially when it involves tech giants like OpenAI. A recent complaint filed with the U.S. Securities and Exchange Commission (SEC) has shaken the artificial intelligence community, alleging that OpenAI’s non-disclosure agreements (NDAs) impose burdensome restrictions on employees. According to a letter provided to Reuters by the office of Senator Chuck Grassley, these agreements may limit whistleblowers’ ability to raise concerns with federal authorities.

The letter alleges that OpenAI’s NDAs may have required employees to forfeit their rights to whistleblower compensation and imposed penalties on those seeking to report issues to regulators. An excerpt from the letter reads:

“Given the well-documented potential risks posed by the irresponsible deployment of AI, we urge the Commissioners to immediately approve an investigation into OpenAI’s prior NDAs and to review current efforts apparently being undertaken by the company to ensure full compliance with SEC rules.”

The authors of this letter, who remain anonymous, have made a strong case for SEC intervention. They are urging the agency to inspect every contract containing a non-disclosure clause, which may encompass employment, severance, and investor agreements. This plea is underpinned by allegations that OpenAI’s agreements have crippled the ability of employees to air grievances regarding potential securities violations.

Senator Grassley has reiterated the critical nature of this issue, stating:

“Artificial intelligence is rapidly and dramatically altering the landscape of technology as we know it. OpenAI’s policies and practices appear to cast a chilling effect on whistleblowers’ right to speak up and receive due compensation for their protected disclosures.”

The complaint coincides with internal turmoil at OpenAI, highlighted by the resignation of high-profile employees such as William Saunders, who left the company in February. He is not alone: chief scientist Ilya Sutskever and senior safety researcher Jan Leike have also exited in recent months, with Leike publicly citing concerns about the company’s safety culture.

Saunders, who spoke to TIME, underscored the peril of speaking out against the company:

“By speaking to you, I might never be able to access vested equity worth millions of dollars. But I think it’s more important to have a public dialogue about what is happening at these AGI companies.”

This perspective resonates with many former employees who believe the stakes are incredibly high. In an open letter, 13 current and former employees of OpenAI and Google DeepMind echoed Saunders’s sentiments, advocating for stronger whistleblower protections. They argue that the rapid advancement in AI technology necessitates robust channels for expressing safety concerns.

The whistleblowers also point out that OpenAI required employees to obtain the company’s prior consent before disclosing information to federal regulators, which calls into question the extent of transparency and accountability within the company. Unlike whistleblowers in regulated sectors such as finance, those in the AI industry lack specific legal protections. Daniel Kokotajlo, a former OpenAI employee, emphasized this point:

“Preexisting whistleblower protections don’t apply here because this industry is not regulated, so there are no rules about a lot of the potentially dangerous stuff companies could be doing.”

In response to the growing clamor for transparency, OpenAI has taken some steps. The company formed a Safety and Security Committee, led in part by CEO Sam Altman, to oversee the safety of its AI models. However, critics remain skeptical about the effectiveness of such internal oversight structures. As Saunders puts it:

“Accountability requires that if some organization does something wrong, information about that can be shared. And right now, that is not the case.”

The former employees have criticized not only non-disparagement agreements but also the broader issue of confidentiality agreements that could limit public discourse. Notably, OpenAI spokeswoman Lindsey Held offered a guardedly positive response in The New York Times:

“We’re proud of our track record providing the most capable and safest A.I. systems and believe in our scientific approach to addressing risk. We agree that rigorous debate is crucial given the significance of this technology, and we’ll continue to engage with governments, civil society, and other communities around the world.”

This public stance marks a significant departure from earlier policies, under which strict confidentiality was imposed even on employees voicing safety concerns. The calls for transparency have had ripple effects, influencing major stakeholders across the broader tech industry.

Notably, Trillium Asset Management, a major shareholder in Alphabet (Google’s parent company), recently filed a shareholder resolution pushing for enhanced whistleblower protections. Their resolution argues that whistleblower safeguards are not only ethical but also beneficial for business.

Trillium’s chief advocacy officer, Jonas Kron, is resolute in his stance:

“Whistleblowers protect investors, not management. You naturally expect management not to be supportive of whistleblower protections because it’s not in their narrow personal interest. Whistleblowers are always an embarrassment to management and always a way for investors to protect the long term value of the company.”

Reflecting on historical events, such as the controversial firings of Dr. Margaret Mitchell and Dr. Timnit Gebru from Google’s Ethical AI team, it becomes evident that whistleblower protections are far from adequate in the tech industry. These high-profile exits have reignited the debate over the moral responsibilities of tech giants developing influential technologies.

Based on Trillium’s observations, companies with robust whistleblowing mechanisms tend to face fewer government fines and material lawsuits, suggesting that transparency and accountability directly benefit businesses. This is a crucial point for AI startups and established giants alike, which must build trust not only with their employees but also with the public.

The broader takeaway from these developments is clear: as artificial intelligence continues to expand its footprint across various sectors, the call for ethical practices becomes louder. Proper oversight and robust whistleblower protections aren’t just moral imperatives; they are strategic necessities for sustainable growth.

These issues aren’t isolated within OpenAI or Alphabet. They represent a significant challenge for the tech industry at large, one that is likely to shape the future of AI-driven advancements.

In closing, the unfolding saga at OpenAI underscores the importance of transparent practices and protective measures for whistleblowers. The tech community, regulators, and the public must remain vigilant to ensure that developments in artificial intelligence proceed responsibly, with an unwavering commitment to ethical conduct.