OpenAI, the maker of ChatGPT, is facing criticism as it plans to transition from a nonprofit charity to a for-profit business. Former employees of the company are now urging California and Delaware state officials to block this change, fearing that it could compromise the company’s mission to create artificial intelligence (AI) that benefits humanity.
Why Former Employees Are Concerned
A group of ex-OpenAI workers, led by Page Hedley, a former policy and ethics adviser, has raised concerns about the shift. They fear that if OpenAI succeeds in creating AI that surpasses human capabilities, the company's lack of accountability to the public could lead to harmful consequences.
Hedley explained, “I’m worried about who owns and controls this technology once it’s created.” His fears are shared by many in the AI community who worry that OpenAI might prioritize profits over safety and ethical considerations.
Nobel Prize Winners Back the Petition
The petition, which was sent to the attorneys general of California and Delaware, has garnered support from several prominent figures. Among them are three Nobel laureates: economists Oliver Hart and Joseph Stiglitz and AI pioneer Geoffrey Hinton, who are joined by fellow AI researcher Stuart Russell. These experts back the former workers in asking officials to protect OpenAI's original mission of developing AI for the benefit of humanity.
OpenAI’s Response to the Petition
In response to the petition, OpenAI has emphasized that any changes to its structure are intended to ensure that AI benefits the broader public. The company has proposed creating a public-benefit corporation, similar to other AI labs like Anthropic and Elon Musk’s xAI, while maintaining a nonprofit branch.
“We believe that as our for-profit branch grows, so will the nonprofit, allowing us to continue our mission,” OpenAI stated.
The Legal and Ethical Debate
This petition is not the first attempt to prevent OpenAI’s restructuring. Earlier this month, labor leaders and nonprofits raised concerns about protecting OpenAI’s charitable assets. California Attorney General Rob Bonta and Delaware Attorney General Kathy Jennings are being asked to review the situation to ensure the public’s interests are upheld.
OpenAI’s transition from a nonprofit research lab, founded with the goal of creating AI that benefits humanity, to a for-profit company has sparked controversy. The company now carries a reported valuation of $300 billion, and ChatGPT has reached 400 million weekly users.
Elon Musk’s Lawsuit and Concerns About Governance
The move towards a for-profit structure also has legal implications. Elon Musk, a co-founder of OpenAI, has filed a lawsuit against the company, accusing it of betraying the principles that led him to invest in the nonprofit. While some petition signatories support Musk’s legal action, others are skeptical, given Musk’s own AI ventures.
Risks of a For-Profit AI Company
Many critics, including former OpenAI engineers, are particularly concerned about the loss of safeguards if the company transitions to a for-profit entity. One key provision in OpenAI’s charter is the “stop-and-assist” commitment, under which the company pledges to stop competing with, and start assisting, a safety-conscious project that comes close to building advanced AI before it does. This commitment could disappear if OpenAI becomes a for-profit company, raising concerns about a race toward reckless AI development.
The Potential Dangers of Unchecked AI Development
Nisan Stiennon, a former AI engineer at OpenAI, warned, “OpenAI may one day build technology that could get us all killed.” He argued that the nonprofit structure was critical to ensuring that AI development was controlled and aligned with the public good, rather than driven by profit.
OpenAI’s Future: A Balancing Act?
The debate over OpenAI’s future is about more than a shift in corporate structure; it is about how to balance technological advancement with ethical responsibility. As AI continues to evolve, many believe OpenAI must remain accountable to the public, prioritizing safety and long-term benefits over short-term profits.
With both internal and external pressures mounting, the outcome of this legal and ethical battle could set a significant precedent for the future of AI development and governance.