OpenAI is pushing back against a first-of-its-kind bill to regulate the AI industry in California — prompting some of the company’s former employees to once again question its commitment to safety.
Earlier this week, OpenAI published a letter opposing recent changes made to SB 1047, a bill making its way through Sacramento that seeks to address the risks of AI.
“We join other AI labs, developers, experts and members of California’s Congressional delegation in respectfully opposing SB 1047 and welcome the opportunity to outline some of our key concerns,” the company wrote to California Sen. Scott Wiener, who introduced the bill in February.
Former staffers say OpenAI’s opposition to the legislation is the latest example of the company’s false promises, and they are taking aim at CEO Sam Altman.
“Sam Altman, our former boss, has repeatedly called for AI regulation. Now, when actual regulation is on the table, he opposes it,” former OpenAI employees William Saunders and Daniel Kokotajlo wrote in an open letter to California state leaders. “OpenAI opposes even the extremely light-touch requirements in SB 1047, most of which OpenAI claims to voluntarily commit to, raising questions about the strength of those commitments.”
Saunders, who worked at OpenAI for about three years, quit this past February and was one of 13 signatories of a letter published in June demanding action to keep AI companies accountable. Kokotajlo, who worked on the governance team, quit in April after losing confidence that the company would “behave responsibly” as it works to develop artificial general intelligence, a still theoretical form of AI that has humanlike reasoning capacity.
The authors’ chief complaint against OpenAI is that it has restricted whistleblower protections.
In its current state, SB 1047 restricts developers of select AI models “from preventing an employee from disclosing information, or retaliating against an employee for disclosing information, to the Attorney General or Labor Commissioner if the employee has reasonable cause to believe the information indicates the developer is out of compliance with certain requirements or that the covered model poses an unreasonable risk of critical harm.”
The authors said that OpenAI, meanwhile, “demanded we sign away our rights to ever criticize the company.” They said the existing federal legislation OpenAI is using to support its case is “woefully inadequate.”
They said they hoped their letter would help push the California legislature to pass SB 1047. “With appropriate regulation, we hope OpenAI may yet live up to its mission statement of building AGI safely,” they wrote.
In an email to Business Insider, a spokesperson for OpenAI said the letter misrepresented the company’s position on the bill.
“We strongly disagree with the mischaracterization of our position on SB 1047. As we noted in our letter to Senator Wiener, we support SB 1047’s focus on the safe development and deployment of AI, but firmly believe frontier AI safety regulations should be implemented at the federal level because of their implications for national security and competitiveness — that is why we’ve endorsed legislation in Congress and are committed to working with the U.S. AI Safety Institute.”
They added, “We appreciate the whistleblower provisions in the bill, which are in line with the protections we already have in place under OpenAI’s Raising Concerns policy.”