
A former top safety executive at OpenAI is laying it all out.

On Tuesday night, Jan Leike, a co-leader of the artificial intelligence company’s superalignment team, announced he was quitting with a blunt post on X: “I resigned.”

Now, three days later, Leike shared more about his exit — and said OpenAI isn’t taking safety seriously enough.

In his posts, Leike said he joined OpenAI because he thought it would be the best place in the world to research how to “steer and control” artificial general intelligence (AGI), the kind of AI that can match or surpass human intelligence across a wide range of tasks.

“However, I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point,” Leike wrote.

The former OpenAI exec said the company should be devoting far more of its attention to issues of “security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics.”

But Leike said his team — which was working on how to align AI systems with what’s best for humanity — was “sailing against the wind” at OpenAI.

“We are long overdue in getting incredibly serious about the implications of AGI,” he wrote, adding that, “OpenAI must become a safety-first AGI company.”

Leike capped off his thread with a note to OpenAI employees, encouraging them to shift the company’s safety culture.

“I am counting on you. The world is counting on you,” he said.

Resignations at OpenAI

Both Leike and Ilya Sutskever, OpenAI’s chief scientist and the superalignment team’s other co-leader, left the company on Tuesday within hours of each other.

In a statement on X, OpenAI CEO Sam Altman praised Sutskever as “easily one of the greatest minds of our generation, a guiding light of our field, and a dear friend.”

“OpenAI would not be what it is without him,” Altman wrote. “Although he has something personally meaningful he is going to go work on, I am forever grateful for what he did here and committed to finishing the mission we started together.”

Altman didn’t comment on Leike’s resignation.

On Friday, Wired reported that OpenAI had disbanded the pair’s AI risk team. Researchers who were investigating the dangers of AI going rogue will now be absorbed into other parts of the company, according to Wired.

OpenAI didn’t respond to requests for comment from Business Insider.

The AI company, which recently debuted a new large language model, GPT-4o, has been rocked by high-profile shake-ups in the last few weeks.

In addition to Leike's and Sutskever’s departures, Diane Yoon, vice president of people, and Chris Clark, the head of nonprofit and strategic initiatives, have left, according to The Information. And last week, BI reported that two other researchers working on safety quit the company.

One of those researchers later wrote that he had lost confidence that OpenAI would “behave responsibly around the time of AGI.”

