Newsletter Tuesday, November 5

Besides making pithy exit announcements on X, they haven’t said much about why they’re worried about OpenAI’s approach to development — or the future of AI.

That is, until earlier this week, when Leopold Aschenbrenner, a researcher fired from OpenAI in April, published his thoughts on the AI revolution in an epic 165-page treatise.

Aschenbrenner worked on OpenAI’s superalignment team, which was tasked with mitigating AI risks. He said OpenAI fired him for leaking information about the company’s readiness for artificial general intelligence.

For his part, Aschenbrenner said the information he shared was “totally normal” and that OpenAI may have just been looking for a reason to fire him. He was one of several employees who refused to sign a letter calling for CEO Sam Altman’s return after the board briefly ousted Altman last year.

Aschenbrenner’s essay doesn’t appear to include sensitive details about OpenAI. Instead, as Aschenbrenner writes on the dedication page, it’s based on “publicly available information, my own ideas, general field knowledge, or SF gossip.”

Business Insider uploaded the tract to OpenAI’s GPT-4 model. We asked it to summarize the work and relay the most significant takeaways. We also gently requested that the bot keep its summary clear and concise.

Here’s ChatGPT’s 57-word summary of Aschenbrenner’s essay

ChatGPT distilled the essay into 57 words that outlined Aschenbrenner’s views on the evolution of AI.

“Leopold Aschenbrenner’s essay discusses the transformative potential of artificial general intelligence (AGI) and superintelligence, and forecasts significant advancements in AI technology in the near future. Aschenbrenner, formerly associated with OpenAI, outlines how the progression from current AI models like GPT-4 to AGI could occur much faster than anticipated, based on trends in computational power and algorithmic efficiency.”

ChatGPT then offered four key takeaways from Aschenbrenner’s essay

Rapid Progress in AI

“Aschenbrenner argues that AI development is accelerating at an unprecedented rate. He predicts that by 2027, AI models could reach the capabilities of human AI researchers and engineers, potentially leading to an intelligence explosion where AI surpasses human intelligence.”

Economic and Security Implications

“The essay highlights the immense economic and security implications of these advancements. Aschenbrenner points out that trillions of dollars are being invested into developing the infrastructure needed to support these AI systems, such as GPUs, data centers, and power generation. He also emphasizes the critical need for securing these technologies to prevent misuse, particularly by state actors like the CCP.”

Technical and Ethical Challenges

“The essay discusses the significant challenges in controlling AI systems smarter than humans, referring to this as the ‘superalignment’ problem. Managing this will be crucial to prevent catastrophic outcomes.”

Predictions and Societal Impact

“Aschenbrenner suggests that few people truly understand the scale of change that AI is about to bring. He discusses the potential for AI to reshape industries, enhance national security, and pose new ethical and governance challenges.”

There’s more.

As Aschenbrenner wrote, there are probably only a few hundred people, most of them in AI labs in San Francisco, who have “situational awareness” about the industry. These people will likely have the best insight into how AI will reshape the years ahead.

Aschenbrenner, as one of them, seems to believe AI will assume a critical role in national security.

We followed up our original prompt by asking ChatGPT to summarize the predictions Aschenbrenner made in his essay.

Here’s how ChatGPT broke it down.

  1. AGI by 2027: “Aschenbrenner predicts that artificial general intelligence (AGI) is strikingly plausible by 2027, highlighting the rapid progress from GPT-2 to GPT-4, which saw AI models advancing from preschool-level to smart high-schooler abilities in just four years. He expects a similar leap in the next few years based on consistent improvements in compute and algorithmic efficiency.”
  2. Superintelligence Following AGI: “Post-AGI, the author anticipates an ‘intelligence explosion,’ where AI rapidly advances from human-level to superhuman capabilities. This transition is expected to be fueled by AI’s ability to automate and accelerate its own research and development.”
  3. Trillion-Dollar AI Clusters: “Economically, Aschenbrenner suggests that the AI sector will see an increase in investment into trillion-dollar compute clusters as corporations and governments prepare for the implications of AGI and superintelligence.”
  4. National and Global Security Dynamics: “There are predictions about intense national security measures being enacted to manage and control AI developments. The competition, particularly with the Chinese government, could intensify, possibly leading to an ‘all-out war’ if not managed properly.”
  5. Superalignment Challenges: “One of the most critical predictions is the struggle with ‘superalignment’ — the challenge of keeping superintelligent AI aligned with human values and interests. This problem is anticipated to be one of the central hurdles as AI reaches and surpasses human intelligence levels.”
  6. Societal and Economic Transformations: “Aschenbrenner expects AI to have a profound impact on society and the economy, potentially leading to a restructuring of industries and the job market due to AI’s capability to perform tasks currently managed by humans.”
  7. The Project Involvement by the US Government: “Predictions include the US government becoming significantly involved in AI development by around 2027-2028 through a dedicated AGI project, likely due to the strategic importance of AI technology.”
  8. Technological Mobilization: “He anticipates a mobilization of technological and industrial resources similar to historical wartime efforts, focusing on AI and its supporting infrastructure as a priority for national policy.”

Here’s the full prompt we used:

Role: Act as an editor.
Task: Read the attached essay about former OpenAI researcher Leopold Aschenbrenner.
Instructions:
  • Summarize the essay, highlighting the most significant takeaways.
  • Focus on key contributions, research areas, and any notable impacts on the field.
  • Ensure the summary is concise yet comprehensive, providing a clear understanding of Aschenbrenner’s work and influence.

