Newsletter Wednesday, October 9
  • Anthropic has emerged as one of the primary contenders in the AI arms race.
  • The company was founded by a group of former OpenAI employees.
  • It’s sought to position itself as the safer, more responsible AI company.

Founded by a group of former OpenAI employees, Anthropic has emerged as a serious contender in the race for AI dominance. Its flagship product, Claude, is an AI chatbot that rivals OpenAI's ChatGPT.

Anthropic has sought to position itself as the safer, more stable alternative to OpenAI, and a company that risk-averse Fortune 500 companies can feel comfortable doing business with. That differentiation has only become more relevant as OpenAI continues to be mired in chaos and infighting.

Anthropic has raised over $7 billion over the past year from tech giants like Google, Amazon, and Salesforce and is reportedly raising a new round of funding that could value the company at as much as $40 billion, according to The Information.

Business Insider has put together a list of the most important people running Anthropic right now. The list is by no means exhaustive, but it includes thought leaders and researchers who are contributing knowledge to the understanding of machine learning, as well as the business people working to build Anthropic into an AI behemoth.

Dario Amodei, Co-Founder and CEO

Before leaving to start Anthropic, Amodei served as vice president of research at OpenAI where he led the development of breakthrough large language models like GPT-2 and GPT-3. His foundational research on reinforcement learning and scaling laws helped revolutionize the AI industry. Since its founding, Amodei has helped establish Anthropic as the safer, more business-friendly alternative to his former employer, OpenAI.

Dario holds a Ph.D. in biophysics and computational neuroscience from Princeton University, where he was a Hertz Fellow. He completed a postdoctoral fellowship at the Stanford University School of Medicine, focusing on applying machine learning to biomedical data.

Daniela Amodei, Co-Founder and President

After an early career in international development, Amodei led risk management and compliance at Stripe before serving as vice president of safety and policy at OpenAI. She's been in the trenches working to create safer, steerable, and reliable AI since the early days of this nascent industry.

She, along with her brother Dario, was part of the team that left OpenAI with the goal of creating a more responsible AI company. Today, she oversees Anthropic’s day-to-day operations and has been instrumental in shaping the company’s defining ethos around safety and security.

Jack Clark, Co-Founder and Head of Policy

Clark made a name for himself as a journalist at Bloomberg, where he was quite possibly the world's first and, for a while, only neural-networks reporter.

He would go on to serve as policy director at OpenAI and now leads a team at Anthropic that produces research into AI bias and persuasion and conducts critical red-teaming that pushes the boundaries of understanding around AI-related risk. He has proven himself to be one of AI’s most eloquent thought leaders, and his newsletter, Import AI, has become required reading for anyone interested in tackling big questions in machine learning.

Jared Kaplan, Co-Founder and Chief Science Officer

Kaplan, who holds a Ph.D. from Harvard, spent 15 years as a theoretical physicist in academia and continues to lecture on deep learning at Johns Hopkins University. While at OpenAI, he helped create GPT-3 and Codex, and his pioneering work on AI scaling laws has been foundational for the entire AI industry.

Kaplan helped pioneer Constitutional AI, an approach championed by Anthropic that aims to create AI systems that are constrained by and aligned with a set of predetermined principles.
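The core idea behind Constitutional AI is a critique-and-revise loop: the model's draft answer is checked against each principle in a fixed "constitution," then rewritten to address the critique. The sketch below is illustrative only; `generate` is a hypothetical stand-in for a real model call, and the principles shown are not Anthropic's actual constitution.

```python
# Minimal sketch of a Constitutional AI-style critique-and-revise loop.
# `generate` is a placeholder for a language model call; the principles
# are illustrative examples, not Anthropic's actual constitution.

PRINCIPLES = [
    "Choose the response that is least likely to be harmful.",
    "Choose the response that is most honest and transparent.",
]

def generate(prompt: str) -> str:
    # Placeholder: a real implementation would call a language model here.
    return f"[model output for: {prompt!r}]"

def constitutional_revision(user_prompt: str) -> str:
    response = generate(user_prompt)
    for principle in PRINCIPLES:
        # Ask the model to critique its own response against the principle...
        critique = generate(
            f"Critique this response against the principle '{principle}':\n{response}"
        )
        # ...then rewrite the response to address that critique.
        response = generate(
            f"Rewrite the response to address this critique:\n{critique}\n\n"
            f"Original response:\n{response}"
        )
    return response
```

In the published technique, a supervised model is also fine-tuned on these self-revisions, so the constraints end up baked into the model rather than applied at inference time.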

Chris Olah, Co-Founder and Research Lead, Mechanistic Interpretability

Olah is a researcher in the field of mechanistic interpretability, which is essentially the science of reverse-engineering neural networks into algorithms that humans can understand.

Olah’s research focuses on mapping out the artificial “neurons” that are part of AI models. His team achieved a significant milestone earlier this year by successfully identifying distinct neuron clusters within Anthropic’s most advanced models that are associated with specific concepts and tasks, such as detecting bias or identifying fraudulent emails.

By selectively activating or deactivating these neuron groups, the team demonstrated the ability to modulate the model’s behavior.
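Conceptually, this kind of steering amounts to nudging a model's hidden activations along a learned "feature direction." The toy sketch below, using random placeholder vectors rather than real model features, shows the arithmetic: project the activations onto the feature, then set that component to a chosen strength to amplify or suppress the concept.

```python
import numpy as np

# Toy illustration of feature steering: hidden activations are adjusted
# along a unit "concept vector." The vectors here are random placeholders,
# not features extracted from a real model.

rng = np.random.default_rng(0)
hidden = rng.normal(size=512)           # stand-in for a model's hidden state
feature = rng.normal(size=512)
feature /= np.linalg.norm(feature)      # unit vector for one learned concept

def steer(activations, direction, strength):
    # Measure the current component along the feature direction,
    # then shift the activations so that component equals `strength`.
    current = activations @ direction
    return activations + (strength - current) * direction

amplified = steer(hidden, feature, strength=8.0)   # "activate" the concept
suppressed = steer(hidden, feature, strength=0.0)  # "deactivate" it
```

In a real interpretability setting the feature directions come from analysis of the trained model (for example, via dictionary learning over activations), and the steering is applied at a specific layer during a forward pass.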

Sam McCandlish, Co-Founder and Chief Technology Officer

McCandlish oversees Anthropic’s Large Language Model (LLM) Organization, which encompasses the Training, Inference, and Core Resources divisions. He also serves as the company’s Responsible Scaling Officer, helping design and implement Anthropic’s responsible scaling policy.

McCandlish was previously a research lead at OpenAI, where he spearheaded AI safety initiatives and pioneered scaling laws that influenced technological breakthroughs like GPT-3. He holds a Ph.D. in theoretical physics from Stanford University, specializing in quantum gravity and tensor networks, and an MS in physics from Brandeis University, where he explored topics ranging from active matter to quantum field theory.

Tom Brown, Co-Founder and Head of Core Resources

Brown was part of the technical staff at OpenAI and Google DeepMind, and, like Anthropic’s other founders, his research, particularly his work on how computational power influences model performance, has helped lay the groundwork for the entire AI industry.

At Anthropic, Brown is focused on the compute side of LLM development, overseeing the company’s use of scarce, valuable GPUs and leading its strategic partnerships with cloud and chip providers. Brown is also an alum of the legendary tech incubator Y Combinator.

Krishna Rao, Chief Financial Officer

As any emerging AI company can attest, conducting groundbreaking research isn’t enough. Anthropic, like most of its competitors, is also tackling the challenge of building a viable, profitable global business. Rao, who was hired in May as the company’s first CFO, is helping to formulate Anthropic’s financial strategy during a period of explosive growth.

He cut his teeth at private equity giant Blackstone and the management consultancy Bain & Company before becoming head of business development at Airbnb. In that role, he helped the company raise billions in private capital and steered it through its high-profile IPO.

Mike Krieger, Chief Product Officer

A native of São Paulo, Krieger joined Anthropic as Chief Product Officer in May 2024. He currently leads the global product team, collaborating closely with customers across industries to shape the product vision and strategy for Anthropic’s flagship model, Claude.

In this role, he’s building deep relationships with users and helping to turn Anthropic’s research into a mass-market product.

He is a cofounder of Instagram and served as its CTO, where he grew the engineering organization to more than 450 employees across the company’s offices in Menlo Park, San Francisco, and New York.

Brian Israel, General Counsel

As generative AI opens up new avenues for technological innovation, it’s also raising fraught questions about authenticity, creativity, and some of the core assumptions of our legal system.

Like many of its competitors, Anthropic is deeply enmeshed in these questions, whether that's helping to guide the creation of new legal frameworks or facing lawsuits that could determine the future of the nascent AI industry. Israel is the point person helping Anthropic navigate this legal landscape. Before joining Anthropic, he spent nearly a decade in the US government, from the State Department to NASA, working to address the legal challenges created by emerging technologies.

Sam Bowman, Alignment

Bowman co-leads Anthropic’s alignment team along with Jan Leike, a safety researcher who recently left OpenAI. The team is responsible for confronting some of the thorniest questions associated with the development of advanced AI models.

It works on forecasting when AI models will develop potentially dangerous abilities like deception or scheming and how humans will be able to tell when they do. The alignment team is also developing evaluations that can flag when a model reaches a concerning level of skill and reliable techniques for controlling them when they get out of line. These capabilities are essential to Anthropic’s positioning of itself as the safer AI company. 


