Newsletter Friday, September 20

Podcaster Lex Fridman said in a recent episode that most AI engineers he speaks with estimate between a 1% and 20% chance that artificial general intelligence will eventually kill off humans.

The estimate varies depending on whom you ask. For example, a recent survey of 2,700 AI researchers put the median estimate of AI causing human extinction at just 5%.

But Fridman said it's important to talk to people who put the likelihood much higher, like AI researcher Roman Yampolskiy, who told the podcaster in an interview released Sunday that he pegs the probability at 99.9% within the next hundred years.

The AI researcher teaches computer science at the University of Louisville and just came out with a book called “AI: Unexplainable, Unpredictable, Uncontrollable.”

He discussed the risks of AI for over two hours on Fridman’s podcast — and his predictions were pretty bleak.

He said the odds that AI wipes out humanity come down to whether humans can create highly complex software with zero bugs over the next 100 years. Yampolskiy finds that unlikely, since no AI model to date has been fully secured against people trying to make it do something it wasn't designed to do.

“They already have made mistakes,” Yampolskiy said. “We had accidents, they’ve been jailbroken. I don’t think there is a single large language model today, which no one was successful at making do something developers didn’t intend it to do.”

AI models released over the past two years have already raised red flags about misuse and misinformation. Deepfake technology has been used to generate fake pornographic images of female public figures, and AI robocalls imitating President Biden threatened to influence elections.

Google AI Overviews, based on Google's Gemini AI model, is the latest product rollout that didn't stick the landing. The feature was meant to show quick, informative summaries at the top of search results for certain queries. Instead, it went viral for producing nonsense answers, like suggesting adding glue to pizza or claiming that no country in Africa starts with the letter K (Kenya, for one, does).

Yampolskiy said that controlling AI would require a perpetual safety machine. Even if developers do a good job with the next few versions of GPT, he said, AI will continue to improve, learn, self-modify, and interact with different players. And with existential risks, "you only get one chance."

The CEO of ChatGPT developer OpenAI, Sam Altman, has suggested a “regulatory sandbox” where people experiment with AI and regulate it based on what “went really wrong” and what went “really right.”

Altman once warned — or maybe he was joking — back in 2015 that “AI will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies.”

More recently, Altman has said that what keeps him up at night is “all of the sci-fi stuff” related to AI, including the things that are “easy to imagine where things really go wrong.”

Since ChatGPT took the world by storm in November 2022, various predictions have been made about how AI could lead to humanity's downfall.

But Yampolskiy also cautioned that "we cannot predict what a smarter system will do." Humans facing AGI, or artificial general intelligence, would be like squirrels facing humans, he said: AI will come up with things we don't even know exist yet.

Yampolskiy predicts three broad categories of bad outcomes: everyone dies, everyone suffers and wishes they were dead, or humans lose their purpose entirely.

The last one refers to a world in which AI systems are more creative than humans and can perform every job. In that reality, it's unclear what humans would contribute, Yampolskiy said, echoing broader concerns about AI taking over human jobs.

Most people in the field acknowledge some level of risk from AI but put the odds of catastrophe far lower. Elon Musk, for instance, has predicted a 10% to 20% chance that AI will destroy humanity.

Former Google CEO Eric Schmidt has said the real dangers of AI, cyberattacks and biological attacks, will arrive within three to five years. And if AI ever develops free will, Schmidt has offered a simple solution: unplug it.
