I’m an AI expert – everyone on earth is going to DIE if we don’t stop developing bots fast and we should stop development NOW

A TOP AI expert has issued a stark warning about the potential for global extinction that super-intelligent AI technology could bring.
Eliezer Yudkowsky is a leading AI researcher and claims that “everyone on earth will die” if we don’t stop the development of superhuman intelligence systems.

The 43-year-old is a co-founder of the Machine Intelligence Research Institute (MIRI) and claims to know exactly how “terribly dangerous this technology” is.
He fears that when it comes to humans versus superhuman intelligence, the result is a “total loss,” he wrote in TIME.
As a metaphor, he says, this would be like “the 11th century trying to fight the 21st century”.
In short, people would lose dramatically.
On March 29, senior tech experts published an open letter titled “Pause Giant AI Experiments,” calling for an immediate six-month pause on training powerful AI systems.
It was signed by the likes of Apple’s co-founder Steve Wozniak and Elon Musk.
However, the American theorist says he declined to sign the petition because it “asks for too little to solve it”.
The threat is so great, he argues, that preventing AI annihilation “should be considered a priority over preventing a full nuclear exchange.”
He warns that the most likely outcome of building superhuman AI is that we will “create AI that doesn’t do what we want and doesn’t care about us or sentient life in general.”
We’re not ready, Yudkowsky admits, to teach the AI how to be caring because we “don’t know how right now.”
Instead, the sober reality is that to such a machine, “You are made of atoms that it can use for something else.”
“If someone builds too powerful an AI under the current conditions, I expect every single member of the human species and all biological life on Earth to die shortly thereafter.”
Yudkowsky points out that currently “we have no idea how to determine if AI systems are self-aware”.
This means that scientists could inadvertently “create digital minds that are truly conscious,” and then stumble into all sorts of moral dilemmas, since conscious beings arguably have rights and shouldn’t be owned.
Our ignorance, he warns, will be our downfall.
If researchers can’t tell whether they are creating a self-aware AI, he says, “You have no idea what you’re doing and it’s dangerous and you should stop doing it.”
Yudkowsky contends that it could take us decades to solve the problem of safety in superhuman intelligence — safety meaning “not killing literally everyone” — and in that time we could all be dead.
The expert’s key point is: “We are not prepared. We are not on course to be prepared in a reasonable timeframe. There is no plan.
“Advancement in AI capabilities is far ahead of advancement…in understanding what the heck is going on in these systems. If we actually do this, we will all die.”
To avoid this earth-shattering catastrophe, Yudkowsky believes the only way is to halt all large-scale AI training worldwide without exception, including for governments and militaries.
And if anyone breaks this agreement, he is in deadly earnest: governments “should be prepared to destroy a rogue data center by airstrike”.
“Make it clear that anyone who talks about arms races is a fool. That we all live or die together is not politics, but a fact of nature.”


Yudkowsky drives his point home, ending with: “If we continue like this, everyone will die”.
“Shut it down.”
https://www.thescottishsun.co.uk/tech/10455817/ai-expert-stop-bots-halt-development-now/