According to experts, AI could mean the extinction of humanity

Earlier, we already saw an urgent open letter from experts telling governments that real action must be taken on the development of artificial intelligence: if necessary, with a kind of pause to put things in order. Now there is something new: AI experts have stated that AI is capable of making humans extinct, and that politicians should treat it as seriously as, say, a pandemic or the threat of nuclear war.

Statement on AI Risk

The statement is called the Statement on AI Risk, and it reads, among other things, that mitigating the risk of human extinction from AI should be a global priority. Although the experts still frame it as a possibility, it is creeping closer and closer: the experts themselves can hardly keep up with how fast AI is developing, even though they belong to the very group leading innovation in artificial intelligence. But leading seems to be turning more and more into suffering. The fact that the alarm has been sounded for the second time in such a short span speaks volumes.

By the way, there was another alarm bell recently, albeit from just one man: Geoffrey Hinton, one of the founding fathers of AI development, left his job at Google so that he could freely warn about AI. The phrase comes to mind more and more often: "We've created a monster." But perhaps that is too mild: humanity can usually tame a monster, but this is a self-learning system that is already smarter than many people, and it keeps on learning in the meantime. Prominent figures in the AI world, including Sam Altman, CEO of OpenAI (the company behind ChatGPT), and Yoshua Bengio, who also stood at the cradle of AI, have put their names to this statement.

Who is warning whom?

It seems a bit strange to warn politicians and somehow hold them accountable, when the people sounding the alarm built the technology themselves. Rutte and company did not put AI on our computers. On the other hand, it is also logical that governments are being approached: they must create the laws and regulations that should prevent abuse and keep AI under control. The disadvantage is that politicians know little about the subject and always take a long time to make decisions. If AI is moving too fast for its own makers, imagine how much too fast it is moving for politicians.

The question is whether these urgent letters help. Shouldn't the experts be talking to each other about how AI is developed? Shouldn't the hype surrounding it, and the fierce competition, be reined in? To some extent they are: many experts admit that people do worry about AI 'doom', but they discuss those concerns more often in private at home than they put them on the table in the boardroom.

The danger of AI

There is, of course, a lot of pressure on tech companies. But that pressure should not mean we end up in a kind of Terminator-like world, where important information is no longer shared via the internet and we go back to carrier pigeons to warn of the enemy. A year ago this would have sounded like a strange fantasy, but let's be honest: we don't know where AI is going. Whether writing urgent letters helps is the question: after all, who feels addressed? And who should feel addressed? Maybe soon that will be AI itself…

Laura Jenny

When she’s not typing, she’s traveling around the wonderful world of entertainment or some cool place in the real world. Mario is the man of her life,…
