The AI alarms warn about the wrong things

Fact: Artificial Intelligence

The idea of artificial intelligence (AI) is to imitate the human brain’s ability to acquire knowledge, draw conclusions, plan, solve problems or interpret results.

AI is common in fiction and often takes forms far from where the technology actually stands. A robot that thinks like a human, so-called general artificial intelligence, is in scientific terms considered essentially impossible. Yet AI is becoming increasingly common in everyday life, in everything from the chatbots used in customer service to analytical AI that can derive a diagnosis or an optimal route from large amounts of data.

The research area was named in the 1950s and draws on mathematics, information technology, philosophy, linguistics, psychology, cognitive science and brain research. The tools are usually algorithms of various kinds, where an algorithm can be described as a systematic procedure that specifies, in a number of steps, how a certain problem should be solved or analyzed. One example is a mobile map service that suggests the fastest driving route to a destination based on available data on distances and traffic conditions.

Source: National Encyclopedia, Nature
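The route-finding example in the fact box corresponds to a classic shortest-path algorithm. Below is a minimal sketch using Dijkstra's algorithm; the road network and travel times are invented for illustration and are not taken from any real map service.

```python
import heapq

def fastest_route(graph, start, goal):
    """Dijkstra's algorithm: find the quickest path between two places.

    `graph` maps each place to a dict of {neighbor: travel_time_minutes}.
    Returns (total_minutes, path) or None if the goal is unreachable.
    """
    # Priority queue of (total_minutes_so_far, place, path_taken)
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        minutes, place, path = heapq.heappop(queue)
        if place == goal:
            return minutes, path
        if place in visited:
            continue
        visited.add(place)
        for neighbor, cost in graph.get(place, {}).items():
            if neighbor not in visited:
                heapq.heappush(queue, (minutes + cost, neighbor, path + [neighbor]))
    return None

# Invented road network; edge weights are current travel times in minutes.
roads = {
    "Home": {"Highway": 10, "CityStreet": 5},
    "Highway": {"Office": 15},
    "CityStreet": {"Highway": 3, "Office": 25},
}
print(fastest_route(roads, "Home", "Office"))
# -> (23, ['Home', 'CityStreet', 'Highway', 'Office'])
```

Because the queue is ordered by accumulated travel time, the slower direct route via CityStreet (30 minutes) loses to the detour over the Highway (23 minutes), just as a map service re-routes around congestion.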

Artificial intelligence (AI) will take over the world. No, exterminate humanity. Or will it? Interest in, and reporting on, AI has exploded recently.

In its wake, several warning flags have been raised – the latest by a long line of researchers and power players in the field, including Microsoft founder Bill Gates and Open AI CEO Sam Altman, whose company is behind the super-popular Chat GPT.

They argue that the risk of extinction via AI should be treated as seriously as threats from future pandemics or nuclear war.

“It’s clear that we have to be careful, but I’m a little skeptical when they mix in such clear threats,” says Daniel Gillblad, head of research at AI Sweden, which is the national Swedish AI center.

Gillblad is supported by Virginia Dignum, professor of responsible AI at Umeå University:

— I think it spreads fear unnecessarily. And it comes from the very people driving the development. If they are really worried about where things are heading, they only need to look at themselves in the mirror, she says, and continues:

— It’s easy to scare people. I think they need to take responsibility instead. The dystopian picture they paint is highly unlikely and very far-fetched, if it ever happens at all. There are other dangers that are much more here and now, such as systems becoming biased, climate change, and those already in power gaining even greater influence.

Microsoft has invested billions in AI technology, among other things in the company Open AI. But no “human robot” is imminent, for Microsoft or anyone else. Archive image.

Human consciousness

Both Dignum and Gillblad point out that it has become easy to attribute to various AI services characteristics they do not have.

Treating a service like Chat GPT as if it were alive is part of the current problem, says Magnus Johnsson, cognitive researcher at Malmö University:

— The distance to what we call general artificial intelligence is longer than most people imagine. I’m a little surprised that some big names are making such a big deal of the development we’re seeing now, he says.

Johnsson argues that the “consciousness” that Chat GPT, for example, displays is nothing more than the service computing the next logical step, given its underlying technology.

— It’s a language model; it just tries to predict the next step in the conversation. It is not conscious in the way a human is.
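Johnsson's point about "just predicting the next step" can be illustrated with a toy next-word predictor. This is a drastically simplified stand-in for the large neural language models behind services like Chat GPT, not how they actually work; the training text is invented.

```python
from collections import Counter, defaultdict

# Toy "language model": count which word most often follows each word
# in a tiny invented training text, then continue a prompt word by word.
training_text = "the cat sat on the mat the cat sat on the floor the cat ran".split()

following = defaultdict(Counter)
for current, nxt in zip(training_text, training_text[1:]):
    following[current][nxt] += 1

def continue_text(prompt_word, steps=4):
    """Repeatedly append the likeliest next word, with no understanding involved."""
    words = [prompt_word]
    for _ in range(steps):
        options = following.get(words[-1])
        if not options:
            break  # never seen this word followed by anything
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(continue_text("the"))  # -> the cat sat on the
```

The output looks fluent, but the program only reproduces statistical patterns from its training text, which is the essence of Johnsson's observation, scaled down by many orders of magnitude.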

Dangers here and now

Gillblad, Dignum and Johnsson disagree on whether AI will ever be able to develop “consciousness”. However, all three see a need to frame the AI area more clearly, here and now.

— There are many different challenges because there are many different applications for AI. Generally, it is about things like ensuring that AI is developed with the right data, so that problems such as bias do not arise when something is then used on a larger scale, says Daniel Gillblad.
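Gillblad's point about developing AI with the right data can be illustrated with a deliberately skewed toy dataset (all groups, outcomes and counts below are invented): a naive rule learned from such data simply reproduces the imbalance of its training examples.

```python
from collections import Counter

# Invented, deliberately skewed "training data": loan outcomes where
# group B is barely represented, so the rule mostly learns from group A.
training = [
    ("A", "approve"), ("A", "approve"), ("A", "approve"),
    ("A", "approve"), ("A", "deny"),
    ("B", "deny"),  # a single example for all of group B
]

def majority_rule(data):
    """Predict, per group, the most common outcome seen in training."""
    by_group = {}
    for group, outcome in data:
        by_group.setdefault(group, Counter())[outcome] += 1
    return {group: counts.most_common(1)[0][0] for group, counts in by_group.items()}

model = majority_rule(training)
print(model)  # -> {'A': 'approve', 'B': 'deny'}
```

Everyone in the underrepresented group is denied on the strength of one data point: the "model" is not malicious, it just mirrors the data it was given, which is exactly the kind of problem that surfaces when a system trained this way is used at larger scale.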

— Not everyone needs to understand AI. I don’t understand how an airplane can fly, but I trust that when I get on board, things will work out. We must be able to have that kind of trust in AI. And then we need more people, including politicians and governments, who can hold the developers accountable for the choices they make, says Virginia Dignum.

— AI in itself is not dangerous. But it can be dangerous if you see it as a tool that, like nuclear weapons, can end up in the wrong hands. If it takes a lot of money or power to acquire a certain tool, problems can arise. But that has been true since the days of the flint axe, says Magnus Johnsson.

Sam Altman, CEO of Open AI, the company behind the Chat GPT service, which has seen interest in AI grow like an avalanche in recent months. Archive image.
