“The world-destroying terminator is not coming”


Two influential parties highlighted the threats posed by artificial intelligence on Wednesday.

Tesla and Twitter owner Elon Musk and numerous other technology influencers have warned about the dangers of artificial intelligence in an open letter. The letter circulating online has been signed by, among others, Apple co-founder Stephen Wozniak and the world-renowned Israeli historian Yuval Noah Harari.

The letter calls for a six-month pause in the development of the new technology so that its applications cannot, for example, spread false information around the world.

The signatories of the letter are also concerned that artificial intelligence may develop uncontrollably and eventually replace humans, which would lead to the destruction of our current civilization.

– Such decisions (about the future of the world) should not be handed over to technology leaders whom no one has elected, the letter warns.

On Wednesday, the American investment bank Goldman Sachs presented another threat picture.

The bank’s economists predicted that artificial intelligence could eliminate up to 300 million jobs worldwide. For example, some of the work of lawyers could be automated.

Researchers are surprised by the warning

Finnish artificial intelligence researchers are a little surprised by the warnings.

The threat scenarios now being painted seem unrealistic, says Professor Petri Myllymäki.

– No world-destroying, autonomous terminator is coming, Myllymäki reassures.

Myllymäki has studied artificial intelligence at the University of Helsinki since the late 1980s, and according to him, the threats related to applying artificial intelligence have been known for a long time.

– Behind every artificial intelligence there is a human being, and the artificial intelligence remains under human control, Myllymäki assures.

Jaana Hallamaa, a professor of social ethics at the University of Helsinki who is familiar with artificial intelligence, is not worried about terminators either.

As long as artificial intelligence does not independently control key weapon systems or build production facilities on its own, it has no way to conquer the world.

– One can legitimately ask how exactly artificial intelligence could suddenly take over the world. After all, it is only a program running inside a computer or a network, Hallamaa points out.

Professor Hallamaa is a member of the ethics board of the Finnish Center for Artificial Intelligence (FCAI) and is developing the use of artificial intelligence in public administration.

According to Hallamaa, the risks of artificial intelligence are primarily related to how people use it.

– For authoritarian countries like China, artificial intelligence offers unprecedented opportunities to monitor and subjugate their own citizens, says Hallamaa.


Artificial intelligence reflects the material fed to it

The use of artificial intelligence also raises questions of equality. If, for example, an employer uses artificial intelligence when hiring new employees, the system may make discriminatory choices.

The problems related to the use of artificial intelligence already exist in our society; they are simply moving into digital form, Petri Myllymäki says.

Professor Hallamaa agrees. According to her, the image of artificial intelligence as some kind of pure intelligence is mistaken.

– Artificial intelligence works on the basis of the material fed to it, and this material largely comes from open, developed countries, not from China or Africa. This limitation of the source material can significantly weaken the quality of artificial intelligence, says Hallamaa.

Professor Myllymäki does consider the rapid spread and replication of artificial intelligence products worrying.

Artificial intelligence can, for example, quickly create thousands of fake personas on social media, and through sheer mass they can spread false news. The political and social effects can be unpleasant.

– Scalability and speed are the real problems, says Myllymäki.

Hallamaa compares artificial intelligence to another technological change in recent years, social media.

– The EU, for example, has slowly woken up to the need to regulate social media companies’ algorithms, and the same development will probably be seen, with a delay, with artificial intelligence as well.

Myllymäki is somewhat amused that artificial intelligence has become a topic of public debate right now. The reason is perhaps that new free applications have become a nationwide sensation.

Artificial intelligence has become a tool for anyone with a phone this winter. The Microsoft-backed ChatGPT can write an essay or complete other tasks on a student’s behalf. It can also, for example, draft a summer vacation plan for you.

Social effects are difficult to predict

Professor Minna Ruckenstein of the University of Helsinki follows the public discussion with concern. In her view, the concern should be about what artificial intelligence is used for, not about artificial intelligence itself.

Artificial intelligence can be a societal problem if misused.

– It is increasingly difficult to distinguish fake news from real news, says Ruckenstein on the phone.

Professor Ruckenstein hopes that the use of artificial intelligence will be carefully considered; it is not suited to every purpose.

– No one can yet understand what will follow from the introduction of artificial intelligence in different tasks, says Ruckenstein.

New consumer artificial intelligence applications have now brought into public view technology that professionals previously used out of sight of amateurs.

– When you try ChatGPT, you notice that its output is quite average, Ruckenstein says.

Ruckenstein still finds the engineering behind new artificial intelligence applications impressive.

– However, you have to verify that the information produced by an artificial intelligence application is correct, Ruckenstein cautions.

Users of artificial intelligence programs should therefore be alert and critical. Ruckenstein points out that evaluating the information produced by artificial intelligence requires expertise; the AI itself does not care whether something is true.

But what about artificial intelligence becoming self-aware and trying to destroy humanity?

In Hollywood movies, a machine that has become too intelligent always first fears that people will try to switch it off. Overwhelmed by this fear, the machine begins a fierce battle against humanity.

– Humans are masters at humanizing everything, even computer programs, sighs Professor Jaana Hallamaa.
