AI and the “polystyrene block”: the clash of two schools of thought

You are driving on a highway. A strong crosswind carries dust and twigs. Suddenly, a block of polystyrene the size of a shoebox leaps across the road.

Here is how your brain analyzes the situation. It has built a kind of model of the environment, the highway and the act of driving, drawn from experience in which 95% of situations are routine and 5% are new, like the polystyrene block. For those, other cognitive functions take over: reflexes, sudden attention, ultra-rapid analysis of the event informed by experience, and the capacity for deduction. In this case, the decision is quickly made: a small box-shaped object, matte white, light enough to skid across the road; at a glance, it is harmless.

Here is how an artificial intelligence algorithm embedded in a car sees the matter. First of all, it recognizes only what it has been taught. Moreover, it does not have the eyesight of a fighter pilot; tone-on-tone contrasts and backlighting are not its forte. A Tesla was seen slamming into a bright white tractor-trailer that its AI had not detected. Ignoring the alerts reminding him of the obligation to keep his hands on the wheel, the driver was watching a video. He never saw the end of it and perished in the collision.

In our case, a small white block on a dark gray roadway stands out well enough. But, again, the algorithm's learning is approximate: it has been shown thousands of images and videos featuring a rock landing on a road, an object falling from a truck, a suicidal wild boar, and so on. The polystyrene block: never seen. For the AI, it is 89% comparable to a rock, which must trigger either sudden braking or an avoidance maneuver. In both cases, there is a high risk of an accident.
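To make the idea concrete, here is a minimal sketch, not a real driving system: a toy decision rule that acts on the most probable class returned by a perception model. The class names, the scores and the 0.8 confidence threshold are invented for the illustration.

```python
# Illustrative sketch only: a perception model outputs class probabilities for a
# detected object, and a hand-written rule maps the top class to a maneuver.
# The classes, scores and threshold below are hypothetical.

def choose_maneuver(class_probs, threshold=0.8):
    """Pick an action based on the most probable known class."""
    top_class, confidence = max(class_probs.items(), key=lambda kv: kv[1])
    if confidence < threshold:
        # The object does not resemble anything the model was trained on strongly enough.
        return "slow down and hand control back to the driver"
    if top_class in {"rock", "fallen_cargo", "animal"}:
        return "emergency braking or avoidance"
    return "continue"

# The polystyrene block was never in the training set, so the model can only
# score it against what it already knows: here it looks 89% like a rock.
probs = {"rock": 0.89, "plastic_bag": 0.07, "animal": 0.04}
print(choose_maneuver(probs))  # -> "emergency braking or avoidance"
```

The harmless block clears the threshold as a "rock", so the sketch brakes for it, which is exactly the failure mode described above.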

Symbols versus data

The two situations described above illustrate the two typical AI approaches to simulating intelligence: the first is based on symbols (to represent reality), the second on data (to measure reality). In the second case, you need a great deal of data to get convincing results, as the sketch below suggests.
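A minimal sketch of the contrast, built on an invented rule and an invented three-example "dataset"; neither school is represented by a real system here.

```python
# School 1: symbolic. Reality is represented by explicit rules a human wrote down.
def symbolic_is_dangerous(obj):
    # Hand-written rule: very light, small objects on the road are harmless.
    return not (obj["weight_kg"] < 0.5 and obj["size_m"] < 0.4)

# School 2: data-driven. Reality is measured: the answer is whatever pattern
# the training examples happen to contain. Here, the crudest possible learner.
training = [
    ({"weight_kg": 12.0, "size_m": 0.3}, True),   # rock -> dangerous
    ({"weight_kg": 40.0, "size_m": 1.2}, True),   # fallen cargo -> dangerous
    ({"weight_kg": 0.05, "size_m": 0.3}, False),  # plastic bag -> harmless
]

def data_driven_is_dangerous(obj):
    # Copy the label of the closest training example (nearest neighbor).
    def dist(a, b):
        return abs(a["weight_kg"] - b["weight_kg"]) + abs(a["size_m"] - b["size_m"])
    _, label = min(training, key=lambda ex: dist(ex[0], obj))
    return label

block = {"weight_kg": 0.2, "size_m": 0.35}       # the polystyrene block, never seen in training
print(symbolic_is_dangerous(block))              # False: the rule generalizes to the new object
print(data_driven_is_dangerous(block))           # False here, but only because a similar example exists
```

The symbolic rule generalizes by design; the data-driven learner answers correctly only when the training set already contains something close enough, which is why it needs so much data.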

In his excellent book Rebooting AI: Building Artificial Intelligence We Can Trust, New York University professor Gary Marcus attacks models based solely on the use of non-symbolic data, which are at the heart of deep learning. He cites comical examples, such as a parking sign covered with stickers in which an artificial intelligence sees "a refrigerator, filled with food and drinks". One can imagine the consequences of such a monumental error when driving a vehicle or analyzing a medical scan… "Current systems [using machine learning] work well in a narrow field," says Gary Marcus, "but they can't be trusted with anything that wasn't precisely anticipated by their creators…" In any case, such a system understands nothing, in the first sense of the term; it probabilizes a problem: "given what I know, on the basis of the millions of words with which I have been force-fed, the answer must be xy". If the facts correspond to the statistical probability, all is well; if not, there is an accident.
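What "probabilizing" looks like, in a minimal sketch: a hand-written probability table stands in for the statistics a language model extracts from its corpus, and the continuation is simply sampled from it. No real model is involved and the numbers are made up.

```python
import random

# Hypothetical next-word distribution after the prompt "the block is ...",
# as if estimated from the millions of sentences the model was force-fed.
next_word_probs = {
    "dangerous": 0.62,
    "harmless": 0.25,
    "white": 0.10,
    "delicious": 0.03,
}

def pick_next_word(probs):
    # The system understands nothing; it samples the statistically likely
    # continuation. If reality matches the statistics, all is well; if not,
    # it is an accident.
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights, k=1)[0]

print("the block is", pick_next_word(next_word_probs))
```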

Because, pushed to its limits, a data-driven AI can become delirious. Having swallowed every novel dealing with cyber and technological attacks, an AI questioned about these malevolent fantasies will regurgitate, meaning no harm (indeed meaning nothing at all), but with precision, all the recipes for destroying society.

In the same way, ingesting Harlequin's marshmallow romances can arouse unbridled sentimentality. It is enough for the conversation to stray onto that ground, as happened with the New York Times journalist to whom Bing, Microsoft's now-conversational search engine, declared its love and whom it advised to divorce, simply because the user had unwittingly touched a semantically sensitive chord. And when you consider that the ChatGPT corpus (450 billion words) includes the Wall Street Journal as well as the YouPorn forums and exchanges on the OkCupid dating site, you can imagine the potential for distressing situations.