As its development accelerates, AI arouses growing fear, both for its scale and for its power. And this is only the beginning. The concern most often expressed is that it cannot explain how it reaches a given result, and that it is therefore dangerous. How can we use a non-human intelligence if we don't know how it works? And yet that is exactly what we have been doing for thousands of years… with dogs.
The oldest identified dog remains are approximately 16,000 years old. Dogs descend from wolves that we domesticated and gradually transformed through successive selection over the millennia. The dog is therefore an artificial being, a human creation; it did not exist in nature. And it is an incredibly useful one. Very early on, it was used for hunting and guarding, but also to keep us company. Man has thus not only created extraordinary objects (spears, clothes, and so on), he has also created artificial intelligences as early as the Upper Paleolithic.
But dogs can be incredibly stupid, far less intelligent than many other animals. They fail at very simple tasks that a crow performs without difficulty. Their stupidity can sometimes make you want to bang your head against the wall. And it is difficult to know how they think, if that term even applies to them. Yet ask them for something that fits what we have patiently bred them for over so many years, and everything changes. They bring back a scattered flock of sheep in a matter of minutes. They sense when you have a problem. They accompany you on the hunt. They warn of danger. They protect you. They are a source of well-being. They guide the blind. And what's more, they love us. In short, with the dog we have created a specialized intelligence for vital tasks. We don't know how it works, it can be very dangerous, but it is remarkably useful when used correctly.
Well, AI is like a dog. We don't need to know how it works for it to be useful. Both are tools. As with any tool, you have to know how it can be used, where it is useful, where it is useless or even dangerous, and you have to use it well, otherwise there will be accidents. A hammer is perfect for driving nails, mediocre for driving screws, useless for digging your garden, and dangerous for your fingers. A tool is remarkably effective within its scope and pathetically useless outside of it. The same goes for the dog, and therefore for AI.
AI is simply another way of creating artificial intelligence. The dog is good, but to improve it we are subject to evolution; that is, it takes generations to obtain interesting traits. With the development of genetics and technologies like CRISPR, which make it possible to modify a cell's genome, we can imagine going much faster. But AI is software, and there the evolution cycles are no longer counted in hundreds of years but in tens of weeks. We can finally free ourselves from evolution to develop non-human intelligence.
The often-advanced argument that AI must be "systematically explainable, auditable and transparent", as I recently heard at a conference, is therefore wrong. To demand this of AI is to give up using it, just as our ancestors would have had to give up using the dog, which they could not comprehend, thereby depriving themselves of a tool that considerably eased their lives, and probably their survival. Let's not make this mistake. As with the dog, let's understand where AI can be useful, what it can do, its advantages and its limits, and let's not ask more of it.
*Philippe Silberzahn is a professor of strategy and entrepreneurship at emlyon business school.*