Generative AI: “There is nothing magical in it, only mathematics”

And that makes four. After Microsoft, Google and Meta, Amazon is the latest web giant to make its move into generative AI. The firm recently announced a major partnership with Anthropic, one of the main rivals of OpenAI, the company in which Microsoft has invested. Further proof that generative artificial intelligence will be central to the economy of tomorrow. What progress does it bring, what are its risks, and can we even prevent them? Interview with Ivana Bartoletti, founder of the Women Leading in AI Network, Head of Privacy and Data Protection at Wipro and Visiting Cybersecurity Researcher at Virginia Tech.

L’Express: The new generative AIs have amazed the public. Is the way they work more down to earth than we think?

Ivana Bartoletti: The conversational abilities of a ChatGPT are impressive, but these systems actually model the probability of one word following another. This is how they gradually construct their answers. They were trained on an immense amount of data, including a myriad of texts available on the Internet, and this is how they learned the semantic links between segments of text. What makes them particularly interesting is that they do not always opt for the most likely word: random variables make their output richer. So there is nothing magical about it, it is pure mathematics. Behind the conversational appearance, you have statistics.
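
To make the mechanics concrete, here is a minimal sketch in Python, with made-up scores, of the sampling step described above: the model assigns a score to every candidate word, the scores are turned into probabilities, and a controlled dose of randomness (often called temperature) decides which word is actually produced.

    import numpy as np

    def sample_next_token(logits, temperature=0.8, rng=None):
        # The model gives a score (logit) to every word in its vocabulary;
        # a softmax turns the scores into probabilities, and the temperature
        # controls how much randomness enters the choice.
        rng = rng or np.random.default_rng()
        scaled = np.asarray(logits, dtype=float) / temperature  # <1: sharper, >1: flatter
        probs = np.exp(scaled - scaled.max())                   # numerically stable softmax
        probs /= probs.sum()
        return rng.choice(len(probs), p=probs)                  # sample instead of always taking the top word

    # Toy example: three candidate continuations with invented scores.
    vocab = ["cat", "dog", "castle"]
    logits = [2.0, 1.5, 0.3]
    print(vocab[sample_next_token(logits)])

With a low temperature the most likely word wins almost every time; with a higher one, less likely candidates surface more often, which is the randomness that makes the output richer.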

Ivana Bartoletti, founder of the Women Leading in AI Network, dissects how generative AI works

Why is the rise of generative AI sparking so many worried reactions and warnings about supposed existential risks?

The doomsaying about AI has contributed to creating a mystique around this technology. But there is nothing mystical about AI, only mathematics. The terminology used in the field, it is true, creates ambiguities. We talk about “artificial intelligence”, but both terms are debatable. The “artificial” part can be questioned, since these systems are the product of analyzing human data. The concept of “intelligence” is also questionable. These tools are sometimes more efficient than humans, but only on specific tasks. And that is not because they are “intelligent” in the way a human can be, but because they have been trained in a certain way.

What can generative AI bring us?

We should not look at these tools through the glamorous prism of tech marketing but through the lens of productivity. These AIs will make employees in many sectors more efficient. In IT development, they will save a lot of time. The photos and videos they are able to produce can also be very useful for marketing departments. And in writing or text synthesis tasks, artificial intelligence gives excellent results. This opens up vast possibilities: generating contracts or searching large bodies of legal documents, for example. Especially since a company can personalize certain AI tools by training them on its internal data so that they better meet its needs.

What options does a company have that wants to use generative AI?

There are several, each with its advantages and disadvantages. To begin with, a company can use existing tools via an API (editor’s note: a software interface). It is simple and quick. The downside is that this can pose risks to the confidentiality of employee and customer data. Another route is to build your own AI from scratch, which gives complete control but requires a lot of time and resources. The third option is to create a personalized version of an existing tool by training it on business data.
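
As an illustration of the first option, here is a minimal sketch in Python of calling a hosted model over an API. The endpoint, key and response format are hypothetical placeholders; real providers differ, but the pattern, and the confidentiality concern that comes with it, are the same.

    import requests

    # Hypothetical endpoint and key, for illustration only; actual providers
    # differ in URL, authentication and payload format.
    API_URL = "https://api.example-llm-provider.com/v1/chat"
    API_KEY = "YOUR_KEY_HERE"

    def ask(prompt: str) -> str:
        # Send a prompt to a hosted generative model and return its reply.
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            json={"messages": [{"role": "user", "content": prompt}]},
            timeout=30,
        )
        response.raise_for_status()
        return response.json()["reply"]

    # Everything placed in the prompt leaves the company's systems, which is
    # precisely the confidentiality risk mentioned above.
    print(ask("Summarize this contract clause: ..."))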

What risks can artificial intelligence pose?

It poses very real challenges. A poorly designed system can generate automated discrimination: these systems can reproduce and amplify gender stereotypes, for example. There are also significant risks of misinformation, particularly with deepfakes. Being able to produce realistic fake videos in seconds is a game changer; for me, this is one of the major risks of AI. But the tendency to humanize the behavior of machines makes no sense. We attribute to them an awareness that they do not have, and when we slide down that slope, it takes us away from the real debates that need to be had about the risks of AI. Behind the problems of these systems are the humans who programmed them and the choices those humans made at the time.

How to regulate this sector?

All businesses need to be transparent about how they use data. But this is not enough. Courts and regulators must have adequate tools to act if a given AI proves to be discriminatory. If a tool affects who gets a job and who does not, that is crucial, because these systems can exclude you from important opportunities. We must therefore ensure that they are reliable, that they are neutral and that all due diligence has been done. It is also important that everyone can participate in thinking about AI, not just experts, because artificial intelligence is not just a technical product, it is also a social product. By training AIs on human data you are, in a way, incorporating the whole of society into this technology.
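
One concrete way to start that due diligence is to compare how a system treats different groups. The sketch below, in Python with pandas and invented screening decisions, computes selection rates per group; a large gap is not proof of discrimination, but it is the kind of signal regulators and auditors would want to investigate.

    import pandas as pd

    # Invented hiring-screen decisions, for illustration only.
    decisions = pd.DataFrame({
        "gender":   ["F", "F", "F", "M", "M", "M", "M", "F"],
        "selected": [ 0,   1,   0,   1,   1,   0,   1,   0 ],
    })

    # Share of candidates selected within each group.
    rates = decisions.groupby("gender")["selected"].mean()
    print(rates)

    # A wide gap between groups is a signal that the system deserves
    # closer scrutiny before it decides who gets a job.
    print("selection-rate gap:", rates.max() - rates.min())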

Some AI professionals fear that we will never be able to regulate these tools, because it would not be possible to dissect all of their decision-making mechanisms. What do you think?

It’s true that AI is sometimes more efficient than humans without us knowing why. And some believe that we should not sacrifice efficiency for explainability. I think the two are not mutually exclusive in reality. There is also a lot of research currently being done on the explainability of artificial intelligence. Because it is essential to know why the machine opts for one action rather than another.
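
One common technique from that research is permutation importance: shuffle one input at a time and see how much the model's performance drops. Here is a minimal sketch using scikit-learn on a public dataset; it does not open the black box completely, but it does reveal which inputs the model actually relies on.

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Train a small classifier on a public dataset, then ask which input
    # features actually drive its decisions.
    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Shuffle one feature at a time and measure how much accuracy drops:
    # a large drop means the model leans heavily on that feature.
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    ranked = sorted(zip(X.columns, result.importances_mean), key=lambda item: -item[1])
    for name, score in ranked[:5]:
        print(f"{name}: {score:.3f}")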

What impact do you think generative AI will have on employment?

The job market will change, that is for sure. Many tasks will be automated. During previous revolutions, manual professions were the most affected. Here, what is fascinating is that other professions – artists, developers, accountants, and so on – will evolve. My hope is that companies see this through the prism of productivity rather than of cost-cutting and job reduction: not trying to do the same with less, but to do better with the existing workforce. Especially since machines still make a lot of mistakes, so it is important that they are supervised by humans.
