“In generative AI, new players are making astonishing progress” – L’Express


The Web would never have become what it is without a turbulent community: that of open source. Behind this name stand battalions of tech professionals who have made a collective bet: they share their work so that everyone can study it and build on it. There is no doubt that it is from open source that the next most interesting developments in generative artificial intelligence (AI) will emerge. An interview with Mitchell Baker, president of one of the pillars of the Web and of the open source community: the Mozilla Foundation.

L’Express: How has the Web changed over the last ten years?

Mitchell Baker: The main development is that, thanks to it, everything is now interconnected. We were talking about this fifteen years ago, but today it is an indisputable reality. All the complex problems of the real world now also exist online. There have been several major developments, several waves: that of social networks, that of video… Let’s not forget that in the early days of the Web there was only text. You needed specific software to watch a video. We had no “voice”; the telephone was expensive. And one day, in the world of open source, the chief technology officer of Opera floated the idea of adding audio and video to the web browser. Everyone thought it was a great idea, and this standard was adopted.


The Web has an incredible capacity to mutate and incorporate very new things. This will continue; we will undoubtedly succeed in integrating new forms of security, encryption and new uses. For the moment, generative AI is added in successive steps, via extensions, but we think that ultimately it will become a building block at the heart of the system. We hope that this will come in particular from open source platforms with open and interoperable standards (Editor’s note: capable of communicating and operating together). There is a whole generation that does not fully appreciate what interoperability has brought to the Web, because platforms such as Facebook, TikTok or Amazon are not interoperable. But it is interoperability that has allowed the Web to develop so richly and universally. If Mozilla or Opera had launched video only on their own products, the Web we know would be very different, and much less sophisticated.

Is the AI sector an area in which players with few resources still have a chance?

Before March, only a few companies had clout in the generative AI sector. In April, a lot of information about Meta’s large language model (LLM) was leaked. Since then, Meta has published other material itself. This leak generated a large wave of open source developments, and it teaches us several things. In the area of large general-purpose language models, the amount of resources required to create them somewhat limits the number of participants. However, there are many companies working to develop new general-purpose LLMs. Some are backed by venture capitalists; others rely on the open source community. And many are making astonishing progress with far fewer resources than the original players. We don’t yet know how far they can go, but it will be interesting to follow. Furthermore, we do not always need, or even benefit from, using a large general-purpose language model trained on the best and worst of the Internet. In some cases, it is preferable to have a tool trained only on your company’s data. And in that sector, small players have every chance against the big ones.

What is Mozilla doing in the field of artificial intelligence?

Mozilla.ai is a new organization studying what can be done to build more reliable AI. We are not looking to launch commercial products, which leaves us the freedom to study long-term questions. We work with research institutes and also a great deal with the open source AI community.

What first avenues are emerging to improve the reliability of AI?

We need to audit these systems and make them as transparent as possible, eliminate all bias, and give users control over AI training data. Currently, everything you do is sucked into large language models (LLMs), but one could imagine that your business, or even you as an individual, could use the power of LLMs over your personal data in a controlled way. By having one analyze your texts, your Internet searches and your emails, you could get insights and proposals adapted to your needs. But it is important that you maintain control over your initial data.


How will AI change the uses of the Web?

Even before generative AI, the amount of information on the Internet was more than any human could take in. Humans found their way through it thanks to search engines such as Google and thanks to the algorithms of social networks such as Facebook or TikTok. AI will increase the volume of content, but we will have new ways of sorting it and extracting meaning from it. The important thing is to put the individual more at the center of how that sorting is carried out. Nowadays, on social networks, you can make some adjustments and try to swipe differently to modify the kind of content that is offered to you, but your influence on what you see is too weak. That must change.

The Mozilla Foundation is concerned about the directions taken by the French bill to “secure and regulate the digital space” (SREN). Why?

Mozilla has always been in favor of a certain degree of digital regulation. It is, among other things, through this means that society can guide developments in this sector and their impact on it. But we are very attentive to the unintended consequences that technical changes can have, and to solutions that risk creating harm or side effects greater than the initial problem. This is what is happening with digital identity. Everyone wants to protect children on the Internet. But if we want to create a system that is truly capable of reliably verifying that children under a certain age are not visiting certain sites, we are creating a system that actually goes well beyond the initial framework. It would be a tool with very deep tracking and monitoring capabilities for all Internet users.

Where does freedom of expression begin and end? The question has always been complex. Does it arise differently in the online world and in the physical world, and should it therefore be regulated differently?

Freedom of expression is a concept. Even if some believe that they should have the freedom to say whatever they want, the fact that they live in social communities and rub shoulders with relatives, colleagues and fellow citizens strongly influences what they are actually likely to say. Online, this influence is weaker because we don’t know who is behind the screen. The consequences of what we say are less direct. And some take advantage of this to be as violent, nasty and horrific as possible. We are better when we are accountable. It has taken us hundreds of years to regulate free speech in the real world, and that work clearly needs to inform how we regulate it online. But it is undoubtedly also necessary to take into account certain particularities of online life: combating situations where a large number of Internet users gang up on one person, for example, because this phenomenon occurs much more frequently online than in the physical world.

Hateful content is a problem in the digital space. Misinformation is another. Where should the red lines be placed in this area?

What forms of opinion do we want in a society, and which should we prohibit? I don’t know if that can be answered at the moment. We need to experiment much more on social media and have the opportunity to try more varied, competing spaces. Currently, in the “Fediverse”, on a Mozilla instance that we launched on the Mastodon social platform, we are experimenting with being less tolerant of threatening and hateful content. Other platforms, such as Truth Social (Editor’s note: the network launched by Donald Trump), operate at the other end of this spectrum. By experimenting with spaces governed by varied rules, society will perhaps decide, either by banning certain types of content, or by allowing everyone to say what they want while prohibiting social media algorithms from highlighting certain types of content. In either case, it will be necessary to determine which types of content are treated differently.

Are the changes made by Elon Musk to X, formerly Twitter, a harbinger of broader developments in social networks?

Before the arrival of Elon Musk, Twitter was seen as the public square, the agora of the Internet. But do Internet users still want these platforms built like large public forums? Or do they now want smaller, healthier communities? What we are experimenting with on the Mastodon social platform will soon be available more widely, in order to seek answers to these questions.
