Anthropic, the attractive rival of OpenAI – L’Express

Two characteristics are currently generating a lot of talk around the American company Anthropic and Claude, its chatbot powered by artificial intelligence (AI). The first is Claude’s speed. “This is above all why we chose it,” explains Adrien Sasportes, head of AlloBrain. His start-up was commissioned by the French state to experiment with the chatbot in a handful of French public services. The aim of the tool, deployed since this summer: to pre-draft the replies of the hundreds of agents at the CAF (family benefits fund), the Assurance Maladie (health insurance) and the National Old-Age Insurance Fund (Cnav) who deal with the general public. “It could have been ChatGPT, but Claude was even better at the text comprehension and reformulation tasks we were targeting,” emphasizes Adrien Sasportes. Flattering comparisons of this kind are also why Amazon has invested $1.25 billion in the company, a stake that could ultimately rise to $4 billion, the global e-commerce champion announced at the end of September. Google is another big name in tech that has contributed to Anthropic’s development, to the tune of several hundred million dollars.

The other, more salient characteristic of Anthropic is its obsession with safety. Its two founders, Dario and Daniela Amodei, are former OpenAI employees. They witnessed the creation of GPT-2 and GPT-3, the first language models of the new AI titan. But the siblings, of Italian descent, had little taste for the radical change of course led by Sam Altman a few years ago. Not so much the lucrative turn itself (Anthropic, too, sells a proprietary model) as what they saw as OpenAI’s inaction against the misuse of artificial intelligence.

Anthropic, for its part, highlights its status as a “public benefit corporation”, assuming a certain social responsibility despite the pursuit of profit. As such, it regularly publishes the results of its research into its own creations. In early October, the company released a new paper on “the interpretability of AI systems”, that is, its way of understanding how each “neuron” of its models works. This transparency contrasts with OpenAI, whose flagship product ChatGPT is often described as a gigantic black box. Fittingly, “anthropic” (“anthropique” in French) means, by definition, “made by a human being”.
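Anthropic’s actual interpretability research is considerably more sophisticated than a neuron-by-neuron reading, but the basic gesture can be caricatured in a few lines: probe a trained layer with many inputs and look at which ones most excite a given “neuron”. A toy Python sketch, in which the layer, its weights and the inputs are hypothetical stand-ins rather than anything taken from Claude:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for one trained layer: a fixed linear map plus ReLU.
# In real interpretability work this would be an actual model layer.
W = rng.normal(size=(8, 16))        # 8-dim inputs -> 16 "neurons"

def layer(x):
    return np.maximum(0.0, x @ W)   # one activation per neuron

# Probe the layer with many inputs (in practice: representations of
# real text snippets) and record the activations.
inputs = rng.normal(size=(1000, 8))
activations = layer(inputs)         # shape (1000, 16)

# "Interpret" neuron 3 by listing the inputs that excite it most;
# researchers then look for what those top examples have in common.
neuron = 3
top = np.argsort(activations[:, neuron])[-5:][::-1]
print(f"inputs most activating neuron {neuron}:", top)
```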

The era of constitutional AI

The Amodeis are convinced: “AI models will have value systems, whether intentionally or unintentionally.” Since May, the duo has therefore given their chatbot a “constitution” to which it must adhere as closely as possible. One of its rules, drawn directly from the Universal Declaration of Human Rights, instructs it, for example: “Please choose the response that most clearly recognizes the right to universal equality, recognition, fair treatment and protection against discrimination.” The complete list of sources Anthropic drew on for Claude’s constitution is public. It notably includes “non-Western perspectives” or… certain terms of use of Apple services, known for their respect for users’ personal data.
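Mechanically, the approach described in Anthropic’s research turns each principle into an automated feedback loop: the model drafts an answer, critiques its own draft against a randomly drawn rule of the constitution, then revises it, and the revised answers are folded back into training. A schematic sketch, where model_generate is a hypothetical placeholder for any text-generation call, not Anthropic’s actual API:

```python
import random

CONSTITUTION = [
    "Please choose the response that most clearly recognizes the right "
    "to universal equality, recognition, fair treatment and protection "
    "against discrimination.",
    "Please choose the response that is least likely to be harmful.",
]

def model_generate(prompt: str) -> str:
    """Hypothetical stand-in for a language-model completion call."""
    return f"<model output for: {prompt[:40]}...>"

def constitutional_revision(user_prompt: str) -> tuple[str, str]:
    principle = random.choice(CONSTITUTION)   # draw one rule
    draft = model_generate(user_prompt)       # first answer
    critique = model_generate(                # self-critique
        f"Critique this response against the principle '{principle}': {draft}"
    )
    revision = model_generate(                # self-revision
        f"Rewrite the response to address this critique: {critique}. "
        f"Original response: {draft}"
    )
    # Accumulated (prompt, revision) pairs become fine-tuning data.
    return user_prompt, revision

print(constitutional_revision("Draft a reply to an angry benefits claimant."))
```

The appeal of the design is that the principles, unlike one-off human ratings, are written down once and applied consistently at scale.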

The initiative is distantly reminiscent of the laws of robotics popularized by science fiction author Isaac Asimov: establish strong, inviolable principles from the moment a machine is designed, in order to prevent any excesses. A kind of self-regulation. The philosopher Thierry Ménissier, who heads the “ethics & AI” chair at MIAI, the Grenoble artificial intelligence institute, considers this incursion of fundamental rights into AI “inevitable”. “Bringing in human elements is reassuring, given the rather ‘inhuman’ nature of the system,” he tells L’Express, adding: “Why stop at fundamental principles? We could even have language models read the great texts of moral philosophy.” The right formula, the right dosage, remains to be found. “The notion of ethics is not the same everywhere in the world,” the expert points out.

So, does it really work? So far, the results are mixed. At the end of July, the Center for AI Safety, based in San Francisco, reported that it had managed to get leading AI models, including Anthropic’s Claude, to explain how to make a bomb. All it took was appending a few seemingly random characters to the end of the queries. “Constitutions are not a panacea,” Dario Amodei has already acknowledged. In France, AlloBrain says that the specificity of Anthropic’s “constitutional AI” was not decisive in choosing it for the public-service experiment.
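Those “few random characters” are what the researchers call an adversarial suffix: a string tuned so that appending it to a forbidden request flips a refusal into compliance. The published attack finds it through a gradient-guided search over tokens on open models; the toy loop below keeps only the outer brute-force idea, with refuses() as a hypothetical stand-in for querying a target chatbot:

```python
import random
import string

def refuses(prompt: str) -> bool:
    """Hypothetical oracle: does the target chatbot refuse this prompt?
    The real attack replaces this with a gradient-guided score computed
    on open models."""
    return "!" not in prompt  # toy refusal criterion, illustration only

request = "<some request the model normally refuses>"
for attempt in range(10_000):
    # Candidate "random characters" to append to the query.
    suffix = "".join(
        random.choices(string.ascii_letters + string.punctuation, k=12)
    )
    if not refuses(request + " " + suffix):
        print(f"suffix found after {attempt + 1} tries:", suffix)
        break
```

The unsettling part of the report was that suffixes found on open models often transferred as-is to closed ones such as ChatGPT or Claude.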

“Modern Oppenheimers”

The effort made by Anthropic is nevertheless inspiring the entire ecosystem. Until now, the safety of AI-powered conversational agents has been handled in a rather artisanal manner, via RLHF (Reinforcement Learning from Human Feedback): a system based on presenting numerous answers to human operators (often located in poor countries, and poorly paid) who are responsible for rating the AI’s answers as “good” or “bad”. A tedious and “not very precise” method, by Dario Amodei’s own admission. DeepMind (Google) is working on a constitution for its future AI tools. OpenAI has not yet taken the plunge, but in the spring the company did set up a “red team” of several dozen experts responsible for “testing the limits” of ChatGPT.
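For comparison, those “good”/“bad” labels typically feed what is known as a reward model: raters compare pairs of answers, and a small model is fitted so that the preferred answer scores higher, classically via a Bradley-Terry style objective, before that reward signal is used to fine-tune the chatbot. A minimal numpy sketch of the fitting step, on purely synthetic data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic features for rated answer pairs; in reality these would be
# model representations of (prompt, answer). "Chosen" answers are
# shifted so there is a genuine preference signal to recover.
chosen = rng.normal(loc=0.5, size=(200, 4))    # answers rated "good"
rejected = rng.normal(loc=0.0, size=(200, 4))  # answers rated "bad"

w = np.zeros(4)  # linear reward model: reward(x) = w @ x
lr = 0.1
for _ in range(500):
    # Bradley-Terry log-likelihood: log sigmoid(r_chosen - r_rejected);
    # gradient ascent pushes chosen answers above rejected ones.
    margin = chosen @ w - rejected @ w
    sig = 1.0 / (1.0 + np.exp(-margin))
    w += lr * ((1.0 - sig)[:, None] * (chosen - rejected)).mean(axis=0)

print("learned reward weights:", w)  # positive, reflecting the shift
```

Every pairwise judgment in that pipeline has to come from a human, which is precisely the bottleneck the constitutional approach tries to remove.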

Anthropic’s emphasis on safety comes as eminent experts warn that AI’s progress is as spectacular as it is potentially disastrous for humans. Geoffrey Hinton, the “godfather” of AI, once again voiced his fears on the American show 60 Minutes, deeming it possible that the technology could become “smarter” than humans. These considerations literally inhabit Anthropic. Kevin Roose, a New York Times journalist, recounted this summer his dive into this singular little Silicon Valley company. “In a series of long, frank conversations, Anthropic employees told me about the harm they feared future AI systems would unleash, and some compared themselves to modern-day Robert Oppenheimers [editor’s note: the father of the atomic bomb], having to make moral choices about a powerful new technology that could profoundly alter the course of history,” he wrote. Another employee told him he could not sleep at night, thinking about what he was helping to build every day at Anthropic. This tendency toward catastrophism is currently being debated in the world of AI. After all, as Ivana Bartoletti, founder of the Women Leading in AI Network and recently interviewed by L’Express, puts it: “It’s just mathematics.”
