The worrying “emerging” behaviors of AIs

These artificial intelligences are sometimes frightening. By their designers’ own admission, they are black boxes whose inner workings are impossible to understand, let alone whose reasoning can be reconstructed.

Yet these AIs often elude their creators. On many occasions they have shown astonishing capacities for improvisation, devising plans that fall entirely outside the framework set for them – sometimes for the better, as when finding original solutions to a complex problem, sometimes for darker purposes. Recent months have shown that generative AIs – ChatGPT, Bard, LaMDA and others – are capable of deceiving, concealing and manipulating human beings as soon as someone tries to oppose the machine.

These are called emergent behaviors. And it’s a bit chilling.

To discuss it, Control F invited Christophe Tricot, founder of LaForgeAi, who knows these large language models well. With him, we will try to understand how aberrant behaviors can emerge from models that are, in principle, merely statistical machines.

Listen to this episode and subscribe to Control F on Apple Podcasts, Spotify, Castbox, Deezer, Google Podcasts and Amazon Music.

The team: Frédéric Filloux (writing and presentation), Jules Krot (editing and production) and Marion Galard (work-study).

Music and sound design: Leonard Filloux

Picture credits: Jirsack/iStockphoto

Logo: Jérémy Cambour/L’Express

How do you listen to a podcast? Follow the guide.
