LinkedIn is unfortunately not immune to scams! Fake profiles are invading the professional social network, sporting profile photos generated by artificial intelligence and realistic but bogus biographies.
Like any self-respecting social network, LinkedIn is a popular platform for hackers and other bots – automated programs that simulate human behavior. Until now, however, the platform had managed to maintain a serious and reliable image – a false sense of security that makes it all too easy for users to let their guard down. It must be said that, in the professional world, LinkedIn is an essential tool for keeping in touch with colleagues and clients, finding a new job and staying up to date with news in your field. Yet ill-intentioned actors create fake profiles in droves, using artificial intelligence and copying the blurbs of other accounts – real ones this time. The perfect combination to appear convincingly real, and one that proves quite a headache for many HR managers and group administrators, who must vet a profile before accepting it.
Deepfakes on LinkedIn: fake accounts generated using artificial intelligence
The website KrebsOnSecurity recently conducted several investigations into the proliferation of fake profiles on LinkedIn. Hamish Taylor, the administrator of a group with nearly 300,000 members, says he alone blocked nearly 13,000 fake accounts in 2022, some of which were “cynical attempts to exploit humanitarian aid experts and help in a crisis” – profiles posing as supposed disaster-recovery experts in the wake of the recent hurricanes. These “swarms” have been multiplying since January 2022. Mark Miller, administrator of the DevOps group, notes that the fake profiles try to join the various groups in successive waves: “When a bot tries to infiltrate the group, it does so in waves. We see 20-30 requests coming in with the same type of information in the profiles.”
Group administrators are not the only ones suffering from these fake accounts – companies are too! Some have had the unpleasant surprise of discovering several suspiciously similar profiles claiming to work for them, when they are not even real people. Tests run on their photos reveal that they resemble other pictures published on the Internet but never match them exactly. It is therefore very likely that they are deepfakes – portraits synthesized by an artificial intelligence (AI). Several readers have pointed to a likely source: the website thispersondoesnotexist.com, which uses AI to create unique portraits in the blink of an eye.
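For the curious, one rough way to screen for such portraits builds on a known quirk of StyleGAN-style generators like the one behind thispersondoesnotexist.com: every face is cropped so that the eyes land at almost exactly the same pixel coordinates. The short Python sketch below illustrates the idea using the open-source face_recognition library; the reference coordinates, tolerance and file name are illustrative assumptions, and this is a heuristic, not a reliable detector.

# Minimal sketch of a heuristic check for StyleGAN-style generated portraits.
# Assumptions: the face_recognition library (dlib-based) is installed, and the
# reference eye positions and tolerance below are illustrative values only.
import face_recognition

# StyleGAN portraits (e.g. thispersondoesnotexist.com) are aligned so the eyes
# sit at roughly the same coordinates in every full-size 1024x1024 image.
EXPECTED_LEFT_EYE = (385, 480)   # illustrative reference point (x, y)
EXPECTED_RIGHT_EYE = (640, 480)  # illustrative reference point (x, y)
TOLERANCE_PX = 30                # illustrative tolerance

def eye_center(points):
    """Average one eye's landmark points into a single (x, y) center."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def looks_gan_aligned(image_path):
    """Return True if the eyes sit near the fixed positions typical of
    StyleGAN-aligned portraits. Heuristic only: genuine photos can be
    cropped the same way, and resized images shift the coordinates."""
    image = face_recognition.load_image_file(image_path)
    landmarks = face_recognition.face_landmarks(image)
    if not landmarks:
        return False  # no face detected, nothing to conclude
    face = landmarks[0]
    left = eye_center(face["left_eye"])
    right = eye_center(face["right_eye"])
    close = lambda a, b: (abs(a[0] - b[0]) <= TOLERANCE_PX
                          and abs(a[1] - b[1]) <= TOLERANCE_PX)
    return close(left, EXPECTED_LEFT_EYE) and close(right, EXPECTED_RIGHT_EYE)

if __name__ == "__main__":
    # Hypothetical file name, used purely for illustration.
    print(looks_gan_aligned("suspect_profile_photo.jpg"))

In practice, investigators combine several such signals – reverse image search, eye alignment, rendering artifacts around hair and ears – rather than relying on any single check.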
Deepfakes on LinkedIn: motivations still unclear
Fake accounts can serve a wide variety of purposes – all equally dishonest. Fake job offers to steal information, recruitment scams, classic phishing… But some are more imaginative. Fake profiles can, for instance, be linked to so-called “pig butchering” scams, in which hackers convince their victims to invest on cryptocurrency exchanges, then seize all the funds when the victims try to cash out. More surprisingly, the cybersecurity company Mandiant – recently acquired by Google – told Bloomberg that hackers working for the North Korean government had copied resumes and profiles from the major job platforms LinkedIn and Indeed in order to land jobs at cryptocurrency companies.
However, the bots spotted by KrebsOnSecurity are a different case, and their motives remain unclear. They don’t seem to be running any scam, even when given an opening: they don’t respond to messages or post anything. Rather, the fake accounts appear to be created and then immediately abandoned. Hamish Taylor finds this rather worrying: “it looks like someone is setting up this massive botnet to repeat and amplify a propaganda message through mass publication at the appropriate time.” In any case, fake profiles are a real scourge, and we can only hope LinkedIn reacts accordingly. Bloomberg notes that the platform has so far managed to avoid scandals of this kind, unlike Facebook and Twitter. But things could change…