(Finance) – The advent of generative artificial intelligence in the global technological landscape has catapulted the capabilities and potential of this technology, but also its ethical challenges and failures, into the spotlight.
The virtuous examples of AI use are certainly well known and encouraging: in medicine, for example, a candidate molecule for treating idiopathic pulmonary fibrosis was identified in 21 days instead of the years required by traditional experimentation, while the insurance industry has seen client management costs cut by 30% thanks to generative AI-powered platforms.
However, as the success stories multiply, the failures of this tool have also begun to fill the pages of newspapers: Samsung, for example, saw part of its source code leaked online after some of its employees pasted it into ChatGPT to optimize it, and we have witnessed the first threatened defamation lawsuit against an AI, when a well-known chatbot wrongly accused an Australian mayor of corruption.
It is therefore no coincidence that companies such as JP Morgan and Verizon have blocked or strongly limited their employees' use of AI. Despite these risks, a recent Bloomberg poll revealed that almost half of the companies surveyed are actively working on AI usage policies in order to maximize results while reducing the risk of damage or information leakage.
The ideal enterprise-wide solution would obviously be to develop one's own generative AI system, but in many cases the costs, of both hardware and software development, are still too high for a technology that is in fact still emerging, and the time required would risk excessively delaying participation in the “technological arms race” that now involves most companies. Indeed, according to research by MIT Sloan Management Review and the Boston Consulting Group, 53% of companies rely exclusively on third-party AI tools, thus exposing themselves to risks that can hardly be controlled. This holds both when AIs are openly used by employees (as happens more and more in software houses) and when managers are unaware of their use by team members (a phenomenon known as Shadow AI). The research surveyed 1,240 representatives of organizations across 59 sectors and 87 countries, each with annual revenues of at least 100 million dollars, thus offering a global view of the phenomenon rather than one limited to a few hi-tech companies, as was the case less than a year ago.
It is in this context that the concept of RAI – Responsible Artificial Intelligence – was born. Since the very rapid evolution of AI does not sit easily with its responsible use, and since, as things currently stand, it is not the developers' direct responsibility to supervise the ethics and liability of their own bots (which would indeed be limiting for a tool that must “learn to learn”), it is indispensable for companies to build frameworks around the use of these tools, frameworks designed precisely to prevent damage of an ethical nature and, above all, violations of company policies and secrets.
The need for these programs is strongly felt at all levels of corporate governance: according to the MIT and BCG research, respondents holding leadership roles in Responsible AI management have risen from 16% to 29%. Despite this, 71% of organizations still do not actively supervise their AI implementation processes, and if this figure is set against a context in which, for the reasons mentioned above, 78% of companies rely on third-party AI tools, it becomes clear that the risks to which companies are exposed, from loss of customer confidence to regulatory problems, are real, numerous and tangible.
The application of an RAI framework is therefore essential: according to the data collected, applying five evaluation methods to the tools used (which should first be analyzed and then constantly monitored) brings the probability of identifying flaws and problems to 51%, against the 24% resulting from a superficial analysis.
The landscape of AI-related rules and regulations is trying to keep pace with the technology's evolution, albeit with difficulty (just think of the criticisms, on the whole well founded, received by the Italian data protection authority when it decided to block access to ChatGPT from Italy), with new specific rules coming into force on an ongoing basis. Companies are mostly aware of them (about 51%, according to the survey) and try to anticipate their implementation, applying them even when their sector is not directly affected. This happens especially among companies operating in finance and healthcare.
Just as happened with the internet about 25 years ago, today it is essential that the topic of AI be addressed directly by company leadership, starting with the CEO: organizations aware of this issue in fact report 58% more business benefits than those whose management is more “distracted” on the subject. As with all the tools that have revolutionized the world of work since the first industrial revolution, a responsible approach to their use is indispensable for minimizing risks and turning opportunities into actual profit.