Swedes to make chatbots more reliable

Fewer hallucinations – more facts. Swedish researchers hope that with the help of new technology, you will soon be able to trust the chatbots.

– I definitely believe that we will see increased quality, says Professor Fredrik Heintz.

Chatbots like ChatGPT have taken the world by storm. One problem is that it is difficult to know whether the answers they produce are correct or whether they are made up, so-called hallucinations. The bots can invent fake studies, alter numbers and cheerfully fabricate facts, passing it all off as truth.

Making the answers more reliable is something that companies and researchers are working intensively on, but it is difficult. The underlying technology is neural networks, inspired by the human brain.

Fredrik Heintz, professor of computer science at Linköping University, likens it to the idea of a “grandmother neuron”, a single specific neuron in the brain that would be activated when you think of your grandmother.

– But that’s not the case; many places in the brain react. The representation of the person is distributed across many parts of the brain. It’s the same with these neural networks, he says.

This makes it difficult to pinpoint where the AI gets its fabrications from, because the information comes from many places. It is not a matter of correcting an incorrect line of code, as when fixing bugs in ordinary software.

Must be able to cite sources

Heintz leads an EU project on creating trustworthy language models. In AI circles, hopes are pinned on retrieval-augmented generation, RAG. The system first retrieves factual information from a designated database, which the language model then uses as the basis for generating its answer. The answer can therefore also point to the source, for example with a link.

– The advantage is that you don’t need to retrain the model, but you need a way to get the information.
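The principle can be illustrated with a minimal sketch. The Python example below is only an illustration of the idea, not the Trust LLM project's actual implementation: the tiny document store, the word-overlap scoring and the prompt format are all assumptions made for the example.

# Minimal sketch of retrieval-augmented generation (RAG).
# The document store, scoring and prompt format are illustrative assumptions,
# not the Trust LLM project's actual implementation.

# A toy "database" of facts, each with a source link the answer can cite.
DOCUMENTS = [
    {"text": "Linköping University is located in Linköping, Sweden.",
     "source": "https://liu.se"},
    {"text": "GPT-SW3 is a large language model for Swedish.",
     "source": "https://www.ai.se"},
]

def retrieve(question, docs):
    # Pick the document sharing the most words with the question.
    # Real systems typically use vector search, but the idea is the same.
    q_words = set(question.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d["text"].lower().split())))

def build_prompt(question, doc):
    # Ground the model's answer in the retrieved passage and its source,
    # so the generated answer can link back to where the fact came from.
    return ("Answer using only the context below and cite the source.\n"
            "Context: " + doc["text"] + "\n"
            "Source: " + doc["source"] + "\n"
            "Question: " + question + "\n")

if __name__ == "__main__":
    question = "Where is Linköping University located?"
    doc = retrieve(question, DOCUMENTS)
    # The prompt would be sent to an existing language model; no retraining
    # is needed, only a way to fetch the information, as Heintz points out.
    print(build_prompt(question, doc))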

Trust LLM, as the EU project is called, aims to release one model a year for the next three years, and is also looking at ways to get the AI to draw the right conclusions and understand context.

Libraries and Wikipedia

One of the languages in the project is Swedish, where the work builds on the Swedish language model GPT-SW3.

– One of the questions is how we should collect more high-quality data. We have a dialogue with, for example, the Royal Library, he says and continues:

– Then I think that it is still Wikipedia and others that will become important sources of information.

The goal is for these to become the most powerful and reliable language models in Europe.

– I think that we will definitely see increased quality of the answers, says Heintz.

FACTS: Chatbots

A chatbot is a computer program that you can write to and that gives answers resembling those a human could give. Chatbots can be used for customer service, as a kind of personal assistant or as company.

Several chatbots are based on generative AI, which is trained on large amounts of data and learns to identify patterns and structures. Using that knowledge, the AI creates new content, for example text, images, sound or video, resembling content created by humans.

Many companies have integrated the technology into their existing services, for example Microsoft and Google (in their search engines) and Snapchat (as a controversial chat service). Some current chatbots are OpenAI’s ChatGPT, Google’s Bard, Amazon’s Q and Inflection’s Pi.
