Artificial intelligence and children: ethical considerations


When children engage in dialogue with artificial intelligence (AI), we should consider not only the legal aspects (observance of their rights) but also the ethical ones.

IT specialists and international legal experts took part in a webinar held on 14 May within the i-Access MyRights project, where they discussed the benefits and risks of AI technologies for vulnerable children who go online to ask for help. The topic was chosen because the project team is working on an application to support these children.

It will be a GPT-type application that gives children access to information in a friendly way appropriate to their age. It will help them learn about their rights and about the services available for children in their country, and it will offer psychological support to child victims of abuse as well as to children in conflict with the law.

A GPT chatbot designer has numerous responsibilities, and transparency is one of them, the speakers noted. Children should be informed from the start that they are interacting with an AI system, not with a human being.

There are many ethical risks, the IT experts explained. When a GPT chatbot enters into conversation with a user, it becomes a conversational agent or a companion chatbot. Such a companion chatbot has huge potential because it can be easier to open up to than a person, and it can offer psychological support and counselling.

When a GPT chatbot imitates human behaviour, vulnerable children may perceive it as a human being. They may become emotionally attached to it, even addicted. Companion AI chatbots are problematic, said Victoria Hendrickx, researcher at the KU Leuven Centre for IT and IP Law in Belgium. "We took several protection measures when creating this chatbot. Our chatbot has a fixed framework of possible answers. We designed it so that it cannot show too much compassion or too much emotion. Thus, we eliminate the risk of emotional attachment (...) It is designed to have no memory and to delete conversations, so no follow-up questions can be asked." It is important that AI companion chatbots do not behave like human beings, the experts concluded. The purpose of a chatbot is only to inform users and provide answers.
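The "fixed framework of answers, no memory" design described above can be sketched roughly as follows. This is only an illustrative outline of the idea, not the project's actual implementation; the topic names, answer texts, and keyword matching are all assumptions made for the example.

```python
# Hypothetical sketch: a chatbot restricted to a fixed framework of
# pre-approved answers, with no emotion and no conversation memory.
# Topics, wording, and matching logic are illustrative assumptions.

FIXED_ANSWERS = {
    "rights": "Every child has the right to be heard and protected. "
              "You can read more in the official information pages.",
    "help": "You can contact a local child helpline, where a trained "
            "adult will talk with you.",
}

FALLBACK = ("I am a computer program, not a person. "
            "I can only share verified information.")

def answer(message: str) -> str:
    """Map a message to one of the pre-approved answers.

    The bot never improvises: either a fixed answer matches,
    or it falls back to a neutral statement about what it is.
    """
    text = message.lower()
    for topic, reply in FIXED_ANSWERS.items():
        if topic in text:
            return reply
    return FALLBACK

# Statelessness: each call stands alone. Nothing is stored between
# calls, so there is no conversation history to delete or follow up on.
```

Because `answer` keeps no state, "deleting conversations" is automatic: the function simply never records them, which mirrors the no-memory design decision quoted above.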

Another risk, besides the emotional impact, is that these agents may collect too much personal data about the user. This data can then be used to manipulate the user in many ways: socially, economically and emotionally, Victoria added.

A designer's bias is another ethical danger. If the input is biased, the output will be biased as well (the "garbage in, garbage out" principle). The designer must therefore be very careful about what they introduce into the system.

Another ethical consideration discussed at the webinar: AI companion chatbots may generate inaccurate, fictional, but highly realistic outputs, a phenomenon known as hallucination. "For instance, in our case, the chatbot could hallucinate and make up a helpline, such as a phone number. And that is really problematic if you want to help children in vulnerable positions. To make sure that our chatbot will not hallucinate, we have created a set of measures, such as coherence checks and references to the source of information, so that the answers can be verified by a human being." (Victoria Hendrickx)
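One of the anti-hallucination measures mentioned above, answering only from verified sources and attaching a reference a human can check, can be sketched like this. The registry, its entries, and the function names are placeholders invented for the example, not the project's real data or code.

```python
# Illustrative sketch: the bot may only emit helpline details that exist
# in a human-curated registry, and every answer carries a source
# reference so a human reviewer can trace it. All entries are placeholders.

VERIFIED_HELPLINES = {
    "example-country": {
        "number": "000-000-000",
        "source": "national-helpline-registry",
    },
}

def helpline_answer(country: str) -> str:
    """Return a helpline only if it comes from the verified registry."""
    entry = VERIFIED_HELPLINES.get(country)
    if entry is None:
        # Refuse rather than invent: an unknown country yields no number.
        return "I don't have a verified helpline for that country."
    # Attach the source so the answer can be checked by a human being.
    return f"Helpline: {entry['number']} (source: {entry['source']})"
```

The key design choice is the refusal branch: when the registry has no entry, the bot says so instead of generating a plausible-looking number, which is exactly the failure mode described in the quote.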

The European Commission has proposed and is debating an "Artificial Intelligence Act", but this document does not cover all the risks of AI companion chatbots and says nothing about the observance of children's rights, the webinar participants emphasised.

The question is whether these technologies should be banned, given the risks they pose to children. The experts concluded that we should not ban them, but rather make sure they respect children's rights and protect children in this ever-evolving digital environment.

The webinar took place within the i-Access My Rights project ("Artificial intelligence driven support for a smart justice with children in Europe"), implemented in Romania by the Terre des hommes Foundation and co-funded by the European Union.