Tuesday, 15 April 2025
Curb your (Artificial) Intelligence

The ambition for 2025 is to develop a format that gives individuals more control over the use of their personal data in AI environments. Decisions based solely on automated processing of personal data, including profiling, may produce legal effects or similarly significantly affect an individual or a group of persons. It is our ambition to contribute to an efficient and effective resolution of problematic debts with the help of AI that is (co-)trained by people who want to move on with their lives by resolving their debt issues, without revealing their identities to an assistant.
What is at stake?
The rapid developments in the area of artificial intelligence (AI) have resulted in data processing practices that are beyond human control: complicated self-learning algorithms that use many parameters and are trained on vast amounts of data. There is a genuine risk that those algorithms inherit inaccuracies and biases present in the data they are trained on, which may lead to outcomes with serious consequences for human beings. For this reason, it is important that the data fed into AI systems is accurate and complete, and that there is human intervention before any decision is taken on the basis of AI-based recommendations: a second opinion by a human being should be required.

At the same time, the unsolicited capturing of data for algorithm training purposes has increased exponentially. Big tech companies no longer hesitate to capture and store all data generated by the users of their products and services, including every keystroke and screenshots taken every 5 seconds. As a result, passwords, bank account numbers, social security numbers and health data are captured as well. Encryption on local devices does not help, because the capturing takes place before encryption, on the basis of client-side scanning technology. Big tech equals mass surveillance.
The solution
- Transparency: It should be clear which data, from which sources, is fed into an AI system for analysis.
- Trust and reliability: The data fed into an AI system should be accurate and complete. Therefore, before the data is made available to an AI system, it should be validated by the original source and annotated, for example by adding metadata, annotations and/or time stamps (see the sketch after this list).
- Second opinion: Before any decision is taken on the basis of the output of automated processing, a human being with expert knowledge and common sense should sign off on the results and/or make recommendations regarding a decision proposed by an AI system.
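To make these three points concrete, here is a minimal sketch in Python of how they could fit together. It is an illustration only, not a description of an existing system: a record carries its source, a timestamp and validation metadata before it is handed to an AI system, and an AI-proposed decision does not become final until a named human reviewer signs off. All names in the sketch (SourceRecord, Decision, the example source and reviewer) are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class SourceRecord:
    """A piece of personal data, annotated and validated before it reaches an AI system."""
    source: str                        # where the data originates (transparency)
    payload: dict                      # the data itself
    validated_by: str | None = None    # the original source that confirmed accuracy
    annotations: list[str] = field(default_factory=list)
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def is_trustworthy(self) -> bool:
        # Only validated and annotated records may be fed into an AI system.
        return self.validated_by is not None and bool(self.annotations)


@dataclass
class Decision:
    """A decision proposed by an AI system, pending human sign-off (second opinion)."""
    proposal: str
    based_on: list[SourceRecord]
    signed_off_by: str | None = None   # name of the human expert who reviewed it

    def approve(self, reviewer: str) -> None:
        self.signed_off_by = reviewer

    def is_final(self) -> bool:
        # The decision only takes effect after a human second opinion.
        return self.signed_off_by is not None


# Hypothetical usage: a validated debt record and an AI-proposed repayment plan.
record = SourceRecord(
    source="municipal debt registry",
    payload={"outstanding_debt_eur": 2400},
    validated_by="municipal debt registry",
    annotations=["statement of 2025-03-31"],
)

decision = Decision(proposal="offer a 36-month repayment plan", based_on=[record])
assert record.is_trustworthy()
assert not decision.is_final()        # blocked until a human expert signs off
decision.approve(reviewer="debt counsellor J. Janssen")
assert decision.is_final()
```

The point of the sketch is simply that validation and provenance metadata travel with the data, and that is_final() stays false until a human second opinion has been recorded.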