Challenging Bias in Big Data used for AI and Machine Learning

Project: Research

Project Details

Description

CHARLIE is an ERASMUS+ KA2 project with an implementation period of 30 months, from 30/12/2022 to 29/06/2025. The project is being conducted by a consortium of six (6) partners from five (5) European countries: Spain, Portugal, Romania, Finland and Denmark.
Artificial Intelligence (AI) is part of our everyday life. From the algorithms that recognise our faces as we walk down the street, feeding police biometric security services, to the algorithms that choose the advertising we see on social media, AI is everywhere. Yet although Machine Learning (ML) and AI are built on mathematics, they are not always right, because the data processed to reach any conclusion can be, and often is, biased.

Social sciences have studied human bias for many years. It arises from implicit associations that reflect biases we are not conscious of, and it can result in multiple negative outcomes. AI and ML are not designed to make ethical decisions; there is no algorithm for ethics. A model will always make predictions based on how the world works today, thereby reinforcing the bias and discriminatory practices that are systemically rooted in our societies. With the spread of AI and ML technologies, often owned by big tech companies whose sole objective is profit, there is an urgent need to bring a human-centred approach to tech and to use it to solve social problems instead of contributing to them.

In its Communications of 25/04/2018 and 07/12/2018, the European Commission (EC) set out its vision for artificial intelligence, which supports "ethical, secure and cutting-edge AI made in Europe". AI systems need to be human-centric, resting on a commitment to their use in the service of humanity and the common good, with the goal of improving human welfare and freedom. While offering great opportunities, AI systems also give rise to certain risks that must be handled appropriately and proportionately. We now have an important window of opportunity to shape their development.
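The claim that a model trained on historically biased records will reproduce that bias can be made concrete with a short sketch. The example below is hypothetical and not part of the CHARLIE project: the data is synthetic and the feature names (skill, group) are invented. A logistic regression fitted to skewed "hiring" labels predicts lower hire rates for the disadvantaged group, even though both groups have identical underlying skill distributions.

```python
# A minimal, hypothetical sketch of how a model learns historical bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic hiring data: "skill" is the legitimate signal; "group" is a
# protected attribute (0/1). Historical labels were biased: group 1
# candidates were hired less often at the same skill level.
skill = rng.normal(size=n)
group = rng.integers(0, 2, size=n)
historical_hire = (skill + rng.normal(scale=0.5, size=n) - 0.8 * group) > 0

# Train on the biased labels, with the protected attribute as a feature.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, historical_hire)

# The model predicts strictly from past data, so it replicates the gap.
pred = model.predict(X)
for g in (0, 1):
    print(f"predicted hire rate for group {g}: {pred[group == g].mean():.2f}")
# Output: group 1's predicted hire rate is markedly lower than group 0's,
# although skill was drawn from the same distribution for both groups.
```

Note that simply dropping the group column does not fix this in general: in real data, other features correlated with group membership can act as proxies and carry the same bias through.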
CHARLIE intends to ensure that people can trust AI systems and the sociotechnical environments in which they are embedded. We also want producers of AI systems to gain a competitive advantage by embedding Trustworthy AI in their products and services. This entails seeking to maximise the benefits of AI systems while preventing and minimising their risks. Higher Education (HE), Adult Education (AE) and the Youth sector require new and innovative curricula that can close this skills gap and equip learners with the knowledge and skills to contribute to a more ethical approach to tech development. The need to make tech education more human is aligned with the Digital Education Action Plan, which includes specific actions to address the ethical implications and challenges of using AI and data in education and training.

Key findings

Competency Matrices
"Algorithmic Bias" course (HE)
"Algorithmic Bias toolkit for synchronous sessions"
A guideline for boosting the capacity of university administrators/management
Webinars
Self-paced "Ethical AI" microcredential
Digital Serious game
Toolkit to support Adults and Youth in upskilling into Ethical AI
Policy recommendation
Acronym: CHARLIE
Status: Active
Effective start/end date: 30/12/2022 to 29/06/2025
