Based on an interview with Carlos Amaral, Co-Founder and CEO of Priberam

Many industries, especially the media, are enthusiastic about using machine learning (ML) to enhance the analysis of large datasets. Annotated data is essential for training machine learning and artificial intelligence (AI) models. Data that is incorrectly annotated or poorly chosen does not accurately represent the world in which the model will operate, and the resulting model may perform poorly or even cause harm. In practice, poor judgments can have adverse effects and hurt people. There is also the problem of “black box” decision-making (read more here).
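One common safeguard against annotation problems is to measure inter-annotator agreement before training on the data. The sketch below is a minimal, illustrative example, not a SELMA or Priberam pipeline; the annotators, articles, and topic labels are hypothetical, and Cohen's kappa is used as a standard agreement statistic:

```python
# Minimal sketch: checking annotation quality via inter-annotator agreement.
# The two annotators and their topic labels are hypothetical examples.
from sklearn.metrics import cohen_kappa_score

# Topic labels assigned independently by two annotators to the same ten articles
annotator_a = ["sports", "politics", "sports", "tech", "politics",
               "tech", "sports", "politics", "tech", "sports"]
annotator_b = ["sports", "politics", "tech", "tech", "politics",
               "tech", "sports", "sports", "tech", "sports"]

kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa: {kappa:.2f}")  # values near 1.0 indicate strong agreement
```

A low kappa suggests the labeling guidelines are ambiguous, and a model trained on such labels is likely to inherit that noise.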

Given that it is being applied in an increasing variety of applications with substantial consequences for society, AI must be trustworthy. AI is employed in healthcare, transportation, finance, and many other industries, where it can significantly improve people’s lives. If AI is unreliable, it may produce unfavorable outcomes such as inaccurate diagnoses, accidents, or financial losses.

Therefore, if AI is to mimic and replicate tasks that humans performed before, it is also necessary to consider ethics and to prevent failures that could emerge. “Responsible AI” initiatives are contributing to resolving these difficulties.

The Importance of Responsible AI

The term “responsible AI” refers to creating and applying AI systems that are ethical, transparent, and accountable. This objective is becoming increasingly important as AI grows more pervasive and powerful in our society. Several principles and goals have been proposed to ensure that AI is developed and used responsibly.

Promoting fairness, transparency, and accountability is one of the key principles of responsible AI. This means that users, clients, and society at large should be treated fairly when AI systems are designed and deployed. Additionally, AI systems must be open about how they make decisions, and those who build and operate them must be accountable for any unfavorable effects that arise.

Making sure AI benefits humanity is another aim of responsible AI. AI should be applied to enhance human life and address significant societal issues rather than cause harm or introduce new problems. To fulfill this objective, AI must be created and used in a way that is consistent with ethical standards and human values.

Using AI responsibly can also advance science in a variety of ways. For instance, it can help identify and resolve biases in data and algorithms, enhancing the reliability and accuracy of AI systems. Additionally, it can help ensure that AI is applied accountably and transparently, which can boost public confidence in the technology.
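As a concrete illustration of such a bias check, one simple audit is to compare a model’s positive-prediction rates across demographic groups (the “demographic parity” criterion). The sketch below uses entirely hypothetical predictions and group labels and is not tied to any particular SELMA or Priberam system:

```python
# Minimal sketch of a fairness audit: demographic parity difference,
# i.e. the gap in positive-prediction rates between two groups.
# All predictions and group memberships below are hypothetical.
import numpy as np

predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])  # 1 = positive outcome
group = np.array(["A", "A", "A", "B", "B", "B", "A", "B", "A", "B"])

rate_a = predictions[group == "A"].mean()
rate_b = predictions[group == "B"].mean()
print(f"Positive rate, group A: {rate_a:.2f}")
print(f"Positive rate, group B: {rate_b:.2f}")
print(f"Demographic parity difference: {abs(rate_a - rate_b):.2f}")
```

A large gap does not by itself prove a model is unfair, but it flags where a closer, context-aware review is needed.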

SELMA’s Contribution

Priberam, a core member of the SELMA project, is already actively involved in the collaborative initiative on Responsible AI and participated in the Responsible AI Forum meeting. Leading technology companies, research institutions, and startups from around the world come together at the Responsible AI Forum with the aim of creating responsible artificial intelligence technologies: AI that is fair, transparent, privacy-preserving, and energy-sustainable. These emerging technologies will transform strategic economic areas such as life sciences, tourism, fintech, and e-commerce in Portugal and throughout Europe. The most recent meeting was held in January 2023.