In today's world, Artificial Intelligence (AI) is widespread and used in many different areas, such as the tech industry, financial services, healthcare, retail and manufacturing, to name just a few. The main driver behind this surge of AI applications is its ability to extract useful information from very large datasets.
Despite the incredible benefits AI has brought in recent years, it has also sparked numerous doubts about its trustworthiness. Among the issues flagged is the lack of understanding of the algorithms used, which are in many cases described as black boxes. Similarly, it is often unclear what sort of data is used to train an AI system. Since AI systems learn from the data they are provided, it is crucial that this data does not contain biased human decisions or reflect social imbalances.
To address these and many more trust issues in emerging AI systems, the European Commission appointed the High-Level Expert Group on AI, which in 2019 presented its Ethics Guidelines for Trustworthy AI. According to these guidelines, trustworthy AI should be lawful, ethical and robust, and this should be achieved by addressing the following seven key requirements:
- Human Agency and Oversight - allowing humans to make informed decisions and fostering their fundamental rights, while also ensuring proper human oversight of the AI system.
- Technical Robustness and Safety - AI systems need to be safe, accurate, reliable and reproducible.
- Privacy and Data Governance - respecting user privacy alongside ensuring the quality and integrity of the data.
- Transparency - AI transparency is achieved through the explainability of AI systems and their decisions.
- Diversity, non-discrimination and fairness - The AI system must avoid unfair bias while being accessible to all.
- Societal and Environmental well-being - it must be ensured that the AI system is sustainable and environmentally friendly.
- Accountability - accountability and responsibility for AI systems as well as their outcomes must be ensured.
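To give a concrete flavour of how a requirement like "avoiding unfair bias" can be checked in practice, the short Python sketch below computes a demographic parity gap: the difference in positive-outcome rates between groups defined by a protected attribute. The data, group names and threshold here are purely hypothetical for illustration; this is not SEDIMARK code, and real fairness auditing involves many more metrics and domain judgement.

```python
# Illustrative sketch only: one simple fairness check (demographic parity)
# over hypothetical model decisions, split by a protected attribute.

def positive_rate(outcomes):
    """Fraction of positive (1) decisions in a list of 0/1 outcomes."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(outcomes_by_group):
    """Largest difference in positive-outcome rate between any two groups."""
    rates = [positive_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical decisions from a model, grouped by a protected attribute.
decisions = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],  # 5/8 = 62.5% positive
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3/8 = 37.5% positive
}

gap = demographic_parity_gap(decisions)
print(f"Demographic parity gap: {gap:.3f}")  # 0.625 - 0.375 = 0.250
```

A gap near zero suggests the model treats the groups similarly on this one metric; a large gap is a signal to investigate the training data and model for unfair bias.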
In SEDIMARK, our goal is to develop cutting-edge AI technology, such as machine learning and deep learning, to enhance the experience of its users. Along the way, we aim to follow the Trustworthy AI guidelines throughout the lifecycle of the project and beyond, so that the AI developed and used in this project can be fully trusted by its users.
The SEDIMARK team at the Insight Centre for Data Analytics of University College Dublin (UCD) aims to draw on Insight's expertise to promote ethical AI research within SEDIMARK and to help the rest of the partners ensure that the AI modules developed within the project follow the Ethical AI requirements.