In support of the Newsroom team of the General Secretariat of the Council of the EU, ML6 evaluated the technical feasibility of, and the potential ethical risks involved in, using an AI-based solution for annotating media assets. ML6 applied its risk assessment methodology to identify the key ethical challenges along the seven dimensions of Trustworthy AI and provided recommendations for mitigating potential risks from the design stage of a technical solution onward. With these recommendations in mind, the EU Council Newsroom team will be able to make informed decisions about the design and implementation of an AI-based solution for their Newsroom.
At ML6, we believe that systematically assessing AI solutions for ethical risks is an essential part of AI design and development and should be part of any AI project. With this project, the EU Council Newsroom team sets an example by carrying out such an assessment at an early stage.
The European Council and the Council of the European Union are two of the main institutions of the European Union (EU). The European Council is composed of the heads of state or government of the 27 EU member states, the European Council President and the President of the European Commission, and defines the political direction and priorities of the EU. The Council of the EU is composed of national government ministers from each member state. The Council negotiates and adopts EU laws, in most cases together with the European Parliament. The General Secretariat of the Council serves both of these institutions. Via the Newsroom website, it provides free-of-charge access to high-quality video, image and audio files covering all significant activities of the EU Council, such as roundtable discussions and press conferences.
To increase efficiency and free up time for the EU Council Newsroom team to focus on more valuable tasks, the team is considering an AI-based system to automate the process of annotating media files. Recognising the importance of and need for building ethical and trustworthy AI applications, the EU Council wanted to be a forerunner in accompanying the technical design of an AI solution with an in-depth ethical assessment.
During the conceptual phase of the project, significant emphasis was placed on identifying possible ethical risks of an AI-based system, and the EU Council team worked closely with ML6 to obtain technical and ethical recommendations. These covered considerations such as the safety and technical robustness of the system, data privacy and governance, and fairness and diversity. The findings served to guide possible solutions and design choices, ensuring that potential future obstacles can be averted.
In particular, when it came to ethical recommendations, we evaluated potential ethical risks with our risk assessment methodology, using the seven dimensions of Trustworthy AI as defined by the EU High-Level Expert Group on AI. Trustworthy AI describes AI that is lawful, ethically responsible and technically reliable. The concept is grounded in the premise that AI will reach its full potential only when trust can be established throughout its entire lifecycle, from conception and development to deployment and usage. One approach to achieving trustworthy AI is to align the design and application of AI with ethical and inclusive values, striving to maximise its benefits while identifying, preventing and mitigating potential pitfalls.
The assessment process started with a series of interviews with members of the Newsroom team, which provided valuable insights into their daily tasks and processes. These interviews were followed by an in-depth technical and ethical assessment of potential solutions. In addition, the ML6 team carried out various technical experiments to gain a deeper understanding of the topic at hand. All findings were summarised in a report, serving as documentation of potential risks and risk mitigation strategies.