New Tool to Assess the Impact of AI Systems on Human Rights
A new Council of Europe tool provides guidance and a structured approach for carrying out risk and impact assessments of Artificial Intelligence (AI) systems.
The HUDERIA Methodology is specifically tailored to protect and promote human rights, democracy and the rule of law. It can be used by both public and private actors to help identify and address risks and impacts in these areas throughout the lifecycle of AI systems.
The methodology provides for the creation of a risk mitigation plan to minimise or eliminate the identified risks, protecting the public from potential harm. If an AI system used in hiring, for example, is found to be biased against certain demographic groups, the mitigation plan might involve adjusting the algorithm or implementing human oversight.
The methodology requires regular reassessments to ensure that the AI system continues operating safely and ethically as the context and technology evolve. This approach ensures that the public is protected from emerging risks throughout the AI system’s lifecycle.
The HUDERIA Methodology was adopted by the Council of Europe’s Committee on Artificial Intelligence (CAI) at its 12th plenary meeting, held in Strasbourg on 26-28 November 2024.
It will be complemented in 2025 by the HUDERIA Model, which will provide supporting materials and resources, including flexible tools and scalable recommendations.
Courtesy: Council of Europe