{"id":58770,"date":"2024-06-19T12:03:31","date_gmt":"2024-06-19T10:03:31","guid":{"rendered":"https:\/\/blogs.dlapiper.com\/iptitaly\/?p=58770"},"modified":"2024-06-19T12:03:31","modified_gmt":"2024-06-19T10:03:31","slug":"the-council-of-europe-adopts-the-first-ever-international-treaty-on-artificial-intelligence","status":"publish","type":"post","link":"https:\/\/blogs.dlapiper.com\/iptitaly\/2024\/06\/the-council-of-europe-adopts-the-first-ever-international-treaty-on-artificial-intelligence\/","title":{"rendered":"The Council of Europe Adopts the First-Ever International Treaty on Artificial Intelligence"},"content":{"rendered":"\n<p><em>by Giacomo Lusardi and Alessandra Faranda<\/em><\/p>\n\n\n\n<p><em>The Council of Europe adopted the first legally binding international framework convention aimed at ensuring respect for human rights, democracy, and the rule of law in the use of artificial intelligence (AI) systems in the public and private sectors. The Convention will be open for signature from 5 September 2024, including by non-European countries. It outlines a regulatory framework covering the entire life cycle of AI systems, from design to decommissioning, addressing risks and encouraging responsible innovation.\u00a0\u00a0<\/em><\/p>\n\n\n\n<p class=\"has-medium-font-size\"><strong>Toward responsible AI governance<\/strong>&nbsp;<\/p>\n\n\n\n<p>The primary objective of the Convention is to ensure that the <strong>potential<\/strong> of AI technologies is harnessed <strong>responsibly<\/strong>, respecting, protecting, and realizing the <strong>international community<\/strong>\u2019s values: human rights, democracy, and the rule of law. 
AI systems offer unprecedented <strong>opportunities<\/strong> but, at the same time, pose <strong>risks and dangers<\/strong> such as discrimination, gender inequality, undermining of democratic processes, violation of human dignity or individual autonomy, or even misuse by States for repressive purposes.&nbsp;<\/p>\n\n\n\n<p class=\"has-medium-font-size\"><strong>Scope, definitions, and global approach. Emphasis on the life cycle of AI<\/strong>\u00a0<\/p>\n\n\n\n<p>The Convention\u2019s provisions focus on the <strong>life cycle<\/strong> of AI systems, considering its different <strong>phases<\/strong>, from conception and design to deployment, monitoring, and decommissioning. This concept is also central to the <strong>European Regulation on Artificial Intelligence (AI Act)<\/strong>, with reference, among other things, to transparency obligations and the adoption of a risk management system.\u00a0<\/p>\n\n\n\n<p>But what does &#8220;<strong>AI system<\/strong>&#8221; mean in the Convention? The Convention defines AI systems based not on the corresponding <strong>definition<\/strong> in the AI Act but on the one adopted by the <strong>OECD<\/strong> on 8 November 2023. The two definitions coincide in substance since they are based on the same <strong>key aspects of AI systems<\/strong>: variable autonomy and adaptability, capacity for inference, and generation of predictions, content, recommendations, or decisions that can influence physical or virtual environments. The choice of the OECD definition reflects the need to strengthen <strong>international cooperation<\/strong> on AI and to facilitate efforts to <strong>harmonize its governance<\/strong> at the global level.&nbsp;&nbsp;<\/p>\n\n\n\n<p>The Convention <strong>does not aim to regulate all activities<\/strong> within the lifecycle of AI systems but only <strong>those that can interfere<\/strong> with human rights, democracy, and the rule of law. 
Thus, the Council of Europe\u2019s approach is distinctive: <strong>unlike the AI Act<\/strong>, it does not tie its material scope to specific AI models, systems, or practices but to the <strong>individual activities within the AI lifecycle<\/strong> and the impact they may have, <strong>irrespective of the risk<\/strong> the whole system presents.&nbsp;<\/p>\n\n\n\n<p>The Convention comprehensively regulates the use of AI systems in both the <strong>public<\/strong> and <strong>private sectors<\/strong>. Parties are mandated to adopt or maintain <strong>appropriate<\/strong> legislative, administrative, or other <strong>measures<\/strong> to <strong>implement its provisions<\/strong>. These measures are structured to be <strong>graduated and differentiated<\/strong> based on the severity and likelihood of negative impacts on human rights, democracy, and the rule of law throughout the lifecycle of AI systems.&nbsp;&nbsp;<\/p>\n\n\n\n<p class=\"has-medium-font-size\"><strong>General principles in the AI life cycle<\/strong><\/p>\n\n\n\n<p>Following the first two chapters on general provisions and obligations, the third chapter of the Convention establishes a set of <strong>general principles<\/strong> to be implemented in accordance with national legal frameworks. These principles are formulated with a high level of <strong>generality<\/strong> so that they can be applied <strong>flexibly<\/strong> in various rapidly changing contexts.&nbsp;&nbsp;<\/p>\n\n\n\n<p>The <strong>first principle<\/strong> calls for measures to respect <strong>human dignity<\/strong> and <strong>individual autonomy<\/strong>. In particular, the use of AI systems should not lead to the <strong>dehumanization<\/strong> of individuals, undermining their ability to act autonomously or reducing them to <strong>mere data points<\/strong>. 
Furthermore, AI systems should not be <strong>anthropomorphized<\/strong> in a way that interferes with human dignity. A person&#8217;s autonomy is crucial to human dignity, encompassing the <strong>ability to self-determine<\/strong>, make decisions without coercion, and live freely. In the context of AI, preserving individual autonomy means guaranteeing people <strong>control<\/strong> over the use and impact of AI technologies without compromising their free choice. The <strong>principle of anthropocentricity<\/strong> also permeates the AI Act (which, among its objectives, aims \u2018<em>to promote the dissemination of an anthropocentric and reliable artificial intelligence<\/em>\u2019) and the Italian bill on AI currently under review by the Italian Parliament.&nbsp;<\/p>\n\n\n\n<p>The <strong>second principle<\/strong> of the Convention focuses on the <strong>transparency and supervision<\/strong> of AI systems. This principle is also particularly relevant in the AI Act, with reference to high-risk and other AI systems. AI systems&#8217; inherent <strong>complexity<\/strong> and <strong>opacity<\/strong> necessitate robust supervision. AI systems&#8217; <strong>decision-making processes<\/strong> and overall functioning should be <strong>clear<\/strong> and accessible to all stakeholders. The Convention mandates adopting or maintaining measures to ensure <strong>transparency and monitoring<\/strong> tailored to specific contexts and risks, including identifying <strong>AI-generated content<\/strong>.&nbsp;&nbsp;<\/p>\n\n\n\n<p>When it comes to transparency, the aspects of <strong>explainability and interpretability<\/strong> are of utmost importance. 
The former necessitates <strong>clear explanations as to why an AI system provides certain information<\/strong> and produces specific predictions, content, recommendations, or decisions, particularly in sensitive areas such as healthcare, financial services, immigration, border services, and criminal justice. The latter refers to the ability to understand <strong>how an AI system makes predictions or decisions<\/strong>, that is, the extent to which the output generation process can be made accessible and understandable to non-experts in the field. However, it&#8217;s crucial to acknowledge that <strong>information disclosure<\/strong> could potentially conflict with privacy, confidentiality and trade secrets, national security, and the rights of third parties. Therefore, a fair <strong>balance<\/strong> should be struck in implementing the principle of transparency, taking into account all these factors.<\/p>\n\n\n\n<p>Supervision, a crucial element in the ethical <strong>use<\/strong> of AI systems, refers to the various mechanisms and processes that <strong>monitor and guide<\/strong> their lifecycle activities. These mechanisms can take the form of legal, policy, and regulatory frameworks, recommendations, guidelines, codes of conduct, audits and certification programs, error detection tools, or the involvement of supervisory authorities. The Convention recognizes the importance of these mechanisms in ensuring the responsible development and deployment of AI systems.&nbsp;<\/p>\n\n\n\n<p><strong>Accountability and responsibility<\/strong>, a cornerstone <strong>principle<\/strong> of the Convention, necessitates the establishment of <strong>mechanisms<\/strong> to hold organizations, entities, and individuals involved in the lifecycle of AI systems accountable for any negative impacts on human rights, democracy, or the rule of law. 
This principle is <strong>closely intertwined<\/strong> with transparency and supervision, as their mechanisms <strong>enable<\/strong> a clearer understanding of how AI systems work and how they produce their outputs, thereby facilitating the exercise of accountability.&nbsp;<\/p>\n\n\n\n<p>The Convention goes on to address four other equally important principles: <strong>equality and non-discrimination<\/strong> (listing several normative references to be considered and the various biases that may characterize AI systems), <strong>protection of personal data<\/strong>, <strong>reliability<\/strong> based on technical standards and measures in terms of robustness, accuracy, data integrity, and cybersecurity, and <strong>secure innovation<\/strong> in controlled environments (e.g., regulatory sandboxes).&nbsp;<\/p>\n\n\n\n<p class=\"has-medium-font-size\"><strong>Remedies, procedural safeguards, and risk management. Possible moratorium for AI systems<\/strong>&nbsp;<\/p>\n\n\n\n<p>As regards remedies, the Convention requires parties to apply their <strong>existing regulatory regimes<\/strong> to activities within the AI system lifecycle. For these remedies to be effective, it provides for the <strong>adoption or maintenance of specific measures<\/strong> aimed at documenting and making certain information available to the people concerned and ensuring an <strong>effective possibility of lodging complaints with the competent authorities<\/strong>.&nbsp;<\/p>\n\n\n\n<p>Transparency and user <strong>awareness<\/strong> are key in the interaction with AI systems. The Convention underscores this by requiring that those <strong>interacting with AI systems<\/strong> be informed that they <strong>are interacting with an AI system<\/strong> and not a human being. 
<\/p>\n\n\n\n<p>There is also a provision concerning the need to identify, assess, prevent, and mitigate, ex ante and <strong>iteratively<\/strong> where necessary, potential risks and impacts on human rights, democracy, and the rule of law throughout the AI system lifecycle by developing a <strong>risk management system<\/strong> based on concrete and objective criteria. The Convention also requires parties to assess the need for <strong>a moratorium<\/strong>, <strong>bans<\/strong>, or other appropriate measures regarding AI systems that are <strong>incompatible with<\/strong> respect for human rights, democracy, and the rule of law, leaving <strong>parties free<\/strong> to define the concept of incompatibility as well as the scenarios requiring such measures.&nbsp;&nbsp;<\/p>\n\n\n\n<p class=\"has-medium-font-size\"><strong>Implementation, effects, and entry into force of the Convention<\/strong>&nbsp;<\/p>\n\n\n\n<p>Implementing the Convention requires due consideration of the <strong>specific needs and vulnerabilities<\/strong> of people with <strong>disabilities<\/strong> and <strong>children<\/strong>, as well as the promotion of <strong>digital education<\/strong> for all population segments.<\/p>\n\n\n\n<p>Parties are free to apply <strong>previous agreements or treaties<\/strong> relating to the lifecycle of AI systems covered by the Convention, but they must adhere to the Convention\u2019s <strong>goals<\/strong> and <strong>objectives<\/strong> and not assume conflicting obligations.<\/p>\n\n\n\n<p>As of <strong>5 September 2024<\/strong>, the Convention will be open for signature not only by the <strong>Member States of the Council of Europe<\/strong> but also by <strong>third countries<\/strong> that contributed to its drafting, including Argentina, Australia, Canada, Japan, Israel, the Vatican City State, and the USA, as 
well as EU Member States. Once in force, other non-member States may be invited to join. The Convention <strong>will enter into force<\/strong> on the first day of the month following the expiration of a period of three months from the date on which <strong>at least five signatories<\/strong>, including a minimum of three Member States of the Council of Europe, have expressed their consent to be bound.&nbsp;&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>by Giacomo Lusardi and Alessandra Faranda The Council of Europe adopted the first legally binding international framework convention aimed at ensuring respect for human rights, democracy, and the rule of law in the use of artificial intelligence (AI) systems in the public and private sectors. The Convention will be open for signature from 5 September [&hellip;]<\/p>\n","protected":false},"author":606,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_s2mail":"","site-sidebar-layout":"default","site-content-layout":"","ast-site-content-layout":"default","site-content-style":"default","site-sidebar-style":"default","ast-global-header-display":"","ast-banner-title-visibility":"","ast-main-header-display":"","ast-hfb-above-header-display":"","ast-hfb-below-header-display":"","ast-hfb-mobile-header-display":"","site-post-title":"","ast-breadcrumbs-content":"","ast-featured-img":"","footer-sml-layout":"","theme-transparent-header-meta":"","adv-header-id-meta":"","stick-header-meta":"","header-above-stick-meta":"","header-main-stick-meta":"","header-below-stick-meta":"","astra-migrate-meta-layouts":"default","ast-page-background-enabled":"default","ast-page-background-meta":{"desktop":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"ast-content-background-meta":{"desktop":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"footnotes":""},"categories":[1],"tags":[],"class_list":["post-58770","post","type-post","status-publish","format-standard","hentry","category-general"],"_links":{"self":[{"href":"https:\/\/blogs.dlapiper.com\/iptitaly\/wp-json\/wp\/v2\/posts\/58770","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blogs.dlapiper.com\/iptitaly\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.dlapiper.com\/iptitaly\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.dlapiper.com\/iptitaly\/wp-json\/wp\/v2\/users\/606"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.dlapiper.com\/iptitaly\/wp-json\/wp\/v2\/comments?post=58770"}],"version-history":[{"count":0,"href":"https:\/\/blogs.dlapiper.com\/iptitaly\/wp-json\/wp\/v2\/posts\/58770\/revisions"}],"wp:attachment":[{"href":"https:\/\/blogs.dlapiper.com\/iptitaly\/wp-json\/wp\/v2\/media?parent=58770"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.dlapiper.com\/iptitaly\/wp-json\/wp\/v2\/categories?post=58770"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.dlapiper.com\/iptitaly\/wp-json\/wp\/v2\/tags?post=58770"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}