This month's 5 need-to-knows in the space of digital transformation, ethics and law:
“We are here to bring multilateralism back from the brink,” said UN Secretary-General António Guterres. His message captures the overall tenor of the “Summit of the Future”, which gathered world leaders on 22 September in the General Assembly Hall at the United Nations headquarters in New York City to agree on a landmark “Pact for the Future”. The Pact is an ambitious document consisting of 56 Actions that promise to steer a prosperous path through a plethora of challenges that all nations of the world are called upon to face: peace and security, global governance, sustainable development, climate change, digital cooperation, human rights, gender, youth, and future generations. Importantly, the Pact also includes an Annex entitled “Global Digital Compact”, which sets out key goals and principles for digital technologies, including international cooperation to bridge all digital divides, tackle concentrations of technological capacity and market power, and step up the international governance of AI.
Around the same time, the World Economic Forum published a white paper warning of the growing need for more efficient ways to stem the tide of trade policy fragmentation. Focusing its analysis on the current status of generative AI regulation and its impact on international trade, the paper states that “national efforts, driven by varying priorities, have led to fragmented and divergent requirements that are likely to create cross-border trade frictions and undermine governmental objectives, creating barriers to the use of GAI.” A significant number of jurisdictions have either initiated or already passed domestic legislation to regulate AI, including the US, the European Union, Canada, Singapore, and China. In addition to domestic legislation, there are currently six bilateral agreements with provisions on AI and 600 regulatory developments targeting AI providers. This patchwork of AI regulation raises concerns about its overall impact on the production, use, and dissemination of GAI across national borders, without a level playing field of regulation to fall back on. Efforts should therefore be redoubled to streamline a homogeneous regulatory design around key aspects of GAI.
Concern with the regulation of AI was also at the heart of a recent OECD AI Paper published this month, albeit with a sharp focus on the financial services industry. The paper presents telling results from a recent survey in which the majority of respondents, including banks, insurance and securities firms, asset managers, and financial regulators from 49 OECD and non-OECD jurisdictions, reported that while they have adequate regulation in place, there may still be gaps to close, and so more specific guidance on the sectoral regulation of AI in finance is desirable. This is largely because existing financial regulations and laws are technology-neutral and apply regardless of the technology used. Although the majority of respondents do not want to see new regulations around the use of AI in finance in the near future, there may be a need for further stocktaking of financial policy frameworks to make sure they remain fit for purpose. In the meantime, closer coordination, information sharing, and sharper alignment between regulators will certainly help to address emerging risks.
Meanwhile in Brussels, over 100 multinationals and small and medium-sized companies from various sectors, including IT, telecoms, healthcare, banking, automotive, and aeronautics, signed an AI Pact spearheaded by the European Commission, with voluntary pledges by key industry stakeholders to pave the way for a smooth implementation of the AI Act. The Pact signals the industry’s voluntary commitment to three core actions: adopting an AI governance strategy to foster the uptake of AI within their organisations, mapping high-risk AI systems, and expanding AI literacy. The text of the pledges was initially drafted by the AI Office and then shared with the relevant stakeholders in the AI Pact network to gather feedback and insights. These pledges are not legally binding but are meant to ready signatories for the implementation of the AI Act. Their progress in living up to these commitments will be publicly reported in twelve months’ time.
A bit earlier in the month, Věra Jourová, the European Commission Vice-President for Values and Transparency, signed the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law on behalf of the EU. This took place during an informal conference of Ministers of Justice of member states of the Council of Europe in Vilnius, Lithuania. Negotiations around the Convention had been under way since September 2022 under the auspices of the Committee on AI (CAI), established by the Council of Europe in Strasbourg. The final text builds on extensive input from over sixty international representatives from civil society, academia, industry, and other international organisations. Overall, the Convention is broadly consistent with the AI Act and reflects a number of its key concepts, including a keen focus on human-centric and trustworthy AI based on transparency, robustness, and safety, and support for safe innovation through regulatory sandboxes, risk management obligations, oversight mechanisms, and more. To conclude the Convention, the European Commission is now expected to put forward a proposal for a Council decision.
Authored by Leo von Gerlach & Julio Carvalho