The race to keep pace with AI advances

Robust guidance is needed to improve business certainty and confidence while keeping pace with AI advances. (Image by Gerd Altmann from Pixabay.)

The race is well and truly on for the definitive guide to safe and ethical AI adoption in business and wider society, writes Clare B Marshall.

In the glossary of its recently published briefing paper series, the World Economic Forum’s (WEF) AI Governance Alliance highlighted over 25 “contemporary definitions” to help frame discussion around AI terminology. A discussion paper on AI risk which followed the UK’s recent AI Safety Summit has a similarly lengthy glossary.

Efforts to galvanise a legal framework around AI are also well underway, with various (potentially competing) summits and declarations. The UK-hosted AI Safety Summit culminated in the Bletchley Declaration, in which 28 countries signed up to a “wider global approach to understanding the impact of AI in our societies”. However, the follow-up discussion paper was billed by some commentators as light touch – a ‘direction of travel’ lacking mandates or clearly defined actions for change.

In the US, Executive Order 14110 pledged “new standards for AI safety and security”, including a list of ‘90-day actions’ (many of which are reported to be well under way), among them the completion of “risk assessments covering AI’s use in every critical infrastructure sector” to help combat some of AI’s “biggest threats to safety and security”.

Furthest advanced is Europe, which reached a major milestone in December last year with provisional agreement on the EU Artificial Intelligence (AI) Act. Said to be the first ‘clear set of rules’ for the use of AI, the final proposals address “definitions, scope, classification of systems, enforcement, penalties, protection of rights and opportunity for innovation”.

The EU timetable anticipates final adoption next month, with phased implementation to follow from October 2024. The first planned step is an EU-wide ban on “AI systems posing unacceptable risks”.

Getting on with it

Meanwhile, facing impacts that will be hugely transformational, many organisations are seeking out opportunities to work with AI whilst managing the risks. Some are actively creating chief AI/data officer roles to oversee, amongst other things, AI governance and proactive measures across their business.

Discussion papers continue to emerge. The OECD’s AI Principles offer recommendations for policymakers, with a focus on ‘trustworthy’ AI. The Alan Turing Institute provides a wealth of insight, research and news on advances in AI and is home to the AI Standards Hub, which “aims to build a vibrant and diverse community around AI standards”.

For businesses embracing AI in their daily activities, products or services – and keen to develop a responsible ‘AI ecosystem’ – the recently launched international standard, ISO/IEC 42001:2023, offers “specific requirements and guidance on establishing, implementing, maintaining and continually improving an AI management system” and seeks to contribute to a range of UN SDGs.

Short-term self-governance

Assessing and managing AI risk across the spectrum of corporate functions can offer a safety net in the short term and prepare businesses for the more definitive regulations on the horizon. Commercial considerations can also be developed now around areas such as competition, intellectual property, procurement, products, insurance and liabilities, alongside technology, cyber-security, people, resources and underlying matters of business integrity.

Whether or not they appoint a chief AI officer, businesses should prioritise having a digitally fluent board member driving AI vision and strategy throughout the organisation. Given a responsible AI ecosystem’s reliance on quality data, good data management practices should also be addressed.

An opportunity to influence

The EU AI Act looks set to provide a comprehensive framework and guidance, improving business certainty and confidence and driving healthy momentum for keeping pace with AI advances. But during this time of rapid development and deployment, the infrastructure sector (like many other industries already cognisant of AI’s benefits and threats) should also push for greater global clarity, consistency and strong governance.

As Europe’s decisive action on AI is finalised and other legal frameworks are developed, consideration needs to be given to measures ensuring that the multidisciplinary and cross-border project collaboration typical of many infrastructure projects can be upheld whilst conforming to new AI regulation. The WEF’s AI Governance Alliance, with members drawn from Big Tech and the public and private sectors, suggests that a sensible balance of sector ‘best interests’ can be achieved. But how this will take shape and be implemented practically and consistently remains to be seen.

Clare B Marshall is co-partner of the business consultancy 2MPy specialising in global business strategy.

A FIDIC webinar on 26 March 2024 looked at AI’s potential to transform the construction and infrastructure sector. A recording of the webinar is available to watch again.