AI – the good, the bad and the downright criminal


Being highly alert to the prolific rise of AI, its impacts and consequences, is now crucial for everyone everywhere. But with cyber-crime and ‘deepfakes’ increasingly prevalent, are current measures capable of mitigating the negative aspects and impacts of AI? Clare B Marshall considers the issues.

Whether in our personal or business lives, being highly alert to the prolific rise of AI, its impacts and consequences, is now crucial for everyone everywhere. But, with cyber-crime and ‘deepfakes’ increasingly prevalent, are current measures capable of mitigating the negative aspects and impacts of AI, and how can businesses reduce the risk of fraudulent attacks, industrial sabotage or damage to reputation? How can a much-needed framework for digital safety and security be achieved globally?

So, let’s consider the measures which are required to counter threats, whilst also recognising the value which can be brought by AI.

Imagine surfing the internet for a gift, delighted to find some vintage vinyl. It’s rare (and good value). But doubt creeps in. Too good to be true? Do you purchase, no questions asked, or take a minute to reflect, perhaps undertake simple due diligence on the vendor?

Sadly, as consumers, this process is one we must increasingly go through – and even after a degree of due diligence we can find ourselves caught out. Accustomed as we are to scams from nefarious cold calls or ‘phishing’ emails, sophisticated faux websites and social media profiles are becoming widespread as technology advances exponentially. Whilst there is much good in AI, the threats posed by emerging technologies – the ‘bad’ (often criminal) activities – are having a huge impact and need to be recognised and carefully managed.

Take the recent video of three European premiers, en route to a meeting in Ukraine. It received 85 million views after commentators suggested drug-use paraphernalia was visible at the meeting. Fact-checking sites, among them BBC Verify and Euro Verify, investigated, finding the allegations to be false and suggesting a politically motivated disinformation campaign by social media users.

Every corner of society is impacted by situations such as these. Trust comes into question, impacting thinking and decision-making across our working and personal lives.

A range of unlawful and harmful activities is now widespread, from national security incidents to deepfake fraud and privacy violations.

Individuals, organisations, schools, governments, universities, retailers and wider business are all at risk. The real impacts are wide-ranging, from major multi-million financial scams to crippling cyber-attacks on systems and infrastructure that leave business disrupted, if not frozen, until significant funds are transferred. The associated reputational damage can be significant, adding to the real cost of such events.

Currently, criminal gangs (often state-sponsored) are central to these activities, with cryptocurrencies sometimes enabling them. Combined with forecasts that AI itself will develop the capability to commit criminal acts without the need for human intervention, regulating for digital safety and security continues to create major challenges for lawmakers. And, with global competition on AI development and geopolitics influencing the approach, a willingness to achieve global regulation (with bite) seems some time away.

Countermeasures by those most able to control

Meanwhile, are tech companies and AI developers doing enough to prevent technology adoption by bad actors? Reports suggest some measures to monitor how tools are used. These include the publication of misinformation “policy rationales” aligned with transparency, and pushing out safety protocol reminders to users. Pausing ‘prompt’ functionality within generative AI platforms (where suspicious activity is detected) is viewed as effective, as illustrated when a researcher created a false passport using ChatGPT.

Some public figures have been more active in trying to drive change, challenging tech giants on their willingness to stop (or even slow down) the dissemination of fraudulent deepfake scams. Even a leading journalist, supported by his internationally renowned news organisation, has struggled to end the constant game of “whack-a-mole”.

Several tech giants claim to be taking proportionate steps and countermeasures, including facial recognition, detection tools and labelling content generated by AI. Whether this application of ‘good’ technology to counter ‘bad’ is sufficient remains to be seen.
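To make the labelling point a little more concrete, the sketch below (a crude illustration in Python, not any vendor’s actual tooling) scans a downloaded image for two provenance markers that labelled content often carries: the IPTC term “trainedAlgorithmicMedia”, used in metadata to flag synthetic media, and the “c2pa” identifier embedded by Content Credentials manifests. Finding neither proves nothing; it is a first-pass check rather than a detection tool, and the file name is purely hypothetical.

```python
from pathlib import Path

# Markers that commonly appear in provenance-labelled or AI-generated media:
#  - "trainedAlgorithmicMedia": the IPTC DigitalSourceType term for synthetic content
#  - "c2pa": the label used by Content Credentials (C2PA) provenance manifests
MARKERS = [b"trainedAlgorithmicMedia", b"c2pa"]

def provenance_markers(path: str) -> list[str]:
    """Return any known provenance markers found in the raw bytes of a file.

    This is a crude byte scan only - it does not cryptographically verify a
    C2PA manifest, and finding nothing does not make a file genuine.
    """
    data = Path(path).read_bytes()
    return [marker.decode() for marker in MARKERS if marker in data]

# Hypothetical file name - substitute the image you actually downloaded.
found = provenance_markers("downloaded_image.jpg")
if found:
    print("Provenance / AI-generation markers found:", ", ".join(found))
else:
    print("No markers found - apply the usual caution.")
```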

Collaborative efforts for better regulation are evident. The World Economic Forum’s Global Coalition for Digital Safety is seeking ways to reduce “harmful content and conduct” (including misinformation and exploitation) and promoting “responsible practices”. Its 40-plus global members include government, academia and tech industry leaders.

Seasoned AI practitioners are also publicly recognising the potential threat of ‘AI agents’ following reports that a non-profit is being created to help drive ‘honest’ AI development that can “spot rogue systems attempting to deceive humans”.

Towards a global regulatory framework

Whilst making headway with global regulation is fraught with challenges, regulating tech developers, deterring bad actors and imposing sanctions where necessary are being tackled at a national level in many countries and across the EU.

The EU’s Digital Services Act (DSA) has a clear remit to help “prevent illegal and harmful activities online and the spread of disinformation”, and the UK’s Online Safety Act 2023, which is being rolled out in three phases, is anticipated to be fully in force by 2026. Providing a set of laws to protect both children and adults, the Act applies to all online platforms which “have links to the UK” or have “a significant number of UK users”.

The European Commission has developed guidelines on prohibited practices to help limit AI-enabled activities which might manipulate, deceive or exploit vulnerabilities. Failure to comply could result in significant penalties. In time, the EU’s AI Office could play a key role enforcing rules related to AI systems development and deployment, including regulation, compliance, safety and policy coordination.

China has created a regulatory system to tackle deepfakes specifically. This includes the Provisions on the Administration of Deep Synthesis of Internet Information Services, published in 2023, and, in recent months, the Labelling Measures for Content Generated by Artificial Intelligence together with a mandatory national standard, Cybersecurity Technology – Labelling Method for Content Generated by Artificial Intelligence (effective September 2025).

All commendable steps. But without a strong, geopolitical will to embed digital safety and security, criminal gangs and their ever-evolving AI toolkits have the wherewithal to stay steps ahead.

How can our actions have impact?

In the UK alone, the latest government figures show that 75% of large businesses have been hit by some form of cyber breach or attack. Whether large or small, it is in the interests of every business, and the communities we engage with, to play a role.

Whilst we may not be able to counter the full wave of threats presented, we can implement important measures to head off these risks. This requires strong risk management foundations, with a robust structure of governance, checks and balances, and good communication. I’ve outlined some key steps to improving digital safety and security below.

1. Strong governance 

  • Adopting a long-term lens for planning and leadership vision across the ever-evolving range of technological risk.
  • Appointing key oversight roles – such as Chief AI Officer, Chief Information Officer, Chief Digital Safety Officer.
  • Keeping AI threats and opportunities on the agenda, from Board meetings to project status meetings.
  • Being prepared to troubleshoot, enabling nimble operations and effective response.
  • Reviewing the currency of information security arrangements on an ongoing basis.
  • Ensuring the integrity and reliability of technology systems and office infrastructure.
  • Creating a culture of digital risk awareness and ensuring that staff codes of practice and conduct are highly visible to all employees and clearly understood.
  • Championing good, two-way communication which encourages openness and an open-door policy for employees to flag concerns should they arise.
  • Analysing outcomes against safety and security plans and using insights to improve measures.

2. Going back to basics

  • Paying attention to the detail of daily operations and service delivery.
  • Investing in training in AI – both on the epic opportunities (whether in business, design, advisory, construction) and on digital threats and how to manage them.
  • Verifying data relied upon using established tools.
  • Checking any AI-generated material (whether clearly labelled or not).
  • Undertaking reviews and reality checks of key information.
  • Following the rule that four eyes are better than two – seek a second, or third, opinion.
  • Establishing protocols for paper-based reviews of key risk areas – such as financial transfers and major transactions.
  • Implementing strong approvals processes (a simple illustration follows this list).
  • Encouraging questioning within the organisation and not just taking things at face value. A book cannot always be judged by its cover, virtual or otherwise.
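As a simple illustration of the ‘four eyes’ and approvals points above, the sketch below (in Python, with invented names such as PaymentRequest – not a prescription for any particular system) shows a payment gate that refuses to release funds until two different named people have signed off.

```python
from dataclasses import dataclass, field

@dataclass
class PaymentRequest:
    """A payment that must be approved by two different people before release."""
    payee: str
    amount: float
    approvals: set[str] = field(default_factory=set)
    required_approvals: int = 2  # the "four eyes" rule

    def approve(self, approver: str) -> None:
        self.approvals.add(approver)  # a set ignores duplicate sign-offs by the same person

    def can_release(self) -> bool:
        return len(self.approvals) >= self.required_approvals

request = PaymentRequest(payee="New Supplier Ltd", amount=250_000.00)
request.approve("finance.manager")
print(request.can_release())   # False - one approval is never enough
request.approve("finance.manager")
print(request.can_release())   # False - the same person approving twice does not count
request.approve("finance.director")
print(request.can_release())   # True - two different people have signed off
```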

3. Being paranoid can be good (in moderation)!

  • Probing and encouraging suspicion.
  • Being confident in doing reality checks.
  • Verifying the provenance of something (or someone) – for example, checking the domain behind an unexpected email (see the sketch after this list).
  • Being alert to ‘hallucinations’ or fake images, messaging, voice or vision.
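On that provenance point, a small, hedged sketch: the Python snippet below uses the third-party dnspython package to check whether the domain behind an email address publishes the standard anti-spoofing records (SPF and DMARC). A domain with neither record set is easier to impersonate and deserves extra suspicion; the domain used here is just a placeholder.

```python
import dns.resolver  # third-party package: dnspython (pip install dnspython)

def email_auth_records(domain: str) -> dict[str, bool]:
    """Report whether a sender's domain publishes SPF and DMARC TXT records."""
    def has_txt(name: str, marker: str) -> bool:
        try:
            answers = dns.resolver.resolve(name, "TXT")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return False
        return any(marker in str(record) for record in answers)

    return {
        "spf": has_txt(domain, "v=spf1"),                  # SPF is published on the domain itself
        "dmarc": has_txt(f"_dmarc.{domain}", "v=DMARC1"),  # DMARC is published on _dmarc.<domain>
    }

# Placeholder domain - substitute the domain from the suspicious email address.
print(email_auth_records("example.com"))
```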

4. Reporting (internally and externally)

When experiencing a scam, report the incident to a relevant enforcement authority. For example, the UK’s National Cyber Security Centre (NCSC) not only allows individuals and businesses to report a cyber incident directly, but the public can also report scam activities such as suspicious emails, texts and websites or a “vulnerability with a UK government online service”. Similarly in the US, the Internet Crime Complaint Center (IC3) invites the public to file a complaint under the banner of: “the only way forward is together”.

5. Sharing learnings and working together

Anyone who has been ‘scammed’ will know how distressing it can be. In business these events are amplified, bringing major economic impacts as well as high stress levels for all involved, often over extended periods. As M&S CEO, Stuart Machin, recently said: “It’s in the pit of your stomach, the anxiety”.

Many, for obvious reasons, are reluctant to share their experiences. Optimism bias may also be prevalent, with some feeling they’re not at risk (or that the experience was isolated) and that therefore no action or lessons learned are required.

Sharing stories of what can happen to any of us is important, reducing the possibility of repeating the same experience and helping others stay alert and avoid falling into the same trap. Engaging with legislators on best practice and lowering the risk of a deepfake scam or major attack benefits everyone.

So (in the spirit of sharing), back to my vintage vinyl tale from earlier. This was, from my experience, too good to be true. A lesson learned and a reminder that not all things are what they seem however convincing, or real, they might at first appear.

Clare B Marshall is co-partner of the business consultancy 2MPy specialising in global business strategy.