About a-team Marketing Services
The knowledge platform for the financial technology industry

A-Team Insight Blogs

Data Warning After UK Signals New Law Covering AI Use


Financial institutions operating in the UK must begin ensuring the integrity of their data estates after the newly elected government signalled plans to forge a potentially far-reaching AI bill.

Leaders of two large data management companies said that any new technology law could usher in powers of intervention if AI models and processes are deemed likely to endanger individuals or companies. Only with robust data management in place will organisations be able to ensure they don’t breach any new law.

Greg Hanson, group vice president and head of EMEA North sales at Informatica, and Arun Kumar, UK regional director at ManageEngine, offered their thoughts after the government of new Prime Minister Keir Starmer revealed its legislative programme for the next parliament.

While the announcement made no mention of a full AI bill, the plans outlined in the King’s Speech, delivered by King Charles at the opening of parliament last week, said the UK will seek to “establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models”.

Bad Data, Bad Outcomes

“Businesses must now brace for greater intervention and be prepared to demonstrate how they are protecting the integrity of AI systems and large language models,” said Hanson. “Developing robust foundations and controls for AI tools is a good starting point.”

Hanson echoed a view common among technologists and users that AI can only be useful if it is fed good data. Without that, downstream processes will be erroneous and potentially catastrophic to workflows and operations.

“Bad data could ultimately risk bad outcomes, so organisations need to have full transparency of the data used to train AI models,” he added. “And just as importantly, businesses need to understand the decisions AI models are making and why.”

With more institutions putting critical parts of their data processes in the hands of AI technologies, policy makers are worried that miscalculations will snowball into cascading failures with negative impacts on people and businesses. Recent AI malfunctions have already led to companies paying damages to affected parties. In February, for instance, Air Canada was ordered to compensate a passenger who was given inaccurate information by an AI-powered chatbot.

Hanson said that organisations should begin by ensuring that machines don’t make decisions without human input.

“It’s critical that AI is designed, guided and interpreted from a human perspective,” he said, offering as an example careful consideration about whether large language models have been trained on “bias-free, inclusive data or whether AI systems can account for a diverse range of emotional responses”.

“These are important considerations that will help manage the wider social risks and implications it brings, allowing businesses to tackle some of the spikier challenges that generative AI poses so its transformative powers can be realised,” he said.

Driving Improvements

Kumar, whose company ManageEngine is the enterprise IT management division of Zoho Corporation, said a well-crafted bill would do more than simply list what organisations should not do.

“This bill promises to go a long way in helping to tackle the risks that come from a lack of specialised knowledge around this relatively new technology,” he said. Such a bill “could give businesses guidance on how to prioritise trust and safety, introducing essential guard rails to ensure the safe development and usage of AI”.

Pointing to recent ManageEngine research showing that 45% of IT professionals have only a basic understanding of generative AI technologies and that most have no governance frameworks for AI implementation, he said a bill would provide the confidence needed for AI systems to improve.

“Introducing legislation on safety and control mechanisms, such as a requirement to protect the integrity of testing data, will help guide the use of AI so businesses can confidently use it to drive business growth,” he said.
