‘There’s a risk in missing the AI opportunity’


Balancing the risks of artificial intelligence (AI) with the opportunities it can bring for our customers and employees is a critical part of our governance of the technology. Steve Albrecht, Chair of our Artificial Intelligence Review Committee and General Counsel of Digital Business Services at HSBC, explains how we do it.

We know AI isn’t perfect

AI, and particularly generative AI, can make mistakes. That’s why we need a strong control infrastructure surrounding its use at HSBC.

We want to innovate and be competitive within financial services, but we want to do so in a controlled way that enables us to keep learning and improving.

To make good decisions about AI, we need a range of expertise at the table

First, we need technology experts, who understand how these tools work. Second, we need risk and control teams, including legal and regulatory specialists – the rules and laws governing AI will change fast, and the requirements will differ across jurisdictions. Third, we need subject-matter experts who understand how AI will work in their market or business line.

We also need people with sound judgement, who can look at these tools objectively and assess how they will be perceived and whether we are meeting our ethical standards.

A centralised approach allows us to manage risks

We follow a ‘hub-and-spoke’ model when it comes to the development of our generative AI tools. New ideas are often explored within the individual business lines and functions – or ‘spokes’ – around the bank.

These teams prioritise their ideas and submit them to our central ‘hub’. This hub comprises our AI Centre of Excellence – a network of colleagues with expertise in AI – and HSBC Group-level teams who provide oversight from a technology and risk perspective.

Ultimately, novel AI use cases are reviewed and approved by our AI Review Committee, a group of senior executives with diverse expertise.


Hundreds of AI use cases are being tested around the bank…

To get the best out of generative AI in the safest and most effective way, we have to be willing to experiment. There are many AI-based tools and products currently going through the initial experimentation – or proof-of-concept – phase within the bank, where they are tested in a safe and controlled environment.

…But they won’t all make it to production

When a generative AI or other novel use case is preparing to move to its pilot phase, it enters a governance review process.

The pilot phase is tightly controlled, with a limited number of users, so we can test the tools to ensure they are working properly and delivering their expected outcomes. Use cases that we’re piloting include coding assistants and chatbots that can help our employees with specific tasks – we think these have high potential.

The final stage is scaled production, and from this point on we can monitor live performance over time to ensure our AI tools are operating properly.

Focused AI tools are one way forward

There’s a science to building these technologies, and there’s a science to how we control them.

Multi-purpose tools like ChatGPT, which can handle a range of tasks, can be very valuable. But we think a better place for HSBC to start is to focus these tools and embed them into specific processes or workflows – such as summarising a set number of documents and assisting with specific queries from our employees. We can control, study and learn from a narrow use case much more effectively.
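As an illustration of what embedding generative AI into a narrow workflow might look like, here is a minimal sketch in Python. The call_model function, the prompt template and the batch limit are hypothetical placeholders, not HSBC systems or any vendor’s API; the point is that the task, the instruction and the size of the input are all fixed in advance.

```python
# Minimal sketch of a "focused" generative AI workflow: instead of exposing a
# general-purpose chat interface, the model is wrapped in one narrow task
# (summarising a bounded batch of documents) with a fixed prompt template.
# `call_model` is a hypothetical stand-in for whatever approved model endpoint
# is used; it is not an HSBC or vendor API.

from typing import Callable, List

SUMMARY_PROMPT = (
    "Summarise the following document in at most five bullet points. "
    "Use only information contained in the document.\n\n{document}"
)

def summarise_documents(
    documents: List[str],
    call_model: Callable[[str], str],
    max_documents: int = 10,
) -> List[str]:
    """Summarise a fixed, bounded set of documents with a single prompt template."""
    if len(documents) > max_documents:
        raise ValueError(f"This workflow is limited to {max_documents} documents per batch")
    return [call_model(SUMMARY_PROMPT.format(document=doc)) for doc in documents]

if __name__ == "__main__":
    # Stub model for illustration only; a real deployment would call an approved endpoint.
    stub_model = lambda prompt: "- Example summary bullet"
    print(summarise_documents(["Policy update...", "Meeting notes..."], stub_model))
```

Keeping the prompt, the task and the batch size fixed is what makes a use case narrow enough to control, study and learn from.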

There’s a risk in missing the opportunity

We don’t want to get left behind. We need to know what’s going on in the wider industry and meet our customers’ expectations, so we can provide the full level of service that they’ve come to expect in other parts of their lives.

We’re speaking to many external partners, including large tech providers and small start-ups, to understand what’s cutting edge and what’s happening elsewhere in the industry.

We moved our banking services onto mobile apps because the world moved in that direction. I think the same thing will happen in the world of AI.

Human intervention will always be vital

Generative AI is, by design, more creative than traditional AI. It is ultimately making predictions, which means it can sometimes get an answer wrong. That’s why we need a ‘human in the loop’ and other controls to help us spot errors.
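To make ‘human in the loop’ concrete, here is a minimal, hypothetical sketch of what such a control could look like in code: generated answers pass an automated check and then require a reviewer’s sign-off before release. The checks, names and thresholds are illustrative assumptions, not a description of HSBC’s actual controls.

```python
# Minimal sketch of a "human in the loop" control, under illustrative assumptions:
# generated answers are never released directly; they must pass an automated check
# and then be approved by a human reviewer. The check and the reviewer interface
# are placeholders for whatever real controls an organisation puts in place.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Draft:
    question: str
    answer: str

def fails_automated_checks(draft: Draft) -> bool:
    """Illustrative automated control: block drafts containing over-confident language."""
    risky_terms = ("guarantee", "cannot be wrong", "always correct")
    return any(term in draft.answer.lower() for term in risky_terms)

def release(draft: Draft, reviewer_approves: Callable[[Draft], bool]) -> Optional[str]:
    """Release an answer only after automated checks and a human reviewer's sign-off."""
    if fails_automated_checks(draft):
        return None  # Withheld automatically before a human ever sees it.
    if not reviewer_approves(draft):
        return None  # Withheld by the human reviewer.
    return draft.answer

if __name__ == "__main__":
    draft = Draft("When is the payment cut-off?", "Payments submitted before 5pm are processed the same day.")
    print(release(draft, reviewer_approves=lambda d: True))
```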

We all have a part to play in making sure we’re using these tools responsibly. And as they become more powerful, I believe strong human controls will become even more important, not less.
