
Getting AI right needs more than a legal lens

07:19am November 15 2022

There are good reasons to discuss how to police responsible artificial intelligence use. (Getty)

I'm sure you know this feeling.

You've had an over-the-fence chat with your neighbour about how much you love their Pomeranian pooches. Then, suddenly, your social media feeds are swamped with ads and videos featuring Pomeranians.

If you actually do love Pomeranians, how good is that? 

But for many of us, this seemingly magical and unsolicited connectivity can still feel a little bit creepy.

Of course, it's unlikely to be a coincidence. 

It’s more likely to be one of the myriad ways artificial intelligence – or AI – is being put to work these days in so many facets of our lives; in this case, the AI-powered digital assistant in your phone.

The uses for AI are multiplying rapidly – from self-driving cars to virtual travel booking agents to healthcare management, to name a few. The banking sector has also ramped up its use over the past five years, with AI now helping with all sorts of tasks, including insurance claims assessment, fraud detection and mortgage applications, and powering customer service chatbots.

But never far from the conversation about how awesome, fast and personalised AI has made some services is the debate over where to draw the line on its responsible and ethical application (and not making it too creepy).

In fact, the more advanced AI capability becomes, the more worried consumers and communities are about how it – and their personal data – is being used.

This is certainly a key topic for us here at Westpac. 

We’ve spent a lot of time considering not only how we continue to leverage AI to improve services, but also the approach we take to applying AI responsibly, managing the associated risks and shaping and complying with legal frameworks as they inevitably emerge. 

Regulation of AI is high on the agenda of many lawmakers around the world. 

While at least 60 countries have developed some form of AI policy or ethical framework, according to the OECD AI Policy Observatory, the European Union is arguably the most advanced in terms of regulation. The EU Artificial Intelligence Act is expected to come into force next year, aimed at driving an ecosystem of trustworthy AI for EU citizens and organisations by setting up legal frameworks to protect consumer rights. 

Elsewhere, Canada recently tabled its Artificial Intelligence and Data Act, China passed a regulation governing companies’ use of algorithms earlier this year and the United States is looking at an AI bill of rights.

In Australia, the government has set out a voluntary AI ethics framework, to date taking a “soft law” approach to regulation, thereby leaving industries – including banking – to effectively self-regulate. Given where the world is moving, we believe regulation in Australia will inevitably emerge, both to encourage the adoption of AI and to manage the associated risks and build trust among consumers.

Banks are well positioned to play a role in developing such regulation, and we are in advanced conversations about this with local regulators and industry bodies.

At Westpac, while we will continue to drive these conversations, we don’t view responsible use of AI as merely a regulatory or legal obligation.

We view it as a moral obligation. 

As a bank, we have always had a duty to customers, spelt out in strict policies, to protect the privacy of their data and govern its use.

This obligation now sits at the heart of a set of AI principles we created in 2018. 

It’s the same values-based approach that we apply not only to responsible use of AI, but across all areas of our conduct and operations. 

The most crucial aspect of these principles is ethics. Even if a customer has given us consent to use their data, it must not be misused. It’s not realistic or practical for any organisation to expect customers to be in a position where they can authorise every application of AI. So, it’s up to us to be ethical, to only use AI when it’s the right thing to do, and to ensure it’s used fairly and free from bias. 

This is what our customers expect of us, and what guides us.

While AI does throw up unique complexities, the way we manage it is no different from how we manage our responsibilities in any other area of our operations.

There is no doubt there are very good reasons for the recent explosion of global interest in responsible AI and regulation. 

Because at its best, AI can bring incredible benefits to our everyday lives.

But when exploring new territory, there are many unknowns, so we must do everything in our power to avoid missteps that would only erode consumers’ trust.

 

Meggy Chung joined Westpac in 2020 as general manager for data platforms, responsible for the strategy, execution and operational leadership of data management. Previously, Meggy spent more than a decade in senior data and technology infrastructure roles at Citibank in Singapore and Barclays Bank in the UK and, prior to that, held senior roles at British Sky Broadcasting, Accenture and Marsh McLennan. Meggy is passionate about inclusion and diversity and has played an active role in driving this agenda throughout her career.
