Catriona Wallace has a blunt message for organisations: don’t wait for governments to dictate ethics frameworks to govern artificial intelligence and tackle diversity challenges that flow from its increasing use.
Dr Wallace, founder of Australian Securities Exchange-listed Flamingo AI, said while the federal government was rightly pursuing the nation’s first AI ethics framework, the reality was organisations would have to constantly evolve because of the sheer pace of AI development, with billions of dollars flowing into the space. The CSIRO’s Data61 recently consulted on the framework – which considers the principles of fairness, transparency and explainability, privacy, and contestability – after the government provided $29.9 million in the Budget to “support the responsible development of AI”.
Dr Wallace said the principles being explored made sense, but urged organisations to also develop their own AI frameworks that consider the ethics, human rights and diversity impacts of AI, to ensure that divisions in society are not perpetuated through code. She cited predictions that 90 per cent of the jobs set to be automated are held by women and minority groups.
“It's a critical topic, I'm not sure that there's enough being done yet,” she said.
“This sector is moving so fast that right now, $89 billion worth of investment is going into AI companies this year alone. The technology is far faster than the law or the regulations. There's a whole challenge around are we regulating the code, are we regulating algorithms?
“I speak to government regularly about this and the challenges it poses but it brings it back onto the individual organisations themselves, plus the vendors who are providing AI, to really provide leadership now…to a degree the train has already left the station.”
Sydney-founded Flamingo AI provides conversational AI solutions to businesses, such as chatbots that utilise machine learning (ML), to amplify legacy systems, augment human roles and “make employees much better at what they do”. At its first-half results announcement in May, Westpac became the latest major company to unveil a new chatbot, known as Red, which utilises IBM Watson AI technology and is available to almost 5 million digital customers.
Dr Wallace, who started her career in the police force, said the workforce should prepare for the rise of the “Human Assisted Virtual Assistant” (HAVA) and the “Human Assisted Machine Assisted worker” (HAMA), resulting in employees “going into a work environment where their buddy or their teammate is a robot – a machine”. But along with improved productivity and customer outcomes from AI comes a human cost. In the next two years, Dr Wallace predicts that 1.8 million jobs will be removed from the workforce while 2.3 million new jobs will be created.
But not all of the displaced workers will move into the new roles, posing challenges for organisations and people alike, she said. She added that call centres were particularly vulnerable to AI, and that training and education had a key role to play in the transition.
“(There’s) a huge responsibility to start thinking about what are the ethics, what are the human rights frameworks and how are we not going to just code in diversity problems?” she said. “At the moment, one of the real challenges we have is that 90 per cent of the coding that's done of these machines is done by males, and typically young males.”
The governance of AI is getting greater attention globally. In March, the Bank of England and the Financial Conduct Authority surveyed more than 200 financial services firms on AI/ML adoption, noting ethical and other challenges around how data is used and the role of people in the processes.
“By and large, firms reported that, properly used, AI and ML would lower risks – most notably, for example, in anti-money laundering, KYC (know your customer) and retail credit risk assessment. But some firms acknowledged that, incorrectly used, AI and ML techniques could give rise to new, complex risk types - and that could imply new challenges for boards and management,” the BOE’s James Proudman said in a speech last month.
Dr Wallace said a key challenge with ML was that it is built through supervised training, in which a human effectively acts as a teacher by feeding the system labelled data, raising the issue of conscious or unconscious bias being passed on. More broadly, she noted that research group Gartner was predicting 85 per cent of AI outcomes in the next two years would lead to errors and mistakes.
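The supervised-training point can be sketched with a toy example (all names, features and numbers here are hypothetical, not drawn from the article or from any real system): a model never sees a biased rule directly, yet reproduces it because the biased human labels become its training data.

```python
from collections import Counter

# Hypothetical candidate features: (years_experience, group).
# 'A' stands in for a majority group, 'B' for a minority group.
candidates = [(3, 'A'), (3, 'B'), (5, 'A'), (5, 'B'), (7, 'A'), (7, 'B')]

def biased_teacher(candidate):
    """A human labeller who, consciously or not, only hires from group 'A'."""
    years, group = candidate
    return 'hire' if years >= 5 and group == 'A' else 'reject'

# Supervised training: the teacher's labels become the training set.
training_data = [(c, biased_teacher(c)) for c in candidates]

def predict(candidate):
    """Toy nearest-neighbour classifier: copy the label of the closest
    training examples with the same group. The teacher's bias rides along
    in the labels even though no biased rule is written here."""
    years, group = candidate
    closest = min(abs(years - ty) for (ty, _), _ in training_data)
    labels = [label for (ty, tg), label in training_data
              if abs(years - ty) == closest and tg == group]
    return Counter(labels).most_common(1)[0][0]

# Two equally experienced candidates get different outcomes purely
# because the bias was encoded in the training labels.
print(predict((6, 'A')))  # hire
print(predict((6, 'B')))  # reject
```

The sketch is deliberately minimal, but it mirrors the mechanism Dr Wallace describes: the discrimination lives in the labelled data, so auditing the model's code alone would never reveal it.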
“The diversity challenge is real. It's already existing,” she said, adding that while employing women engineers wasn’t always easy “it wasn’t impossible”.
“If you Google the best CEOs in the world … the images will be 90 per cent white males. If you Googled best dressed people in the world, it'll be women in evening gowns.
“So the way we think about challenging that is … making sure the data that is being used to code the machines is free from bias…and (it’s about) the teams of people – if the women engineers are not available, have other people from diverse genders and diverse men from minority groups to be representative on the teams that are responsible for AI development.”