Generative artificial intelligence models like ChatGPT will change the way people interact with computers, much as Siri did more than a decade ago, says Bill Mark, who played a role in developing the voice assistant familiar to all Apple users.
Large Language Models (LLMs) have shown a new level of capability in terms of answering questions because they’re trained on such a huge amount of text, he says. But while the technology has enormous potential to help us work more efficiently, it needs to be harnessed with an understanding of its limitations.
“It finds relationships that no human would have ever thought of, and some of them are clearly wrong, so we have to be very careful,” says Mark, who heads the information and computing sciences division of Silicon Valley-based non-profit research body SRI International.
Humans see the world through their lived experiences and their understanding of people’s motivations. LLMs, though very powerful relationship-finders, lack access to this kind of knowledge and can suffer from what Mark calls “model confusion”, where the model draws on relationships that humans would see as inappropriate or simply wrong, in turn producing incorrect or misleading answers.
AI researchers are looking at ways to solve this problem, including retrofitting safeguards into systems so that they don’t do certain things, Mark says in conversation with Westpac’s chief technology officer David Walker.
Mark is a long-term practitioner and observer of artificial intelligence. Around 2000, his team at SRI developed a voice assistant to help pre-smartphone mobile users better locate services on their phone.
“The original idea was [to build] a ‘do’ engine, not a ‘search’ engine, because it was supposed to do things for people,” he recounts. They set up a spin-off company, which was quickly snapped up by Apple and adapted into the Siri we know today.
Mark never lost his desire to build a ‘do’ engine, and that has informed his more recent work in the development of private LLMs.
Unlike big general-purpose models such as ChatGPT, private LLMs limit the information the model has access to in order to provide a more targeted service.
One example is Kasisto, an SRI spin-off company, which is rolling out KAI-GPT, billed as the first large language model built specifically for the banking industry.
The KAI Answers platform harnesses the capability of KAI-GPT to help bankers locate, interpret, and understand information from a wide variety of sources, from policies and regulatory filings to web content and complex financial products.
Westpac is already partnering with Kasisto to develop digital assistant chatbots for the bank’s apps and online banking, and is also in the process of implementing KAI-GPT.
“What’s unique about KAI-GPT is that it’s a banking industry-specific LLM, which means it’s more accurate, safe and intuitive while delivering ChatGPT-like conversational experiences,” says Westpac’s Walker.
Walker also shared insights from a recent experiment by Westpac’s software developers, which showed that generative AI could lift their coding productivity by an average of 46 per cent. Academic studies in the US have produced similar results.
Meanwhile, Mark sees some of the concerns around the potential downsides of generative AI for broader society as overdone.
“We have to change our mindset. Instead of worrying if students are going to cheat on their exams by using ChatGPT, we should be teaching students how to use these tools to be more productive and effective.”
In a broader sense, learning how to prompt generative AI tools is likely to be a part of many job roles in the years to come, he adds.
Mark also offers a balanced assessment of the future prospects for software engineers, and others, whose jobs are impacted by the technology.
“As a tool it will increase their productivity. Does that mean we need fewer engineers, or that engineers will be able to do more wonderful, creative things, and bring in a lot more value?”
While safeguards are essential to ensure generative AI is used effectively, the onus is also on individuals to recognise its inherent limitations, and adapt accordingly, says Mark.
“I have faith in humankind and our innovative, creative ability to treat these things as tools and use them to do greater things.”