If 2023 was the year of AI hype, will 2024 be the year of AI governance and responsibility?
We ended 2023 with AI dominating our headlines. And as we begin the new year, the AI story continues.
The recent CES (Consumer Electronics Show) in Las Vegas was a great example, where practically everything got an AI label, from pillows and mirrors to vacuum cleaners and washing machines. I thought smart toothbrushes from a few years ago were silly enough. Little did I know that would be nothing compared to what we’ve experienced in the past 12 months.
According to Crunchbase data, generative AI and AI-related start-ups raised approximately $50 billion in 2023, including the likes of OpenAI and Anthropic.
It does appear that there are still quite a few bridges yet to be crossed.
Various mishaps with AI over the past year have given us a glimpse of the potential problems that can arise when AI hallucinates. Those of us who tried generative AI in the early days can likely recall instances when AI chatbots made up ‘facts’ when responding to prompts. Remember the cow egg story that I wrote about previously? Or the ‘debate’ on whether Australia actually exists?
It is one thing to have AI provide the wrong recommendation on books or music. It is much more damaging if the AI chatbot makes factual errors in response to something as serious as election questions, for example, especially if users treat the outputs of these chatbots as facts.
And what about instances where inaccuracies are worse in non-English languages, such as German and French? This is especially concerning because it suggests that the dominance of English in AI chatbots is creating a language bias.
This is, sadly, hardly surprising. The quality of the outputs depends largely on the data the models are fed. If AI models are trained primarily on English-language data or with a US-centric viewpoint, for example, the training data cannot possibly represent the much larger and more diverse world that we live in; after all, the US accounts for less than 5% of the world’s population.
And how will the Global South, which represents 85% of the world’s population, benefit from this emerging technology in an increasingly digital society if English is not their primary language and their cultures are not represented? And what about those who are not digitally connected, or not digitally literate?
Whose interests does the technology evolution truly serve and who is this technology truly for?
Trust takes years to build, but it can be lost in minutes. In a world where you can no longer believe what you see nor what you hear, what does that do to trust — the basis of any human relationship?
Sadly, AI hallucinations are not the only trust issue we have to contend with. Just as social media can be used to create and spread fake news, AI can weaponise that action and take it to the next level, allowing people to create and share deepfake audio and video faster, more cheaply, and more believably.
Imagine cybercriminals using a deepfake voice to impersonate someone and scam their unsuspecting victims, as in the real-world case of a senior executive at a UK-based energy firm who received a call from someone he thought was his boss, instructing him to transfer money to a supplier. Or the banker who received a phone call from a supposed investor, except that it came from a scammer who used AI to clone the investor’s voice in an attempt to trick the banker into moving money elsewhere.
With voice cloning technologies becoming more readily available and with unparalleled precision, how can we best combat fraud and protect people from harm?
Responsibility and accountability
How then should we benefit from generative AI technologies? Where do we draw the lines, and when things go wrong, whose responsibility is it?
AI is one of several potential risks to the financial markets singled out by the Financial Stability Oversight Council in its latest annual report in 2023, which cited the possibility that AI systems with explainability challenges could produce or mask biased or inaccurate results, negatively impacting consumer protection considerations such as fair lending.
This was echoed by the Consumer Financial Protection Bureau (CFPB), which cited the part regulators must play in safeguarding the financial system against the risks posed by large technology firms serving as cloud infrastructure providers, and by the large pools of data that power new uses of AI.
While fair lending rules have been in place for a while now, the use of AI could exacerbate the likelihood of disparate outcomes if we don’t pay attention. With vast amounts of data from diverse sources being fed into increasingly complex models, financial institutions and fintechs must put proper controls in place: to maintain compliance, privacy, and security; to look for errors and biases in the data that might lead to unintended consequences; and to ensure adequate protection when receiving or sharing data with external parties for modelling.
The increased coupling between technology and financial services is transforming the risk landscape as we know it. Are we prepared?
What matters most
I read recently that using GPT-4 to summarise an email is like getting a Lamborghini to deliver a pizza, because of the massive compute power that the AI tool needs.
At the end of the day, while it is important to innovate and take advantage of the positive potential of AI, we must also be aware of the risks and harm that it can bring and foster a pathway that will prioritise human well-being and equity. And as the saying goes, just because we can do it, doesn’t mean we should.
AI will change how we work, earn, and live. We must ensure that issues around privacy, fairness, and biases are addressed, and safeguards are put in place so that more people can realise the benefits of the technology. Ultimately, it’s about what the technology can deliver, and not the shiny new toy itself.
About the author
Theodora Lau is the founder of Unconventional Ventures, a public speaker, and an advisor. She is the co-author of The Metaverse Economy (2023) and Beyond Good (2021), and host of One Vision, a podcast on fintech and innovation. She was named one of American Banker’s Most Influential Women in FinTech in 2023. She is also a regular contributor and commentator for top industry events and publications, including BBC News and Finovate.