The trust that binds us
What happens when you can no longer believe what you see nor what you hear?
What happens when all the data on our personal and professional lives becomes nothing but input for algorithms to freely consume and exploit? When machines claim ownership of our creations, our thoughts, and beyond? When corporations and tech overlords release the beasts with no regard for the potentially dangerous repercussions on humanity?
What will become of us — and the trust that binds us — when people misplace their faith in faceless algorithms?
Fueled by the data economy, AI has been growing at an exponential rate, with generative AI tools gaining huge popularity. The most notable is ChatGPT, created by OpenAI, which reached 100 million users in a matter of weeks. With this unprecedented level of adoption, governments, as well as private and public companies, have struggled to adapt. While the tools are still rapidly evolving, the results so far have been mixed.
At the recent Finovate Europe conference, I talked about generative AI’s potential for hallucination episodes, including how one tool claimed cow eggs were bigger than chicken eggs (except cow eggs do not exist, of course), confused me with Arlan Hamilton (whom I deeply respect), and created profiles of me with details that did not match reality.
But the problems extend well beyond hallucinations.
When you input the text from the last article that I wrote for FinTech Futures, the tool will claim that it wrote it. The same goes for pieces written by my good friends Leda Glyptis and Chris Skinner, as well as by well-known publications including The Economist and The Wall Street Journal. This is alarming. Not only are our own reputations at risk; such behaviour also raises questions about content ownership and intellectual property rights.
The lack of true citation and transparency has long been among the issues that many pointed out when such AI tools were rolled out. And now, with availability expanding to mobile devices, I can only imagine how much faster these tools will learn and spread.
This brings up the very question of accountability. Who should be held liable for the results that these algorithms generate and the damage that they can do? How best can we balance the profit motives of technology companies against their social responsibilities?
Unfortunately, it seems we won’t have to wait too long to find out.
According to the FTC, there has been an alarming increase in AI scams targeting vulnerable populations. Imagine receiving a call from a loved one claiming to be in financial trouble. Except that the voice, however convincing, was cloned by a machine, and so was the dialogue. To illustrate the potential danger such technology poses to our society, Senator Richard Blumenthal recently opened a Senate hearing with a fake voice recording: the script was written by ChatGPT and vocalised by an app trained on his Senate floor speeches.
What then should the financial services industry do? And what is our responsibility?
While we might not be able to completely stop AI hallucinations, we can, and must, do our part in ensuring that we are not putting people in harm's way when we implement advanced technologies in our products and services. This is especially true in financial services, where fairness, transparency, reliability, and explainability are paramount. It is one thing when generative AI misinterprets a joke; it is a vastly different and more serious matter when insurance claims or loan applications are denied and people's well-being is at stake.
“We can’t wait for the tech industry to self-regulate or lawmakers to figure out how to keep our customers safe,” comments Mia Dand, founder of Women in AI Ethics. “It’s time for the financial industry to lead on AI ethics and education. It’s our responsibility to ensure that we are ready to protect our organisations and our customers from the looming tsunami of AI-powered financial fraud and misinformation.”
Instead of moving fast and breaking things (a common tech start-up mantra), we must move slowly and make sure that we are not breaking things and eroding the trust that customers have in the system. Trust is the foundation of financial services; once lost, it is hard to regain. And without it, the system will simply cease to function.
We must ensure the proper guardrails are in place, not only on the application of AI by the financial services industry, but also on the development and deployment of AI by the tech industry. Ethical considerations must be front and centre. We might already be late to the party, but we will face even more dire consequences if we don't start now.
“In this age of unbridled AI hype, the financial industry has a unique opportunity to reclaim customer trust,” says Mia.
About the author
Theodora (Theo) Lau is the founder of Unconventional Ventures. She is the co-author of Beyond Good and co-host of One Vision, a podcast on fintech and innovation.
She is also a regular contributor for top industry events and publications, including Harvard Business Review and Nikkei Asian Review.