A leading figure in the City of London has warned that the rapid deployment of artificial intelligence across financial services has outpaced the ethical frameworks needed to control it, likening the industry’s approach to “building a Ferrari without brakes.”
Delivering the keynote address this morning at Funds Europe’s annual FundsTech conference in London, Professor Michael Mainelli, chairman of Z/Yen Group, said firms had prioritised performance, speed and scale while neglecting values and accountability.
“We have built the most powerful cognitive tools in human history, and we shipped them without an ethics manual,” he said, adding that many organisations had deferred responsibility to regulators rather than embedding ethics into engineering and operations.
Mainelli, a former Lord Mayor of the City of London, cautioned that AI systems, driven by historical data, inevitably inherit bias. Attempts to “remove” bias are flawed, he argued, because all datasets reflect subjective choices. “Your model isn’t learning ethics… it’s learning how to be a highly efficient mirror of a digital sewer,” he said.
The consequences, he warned, are becoming more immediate. With financial markets moving toward faster settlement cycles, such as T+1, flawed algorithms could execute trades and cause damage before regulators can intervene. “We are building systems that outrun our ability to say stop,” he said.
He also dismissed the widely cited safeguard of keeping a “human in the loop” as ineffective. Studies in sectors such as defence and aviation show that, over time, humans tend to defer to automated decisions, approving them faster and with less scrutiny. “The human in the loop is a complete fiction,” he said.
Mainelli argued that the financial sector must adopt structured ethical frameworks similar to those used in medicine, based on principles such as avoiding harm and ensuring fairness. He highlighted emerging standards like ISO 42001, which focuses on AI risk management and accountability, as well as broader “community-enforced standards markets” that underpin sectors from aviation to food safety.
Without such systems, he warned, the gap between an AI system’s objectives and societal values could lead to serious harm. He cited past scandals, including algorithm-driven errors in public administration, as evidence of what can go wrong when human oversight fails.
Looking ahead, Mainelli called for a shift from what he described as the “bonkers approach” of training AI on vast, unfiltered internet data toward more intentional design grounded in human values. He also outlined new research aimed at embedding ethical reasoning directly into AI systems.
“Ethics is not a garnish,” he concluded. “It’s the main course. If we don’t install a conscience in these systems, we risk them making decisions before we’ve decided what matters.”