If we don’t get real about AI ethics, that’s on us

(Neil Raden, 1952 – 2024)

On May 18, 2024, Neil Raden, an outspoken advocate for a different kind of AI ethics, passed away. Neil was a long-time diginomica contributor; for the past four years, I was his diginomica editor. As I look back, Neil would not have wanted to dwell on the enterprise dramas we all find ourselves drawn into at times – himself included.

But Neil would be profoundly disappointed if we lost track of the ideas he was pursuing – and the necessary conversations he wanted us to have. 

Neil’s editorial range was broad – he wrote substantive pieces on everything from quantum computing to the biased algorithms that plague the insurance industry. One thing that separated Neil from many “analysts”: he had a deep mathematical background. It was astonishing how many industry jobs Neil could speak to, often from personal experience. This put Neil in an authoritative position to analyze the flaws in AI architectures, while calling out enterprise AI pitfalls like messy data. But it was the ethical naivety/elitism/hypocrisy around AI that drove Neil into searing prose. As I wrote on his friend Cindy Howson’s LinkedIn page: 

I was Neil’s editor at diginomica the last few years and he wrote some vital work, particularly on the architectural pitfalls of gen AI/AI ethics but also on the machinery of algorithmic discrimination. He spoke the truth and was not afraid of the fallout… I need some time to absorb this; we messaged frequently this spring about his various health adversities and a very difficult bout with covid. His main concern and preoccupation was for his family that he adored and his absolute determination not to leave them. 

For my part I feel we have lost his voice at the worst time, amidst profound changes he was able to parse at a technical/historical depth and moral conviction few could match. We’ll have to carry on and redouble our own efforts with his brilliance as a very high bar.

There is no way to know what Neil was going to say next; checking my inbox will not be as spicy and unpredictable now that he is gone. But we know enough from Neil’s body of work to pick out essential themes. I could do this on any number of topics, but Neil burned to peel back the layers of AI ethics. Despite the lip service he eviscerated on many occasions, there may not be a more important topic – so let’s pull out a few of his crucial pieces/concepts.

Ethical debt is now a major problem for software development

Neil believed that AI ethics could not take place in bucolic symposiums like Davos. If we want to save “ethical AI” from joining the infamous list of enterprise oxymorons, it must be integrated into the development process. This led to one of Neil’s most intriguing/notable articles: Yes, ethical debt is a problem for AI software development:

Much like technical debt, where expedited solutions and shortcuts can lead to more significant problems down the line, ethical debt refers to the compromises made in ethical considerations during the developmental phases of AI technologies. In other words, it’s time to “move fast and break things,” to quote Mark Zuckerberg’s old motto. Of course, when you break things, you might have to fix them later — at a cost.

Neil warned: 

The false promise of “solving” bias computationally obscures the larger issue: bias is pervasive. But there is an industry-wide strategic switcheroo lurking here – a beguiling diversion. Instead of being a problem, bias becomes part of the broad picture of AI innovation. Fear not, AI will solve the problem. It’s like Hunter S. Thompson’s definition of Gonzo Journalism: start a fire and report on it.

The real risk is kicking the can down the road: 

The rapid innovation in AI technology harbors the risk of accumulating ethical debt, a phenomenon with an array of reasons ranging from a focused race to market dominance to unintended biases and consequences, a lack of accountability, and the undermining of societal norms and structures.

Stop hand-wringing about AI bias – and start measuring AI fairness

Neil wasn’t a fan of the virtue signaling around confronting AI bias. In How can we measure fairness beyond bias, discrimination and other undesirable effects in AI?, he explained how AI systems lead to ‘black box’ problems:

When a claims adjuster denies a cancer patient’s claim for drug therapy, they likely feel at least the slightest tinge of remorse. When a rules-based system makes that decision, there is undoubtedly no remorse, but if that decision is questioned, how that decision was made can be discovered through a trace of the rule firings. But when that decision is made by an inferencing algorithm generated by a machine learning model, there is neither remorse, nor code to trace. There is no code. In this case, it is impossible to determine if the decision was fair. 
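Neil’s contrast is worth making concrete. Here is a minimal sketch – mine, not Neil’s, with hypothetical rule names and claim fields – of why a rules-based denial can be audited through its rule firings, while a learned model offers only an output:

    # A minimal illustration (hypothetical rules and fields) of the contrast above:
    # a rules-based decision carries a trace of rule firings; a fitted model does not.

    def rules_based_decision(claim: dict) -> tuple[str, list[str]]:
        """Decide a claim and return the trace of rule firings."""
        trace = []
        if claim["therapy"] not in claim["formulary"]:
            trace.append("R1: therapy not on approved formulary -> deny")
            return "deny", trace
        if not claim["prior_authorization"]:
            trace.append("R2: no prior authorization on file -> deny")
            return "deny", trace
        trace.append("R3: default -> approve")
        return "approve", trace

    decision, trace = rules_based_decision({
        "therapy": "drug-X",
        "formulary": ["drug-A", "drug-B"],
        "prior_authorization": True,
    })
    print(decision)          # deny
    print("\n".join(trace))  # R1: therapy not on approved formulary -> deny

    # By contrast, a machine learning model exposes no comparable trace:
    # decision = model.predict(claim_features)  # a score, with no rule firings to audit

When the denial is questioned, the trace answers “why.” With an inferencing algorithm, as Neil says, there is no code to trace.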

What should we do about it? 

Building confidence in AI delegated or algorithm-based decisions requires three elements:

  • transparency in design and implementation
  • explaining how a decision was reached
  • accountability for its effects

In this context, performing and documenting a fairness analysis and the actions taken to address the findings can be of great use.
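What might a documented fairness analysis look like in practice? Here is a minimal sketch of my own – not from Neil’s article – using one simple measure, demographic parity, on hypothetical decision data:

    # A minimal sketch of one documented fairness check (demographic parity),
    # assuming approval rates should be comparable across groups.
    # The data and the group labels below are hypothetical, for illustration only.

    def demographic_parity_gap(decisions, groups):
        """Return the gap between the highest and lowest approval rate by group."""
        rates = {}
        for g in set(groups):
            members = [d for d, grp in zip(decisions, groups) if grp == g]
            rates[g] = sum(members) / len(members)  # share of positive decisions
        return max(rates.values()) - min(rates.values()), rates

    # 1 = claim approved, 0 = claim denied
    decisions = [1, 1, 1, 1, 0, 1, 0, 0, 1, 0]
    groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

    gap, rates = demographic_parity_gap(decisions, groups)
    print(rates)  # approval rate per group: A = 0.8, B = 0.4
    print(gap)    # 0.4 -- a gap this size would be flagged for review

A single number like this does not settle whether a system is fair – demographic parity is one measure among many – but it gives reviewers something concrete to document, question, and act on, which is the practical footing Neil argued for.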

Bias is a rabbit hole. AI fairness, on the other hand, can be measured and reviewed: 

Be careful when using the term “bias” because it has so many meanings. In AI today, those meanings are mostly negative, but not entirely. Fairness is a far more ineffable quality, but in the end, it’s the most important one.

Why is AI ethics failing? 

Neil took a number of swings at this vexing topic, including Why are we failing at the ethics of AI? A critical review. Calling out “ivory tower input,” Neil wrote:

The assumption behind lecturing people about ethics is that they don’t know right from wrong. Most do. They just don’t know what to do about it.

The predicaments of AI follow a historical pattern: 

My article, AI and Human Rights, delves into this: each burst of technology often has devastating effects on human rights.

AI ethics is too academic – project concerns are practical

Neil saw plenty of enterprise projects on his watch; he had no patience for an overly academic approach (It’s time for AI ethics to get real). Beware “ethics washing”:

This is the problem known as ‘ethics washing’ – creating a superficially reassuring but illusory sense that ethical issues are being adequately addressed to justify pressing forward with systems that end up deepening current patterns.

If we don’t change the AI ethics conversation, Neil doesn’t like our chances: 

If someone doesn’t see the need to apply some moral thinking to their work, they shouldn’t be developing decisioning systems. AI has enormous potential to be weaponized in ways that threaten privacy, regulations, the stability of your business, and your reputation, or to be deliberately maleficent. What practitioners need are offerings that stress the practical – what to do, what not to do, and how to decide when faced with uncertainty.

Ethics in the real world is about policy – and power structures

Neil documented the struggle to create enlightened AI policies. With a couple of exceptions, he wasn’t impressed. Most policies, he argued, didn’t reckon with structural power (We’re stuck in the AI ethics fishbowl – so how do we get out?): 

Engaging with the dangers of scale that AI-related algorithmic systems pose requires understanding and accounting for the underlying power structures. This is especially true where AI systems are adapted to work within ongoing systems of power and oppression to scale the effects of those systems efficiently.

AI ethics needs a broader, historical and structural understanding of the challenges we face.

The work here is unfinished. As Neil conceded: “The study of ethics is discursive – it has yet to give a final answer.” To get there, we must do better: 

All the talk about ethics is simply that: talk. The bulk of discussion on this topic is a giant fishbowl or echo chamber.   

That’s unsparing, but is it wrong? Neil was a stern critic, but he did offer up solutions: The last mile in AI deployment – answering the top questions. He also issued praise where he felt it was due, which led to surprises: NIST’s AI risk management framework – is this a way forward for AI ethics, and trustworthy AI?

It’s worth noting that Neil had some degree of wonder and optimism about generative AI, even while dissecting its technical limitations. Some pundits have made a social media cottage industry out of dismissing generative AI as hype machine fabrication. Steeped in the “AI winters” of tech history, Neil understood the roots of the gen AI accomplishment, while remaining concerned about problematic outcomes.

Neil was dismayed at the ethical platitudes that pass for discourse. At times, he let loose. As his editor, I would ask him if he really wanted to go there. Sometimes, he did. But if there was a way to carry the conversation forward, he would try to find it. I guess you could say Neil believed in reconciliation, but never at the expense of ethics. His friends and family were the tonic against an enterprise discourse that frequently disappointed him. Then he would regroup, and push into vital new topics – such as the AI potential of causal inference.

Neil and I struggled to find the right communications medium. He preferred jugular email correspondence; I valued our phone conversations, where I picked up on a warmth that is hard to achieve in email. In our final months of knowing each other, we stumbled upon an unexpected way forward: LinkedIn messages. Here, we discussed many things beyond the enterprise – including exchanges about mortality and human frailty I won’t forget. It’s fair to say that in his last months, Neil was reflective about life, questioning things he hadn’t before, visceral about his love of family. The enterprise, I suppose, faded for him a bit. But as this piece demonstrates, his diginomica archives have plenty to offer on that front.

Neil would not want us to lose track of these discussions – and we won’t. For my part, I resolved to go much deeper into AI than I have with any other enterprise technology. Neil was something of an unwitting mentor; his moral conviction and technical depth provide a welcome KPI for my own work.

Neil’s voice on diginomica can never be replaced, but we’re not done here. We’re pressing on with AI analysis, practical use cases, and outspoken positions. We can’t replace Neil, but we can honor him. Coming up short when tech needs honest voices would not be acceptable to Neil. Now, that’s on us.
