5 Dangerous Myths About AI Ethics You Shouldn’t Believe

Most people still misunderstand key aspects of AI ethics, from the myth of neutrality to the real source of accountability.


AI can empower just about any business to innovate and drive efficiency, but it also has the potential to cause real harm. That means anyone putting it to use needs to understand the ethical frameworks designed to keep people safe.

At the end of the day, AI is a tool. AI ethics can be thought of as the safety warning you get in big letters at the front of any user manual, setting out some firm dos and don’ts about using it.

Using AI almost always involves making ethical choices. In a business setting, understanding the many ways it can affect people and culture means we have the best information for making those choices.

It’s a subject still surrounded by a lot of confusion, not least over who is responsible and who should be making sure it gets done. So here are five common misconceptions I come across involving the ethics of generative AI and machine learning.

1. AI Is Not Neutral

It’s easy to think of machines as entirely calculating, impartial and emotionless in their decision-making. Unfortunately, that’s very likely not the case. Machine learning is always a product of the data it’s trained on, and in many cases, that means data created by humans. It’s quite possible it will contain many human prejudices, conjectures and uneducated opinions, and this is where the problem of AI bias springs from. Understanding how bias passes from humans to machines is key to building tools and algorithms that will reduce the chance of causing harm or worsening societal inequalities.
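To make this concrete, here is a minimal sketch in Python, using entirely synthetic data and hypothetical names (skill, group, hired), of how a prejudice encoded in historical labels flows straight into a trained model. It is an illustration of the mechanism, not anyone's production code:

```python
# A minimal sketch (synthetic data, hypothetical feature names) of how
# bias in historical labels is reproduced by a model trained on them.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# One legitimate feature plus a sensitive group attribute (0 or 1).
skill = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# Historical hiring decisions: skill matters, but group 1 was penalized.
# That penalty is the human prejudice "baked into" the training data.
hired = (skill - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0

# Train on skill and group; nothing in the code says "discriminate".
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Compare predicted selection rates per group (a demographic parity check).
preds = model.predict(X)
for g in (0, 1):
    rate = preds[group == g].mean()
    print(f"group {g}: predicted hire rate = {rate:.2f}")
# The model faithfully reproduces the historical penalty against group 1.
```

The point of the sketch is that the algorithm itself is "neutral" in the narrow sense; the prejudice arrives entirely through the training data, which is why auditing data matters as much as auditing code.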

2. AI Ethics Will Be Driven By Geopolitics

America has long been the global leader in AI research, development and commercialization, but China is catching up fast. Today, China’s universities are turning out more AI graduates and PhDs than their U.S. counterparts, and AI tools developed by Chinese businesses are closing the performance gap with U.S. competitors. The risk (arguably a certainty) is that players in this high-stakes political game start to weigh where ethics might be sacrificed for advantage.

For example, openness and transparency are ethical goals for AI, as transparent AI helps us understand its decisions and make sure its actions are safe. However, the need to keep secrets that confer a competitive advantage could affect decisions and attitudes about exactly how transparent AI should be. China is known to have relied heavily on the open-source work of U.S. companies to build its own AI models and algorithms. Should the U.S. decide to act here to try to preserve its lead, it could have implications for how open and transparent AI development will be in the coming years.

3. AI Ethics Are Everyone’s Responsibility

When it comes to AI, it’s important not to assume there’s a centralized authority that will spot when things aren’t being done properly and ride to the rescue. Legislators will inevitably struggle to keep up with the pace of development, and most companies are lagging when it comes to establishing their own rules, regulations and best practices.

It’s hard to predict all the ways AI will change society, and inevitably some of them will cause harm, so it’s important that everyone understands the shared responsibility to stay vigilant. That means encouraging open conversation about its impact and making sure transparency and ethical whistleblowing are supported. Because AI will affect everyone, everyone should feel they have a voice in the debate around standards and what is and isn’t ethically acceptable.

4. Ethics Must Be Built Into AI, Not Bolted On

Ethical AI is not a “nice-to-have,” and it isn’t an item to be checked off a list just before a project goes live. By that point, flaws such as biased data, privacy risks or skipped safety assessments are already baked in. Our approach to ethical AI must be proactive rather than reactive, which means assessing every step for the potential to cause harm or ethical breaches at the planning stage. Safeguards should be built into strategic planning and project management to minimize the chances that data bias, lack of transparency or privacy breaches lead to ethical failures; one practical way to do that is sketched below.
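As an illustration only, here is a minimal sketch of what “building ethics in” can look like in day-to-day engineering: an automated pre-release fairness gate that blocks deployment when a bias metric exceeds a threshold. The metric, the 0.05 threshold and the toy data are hypothetical placeholders, not a prescription from this article:

```python
# A minimal, illustrative pre-release gate: fail the release if the
# selection-rate gap between groups exceeds a chosen threshold.
# The threshold and example data below are hypothetical stand-ins.
import sys

MAX_PARITY_GAP = 0.05  # project-specific policy, agreed at planning time

def demographic_parity_gap(preds, groups):
    """Largest difference in positive-prediction rate across groups."""
    rates = {}
    for p, g in zip(preds, groups):
        rates.setdefault(g, []).append(p)
    means = [sum(v) / len(v) for v in rates.values()]
    return max(means) - min(means)

def check(preds, groups):
    gap = demographic_parity_gap(preds, groups)
    print(f"demographic parity gap: {gap:.3f} (limit {MAX_PARITY_GAP})")
    if gap > MAX_PARITY_GAP:
        sys.exit("FAIL: fairness gate exceeded; blocking release")

if __name__ == "__main__":
    # Toy example: model predictions (1 = approved) and group labels.
    preds  = [1, 1, 0, 1, 0, 0, 1, 0]
    groups = [0, 0, 0, 0, 1, 1, 1, 1]
    check(preds, groups)  # gap of 0.50 here, so this run fails the gate
```

Running a check like this in a deployment pipeline makes the ethical standard an enforced requirement rather than an afterthought, which is exactly the difference between “built in” and “bolted on.”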

5. Trust Is Paramount

OK, so the final and very important thing to remember is that we don’t practice ethical AI just because it gives us a warm, fuzzy feeling inside. We do it because it’s absolutely critical to using AI to its full potential.

This comes down to one word: trust. If people see AI making biased decisions, or being used without accountability, they simply won’t trust it. And without trust, people are unlikely to share the data AI relies on or to adopt it in practice.

Overall, the level of trust that society places in AI is what will ultimately determine whether it achieves its potential to help us solve big, difficult problems like climate change and inequality. Ethical AI is about building that trust and making sure we don’t scupper its hugely positive potential before we can put it to work.


