All things being equal: AI ethics in networking

Paul Stuttard, director, Duxbury Networking.

Many technology innovators, developers and business leaders say that ethical principles focused on the public good are often overlooked in artificial intelligence (AI) systems.

AI ethics, says John Smith, co-founder of LiveAction − a US-based company devoted to providing “unlimited monitoring and unlimited control and complete visibility to every network” − is a set of moral guidelines and practices meant to encourage the advancement and responsible application of AI technology.

Smith maintains that when produced and implemented with ethical guidelines, “AI has the amazing potential across the network monitoring space to save organisations significant time and resources when it comes to collecting, analysing, designing and securing networks”.

On the other hand, as AI increasingly matches human capabilities, there are concerns that AI technologies could outpace organisations’ ability to control them within an ethical framework. And, as AI becomes more integral to corporate networking, could ethical considerations around data privacy, bias and transparency fade in importance?

In light of this, one of the most difficult tasks facing network managers and administrators will be to identify the moral issues surrounding the security of user data and network information.

Strong data privacy safeguards must be put in place as part of ethical AI in networking to guarantee that sensitive data is handled securely. This covers adherence to data protection laws, encryption and access controls.
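
As an illustrative sketch only, and assuming Python’s cryptography package is available (the key handling and record fields below are invented for this example, not a prescription), flow records could be encrypted before storage so that only holders of the key can read them:

from cryptography.fernet import Fernet

# Hypothetical example: encrypt a network flow record before it is stored.
# In practice the key would come from a managed secret store, not be generated inline.
key = Fernet.generate_key()
cipher = Fernet(key)

flow_record = b'{"src": "10.0.0.5", "dst": "10.0.0.9", "bytes": 48211}'
encrypted = cipher.encrypt(flow_record)   # ciphertext safe to persist
restored = cipher.decrypt(encrypted)      # readable only with the key

assert restored == flow_record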

In this context, networking AI algorithms ought to be transparent and comprehensible. Network administrators and end-users should be able to understand how AI-driven decisions are made, and accountability and trust will grow as a result of this transparency.
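
One way to make AI-driven networking decisions legible (a minimal, hypothetical sketch; the field names and the example policy are invented for illustration) is to record a plain-language rationale alongside every automated action:

from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class DecisionRecord:
    """An auditable record of one AI-driven networking decision."""
    timestamp: str
    subject: str     # e.g. a user, application or traffic class
    action: str      # what the system decided to do
    rationale: str   # explanation an administrator or end-user can review

def log_decision(subject: str, action: str, rationale: str) -> DecisionRecord:
    record = DecisionRecord(datetime.now(timezone.utc).isoformat(), subject, action, rationale)
    print(json.dumps(asdict(record)))  # in practice: ship to an audit log, not stdout
    return record

log_decision("video-conferencing", "raise bandwidth priority",
             "predicted congestion on uplink between 09:00 and 10:00")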

Importantly, AI algorithms can inherit biases from the data on which they are trained. This may result in some groups being excluded from network services or in particular users being treated inequitably. Ethical AI in networking therefore requires detecting and mitigating bias in AI systems to ensure just and equitable network management.

Capitol Technology University, which states it “supplies human capital to America’s most technologically-advanced organisations”, attests to this. A recently published editorial points out that cultural prejudices are often ingrained in the vast volumes of data on which AI systems depend.

As a result, these prejudices may be embedded in AI systems, which might then reinforce and magnify unjust or discriminatory results in vital domains like banking, human resources, criminal justice and resource distribution.

For example, AI systems in networking are frequently asked to make important judgements about resource allocation, such as prioritising bandwidth. Ensuring these choices are just and do not discriminate against any specific applications, users or groups is a matter of ethics.
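
A crude fairness audit, sketched below with invented group labels and an arbitrary 20% disparity threshold, might compare how often an allocation model grants priority to traffic from different user groups and flag large gaps for human review:

# Hypothetical fairness audit: compare priority-grant rates across user groups.
# The data, group labels and threshold are illustrative assumptions only.
decisions = [
    {"group": "head_office", "priority_granted": True},
    {"group": "head_office", "priority_granted": True},
    {"group": "branch",      "priority_granted": False},
    {"group": "branch",      "priority_granted": True},
]

def grant_rates(records):
    totals, grants = {}, {}
    for r in records:
        totals[r["group"]] = totals.get(r["group"], 0) + 1
        grants[r["group"]] = grants.get(r["group"], 0) + int(r["priority_granted"])
    return {g: grants[g] / totals[g] for g in totals}

rates = grant_rates(decisions)
if max(rates.values()) - min(rates.values()) > 0.20:
    print("Possible inequitable allocation, flag for human review:", rates)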

Does the use of ethical AI reduce the need for human intervention or, at the very least, human supervision in networking?

Even though AI is capable of automating any number of networking operations, many people still think human monitoring and involvement are essential, particularly when moral dilemmas emerge.

Providing users with control over their network preferences and clear options is a key component of ethical networking practices. Users ought to be in charge of how their data is used, and they should give their informed consent before their data is managed by an AI-driven network.
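
In practice, that could take the form of a consent gate in front of any AI-driven processing of user data; the sketch below is purely illustrative, with a made-up consent registry and purpose labels:

# Illustrative consent gate: user data only reaches the AI pipeline with recorded consent.
consent_registry = {
    "alice": {"traffic_analysis": True,  "behavioural_profiling": False},
    "bob":   {"traffic_analysis": False, "behavioural_profiling": False},
}

def may_process(user: str, purpose: str) -> bool:
    """Return True only if the user has explicitly consented to this purpose."""
    return consent_registry.get(user, {}).get(purpose, False)

for user in ("alice", "bob"):
    if may_process(user, "traffic_analysis"):
        print(f"Including {user}'s data in AI-driven traffic analysis")
    else:
        print(f"Excluding {user}'s data: no consent recorded")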

As Smith says, setting guardrails and standards will be key. “Unchecked AI is universally considered a recipe for disaster.”

Ethical AI in networking requires continuous auditing and observation of AI systems to make sure they function as intended and do not progressively stray from ethical norms over time. It is thus crucial to establish who in networking is responsible for AI-driven choices and actions.
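
Such an audit could be as simple as comparing the model’s recent behaviour with an approved baseline and alerting when the gap grows; the sketch below uses an invented metric (the share of traffic granted priority) and an arbitrary tolerance:

# Illustrative drift check: compare recent decision behaviour with an approved baseline.
BASELINE_PRIORITY_RATE = 0.30   # rate signed off when the model was approved (assumed)
TOLERANCE = 0.10                # arbitrary tolerance for this sketch

def audit(recent_decisions: list[bool]) -> None:
    rate = sum(recent_decisions) / len(recent_decisions)
    if abs(rate - BASELINE_PRIORITY_RATE) > TOLERANCE:
        print(f"Drift detected: priority rate {rate:.2f} vs baseline {BASELINE_PRIORITY_RATE:.2f}")
    else:
        print(f"Within tolerance: priority rate {rate:.2f}")

audit([True, False, False, True, True, False, True, True])  # e.g. the last N decisions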

From an ethical perspective, concerns about AI and job displacement are valid. However, there are several arguments to suggest AI has the potential to create far more jobs than it destroys.

Many AI advocates believe this could be achieved through a number of proactive measures, including retraining programmes. Already, the rise of AI has driven upskilling of the workforce, creating demand for data scientists, AI specialists and machine learning engineers.

There is no denying the importance of training. To encourage responsible AI development and application, network administrators and AI developers should strive to obtain the necessary knowledge and expert training in ethical AI practices.

Ethical AI in networking is an evolving field, and as AI technologies are increasingly incorporated into network infrastructures, ethical considerations will become ever more crucial to ensuring these technologies are applied equitably.

Up until now, developing and promoting ethical AI practices in networking has been the responsibility of researchers, practitioners, legislators and business leaders.

However, if the European Union’s planned AI Act is approved, the landscape may change. The Act comprises a set of laws and regulations aimed at making AI more reliable by ensuring AI systems uphold ethics, safety and fundamental rights.

It is the first comprehensive legal framework of its kind. Will its principles be globally adopted – and enforced − any time soon?
