Australia Unveils ‘AI Assurance Framework’ for Government Use

Australia has released a new framework for the “Assurance of artificial intelligence in government.” The framework aims to build public confidence and trust in the safe and responsible use of AI by Australia’s state and territory governments. It calls for AI systems in government to be tested and verified, including through pilot studies, for adequate feedback and response mechanisms, and for the use of AI systems to be justified with clear, documented explanations. The framework is based on Australia’s AI Ethics Principles (2019), which were designed to ensure that AI is safe, secure, and reliable.

“We recognise that public confidence and trust is essential to governments embracing the opportunities and realising the full potential of AI… This requires a lawful, ethical approach that places the rights, well-being, and interests of people first,” the Australian Government states in the framework.

Assurance in AI refers to the process of determining whether an AI system meets pre-established thresholds that indicate it is fit for its intended use case. Assurance enables governments to understand the benefits and risks of AI, apply mitigations, ensure lawful use, and understand how a system operates. It also allows them to demonstrate, through evidence, that their use of AI is safe and responsible, the Australian Government says in the framework.

To encourage best governance practices, the framework identifies five “cornerstones of assurance”: mechanisms to be considered to ensure the effective application of AI systems. Building on these cornerstones, the framework adapts each of the eight AI Ethics Principles for practical application to the use of AI in government.

Implementing Australia’s AI Ethics Principles in Government

1. Consider human, societal, and environmental wellbeing

  • Document the effects of AI: The intentions and outcomes of an AI system must be documented to measure its impact on people, society, and the environment. Alongside the system’s risks, agencies must consider whether the AI delivers a clear public benefit and whether non-AI alternatives could achieve the same outcome.
  • Consult with stakeholders: Subject-matter and legal experts, as well as impacted groups and their representatives, must be consulted to allow for the early identification and mitigation of risks.
  • Assess impact: One must assess whether the AI’s benefits outweigh the risks for people, communities, and societal and environmental well-being. Methods such as algorithmic and stakeholder impact assessments can be used for this.

2. AI systems should respect human rights, diversity, and the autonomy of individuals.

  • Comply with rights protections: AI in government must comply with legal protections for human rights. It must also align with related obligations for the public sector, workplace health and safety, human rights, and diversity and inclusion.
  • Incorporate diverse perspectives: The Government must involve people with different lived experiences throughout the AI lifecycle to gather informed perspectives, remove preconceptions, and avoid overlooking important considerations.

3. AI systems should be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities, or groups.

  • Comply with anti-discrimination obligations: Ensure compliance with anti-discrimination law and train staff to identify, report, and resolve biased AI outputs.
  • Ensure quality of data and design: Conduct audits of AI inputs and outputs to detect unfair biases (see the sketch after this list), require data quality statements, and apply other data governance and management practices.
  • Privacy protection and security: AI systems should respect and uphold individuals’ privacy rights and ensure data protection.
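
The kind of output audit described in the list above can be partly automated. The following is a minimal Python sketch of one common heuristic, the four-fifths rule, which flags groups whose favourable-outcome rate falls well below the best-performing group’s. The records, group labels, and threshold are illustrative assumptions, not part of the framework.

```python
from collections import defaultdict

# Hypothetical audit records: each entry is (group, outcome),
# where outcome is True if the AI system produced a favourable result.
records = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

def selection_rates(records):
    """Compute the favourable-outcome rate for each group."""
    totals, favourable = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        favourable[group] += int(outcome)
    return {g: favourable[g] / totals[g] for g in totals}

def disparate_impact_flags(rates, threshold=0.8):
    """Flag groups whose rate falls below the chosen fraction
    (four-fifths rule) of the highest group's rate."""
    best = max(rates.values())
    return {g: r / best < threshold for g, r in rates.items()}

rates = selection_rates(records)
print(rates)                          # per-group favourable-outcome rates
print(disparate_impact_flags(rates))  # groups needing review are flagged True
```

In practice such checks would run over real decision logs and feed into the reporting and resolution channels the framework calls for.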

4. Comply with privacy obligations

  • Minimise and protect personal information: Governments must consider whether personal information is necessary, reasonable, and proportionate for each AI use case and whether similar outcomes can be achieved with privacy-enhancing technologies. Techniques such as synthetic data generation, data anonymization and de-identification, encryption, and secure aggregation can be used to reduce privacy risks; a de-identification sketch follows this list. People must be informed when their personal information is being collected for an AI system or its training.
  • Secure systems and data: AI systems and their supply chains must comply with security and data protection legislation, consistent with the cyber security strategies and policies of affected jurisdictions. Access to systems, applications, and data repositories should be limited to authorized staff, as required by their duties.
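
To make the de-identification point concrete, here is a minimal Python sketch that drops direct identifiers, pseudonymises a client ID with a salted hash, and coarsens a date of birth to year only. All field names are hypothetical, and salted hashing is pseudonymisation rather than full anonymisation; the framework does not prescribe a specific technique.

```python
import hashlib

# One hypothetical service record; field names are illustrative only.
record = {
    "name": "Jane Citizen",
    "email": "jane@example.com",
    "client_id": "C-102938",
    "date_of_birth": "1984-07-21",
    "postcode": "2600",
    "service_outcome": "approved",
}

DIRECT_IDENTIFIERS = {"name", "email"}
SALT = b"replace-with-a-secret-salt"  # in practice, manage via a secrets store

def deidentify(record):
    """Drop direct identifiers, pseudonymise the client ID with a salted
    hash, and keep only the birth year to reduce re-identification risk."""
    out = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    out["client_id"] = hashlib.sha256(
        SALT + record["client_id"].encode()
    ).hexdigest()[:16]
    out["date_of_birth"] = record["date_of_birth"][:4]  # year only
    return out

print(deidentify(record))
```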

5. Reliability and safety

  • Use appropriate datasets: AI systems must be trained and validated on accurate, representative, authenticated, and reliable datasets that are suitable for the specific use case.
  • Conduct pilot studies: Small-scale pilot environments must be created to identify and mitigate problems. However, one must also be cognizant of the trade-offs between governance and effectiveness while conducting such studies: a highly controlled environment may not accurately reflect the full risk and opportunity landscape, while a less controlled environment may pose governance challenges.
  • Test and verify: Red teaming, conformity assessments, reinforcement learning from human feedback, metrics and performance testing, and other methods can be used to test the AI system; a performance-gate sketch follows this list.
  • Monitor and evaluate the AI systems: This should encompass an AI system’s performance, its use by people, and its impacts on people, society, and the environment, including feedback from those impacted by AI-influenced outcomes.
  • Be prepared to disengage: When an unresolvable problem such as a data breach, unauthorized access, or system compromise is identified, agencies must be prepared to disengage the system. Such scenarios must also be addressed in business continuity, data breach, and security response plans.
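
As one illustration of the “test and verify” and “monitor and evaluate” steps, the sketch below computes simple performance metrics on a held-out evaluation set and blocks deployment if they fall under minimum thresholds. The metrics, data, and thresholds are assumptions for illustration; the framework does not prescribe specific values.

```python
# Hypothetical evaluation: predicted vs. expected labels from a held-out set.
predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
ground_truth = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

def accuracy(preds, truth):
    """Share of all predictions that match the expected label."""
    return sum(p == t for p, t in zip(preds, truth)) / len(truth)

def recall(preds, truth, positive=1):
    """Share of actual positives the system correctly identified."""
    positives = [(p, t) for p, t in zip(preds, truth) if t == positive]
    return sum(p == t for p, t in positives) / len(positives)

# Illustrative release gate: block deployment if any metric is too low.
THRESHOLDS = {"accuracy": 0.75, "recall": 0.80}
results = {
    "accuracy": accuracy(predictions, ground_truth),
    "recall": recall(predictions, ground_truth),
}
failures = {m: v for m, v in results.items() if v < THRESHOLDS[m]}
print(results)
if failures:
    raise SystemExit(f"Deployment blocked, metrics below threshold: {failures}")
```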

6. Transparency and explainability

  • Disclose the use of AI: Governments should maintain a register of when they use AI, along with each system’s purpose, intended uses, and limitations; a minimal register sketch follows this list.
  • Maintain reliable data and information assets: This will enable internal and external scrutiny, continuity of knowledge, and accountability.
  • Provide clear explanations: Governments should provide clear, simple explanations of how an AI system reaches an outcome, including the inputs and variables used, how these influence results, the system’s reliability, and the results of testing. They should also weigh the benefits of AI use against its explainability limitations, and document the reasons when a decision is made to proceed. When an AI system influences or forms part of administrative decision-making, its decisions should be explainable.
  • Support and enable frontline staff: Staff must be available and able to clearly explain AI-influenced outcomes to the people affected. Governments must give particular consideration to vulnerable people and groups, people facing complex needs, and those uncomfortable with the government’s use of AI.
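
To make the disclosure idea concrete, here is a minimal sketch of what one entry in such an AI-use register might look like. The fields follow the items the framework says should be recorded (purpose, intended uses, limitations); the schema itself and the extra accountable_area field are assumptions, not a mandated format.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class AIRegisterEntry:
    """One entry in a hypothetical public register of government AI use."""
    system_name: str
    purpose: str
    intended_uses: list = field(default_factory=list)
    limitations: list = field(default_factory=list)
    accountable_area: str = ""  # illustrative extra field

entry = AIRegisterEntry(
    system_name="Correspondence triage assistant",
    purpose="Route incoming public enquiries to the right team",
    intended_uses=["Suggest a destination team for each enquiry"],
    limitations=[
        "Not used for decisions affecting entitlements",
        "Suggestions are reviewed by staff before routing",
    ],
    accountable_area="Service Delivery Branch",
)
print(json.dumps(asdict(entry), indent=2))  # publishable, machine-readable form
```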

7. Contestability

When an AI system significantly impacts a person, community, group, or environment, there should be a timely process to allow people to challenge the use or outcomes of the AI system.

  • Understand legal obligations: The use of AI in administrative decision-making must comply with the laws, policies, and guidelines that regulate such processes, including principles of legality, fairness, rationality, and transparency, and access to reviews, dispute resolution, and investigations. Where necessary, governments should seek legal advice on their obligations and proposed uses of AI.
  • Communicate rights and protections clearly: Governments must create avenues for all citizens to voice concerns and objections and to seek recourse and redress. This includes communicating the channels and processes available to challenge the use or outcomes of an AI system. Feedback and response mechanisms should be clear and transparent, ensure timely human review, and exist across the use case’s entire lifecycle.

8. Accountability

  • Establish clear roles and responsibilities: Governments should consider the role of senior leadership and area-specific responsibilities, security, data governance, privacy and other obligations, and integration with existing governance and risk management frameworks.
  • Train staff and embed capability: Governments should establish policies, procedures, and training to ensure all staff understand their duties and responsibilities, understand system limitations, and implement AI assurance practices.
  • Embed a positive risk culture: This fosters open discussion of uncertainties and opportunities, encourages staff to express their concerns, and maintains processes to escalate to the appropriate accountable parties.
  • Avoid overreliance: Governments should consider how heavily they rely on AI and the risk and accountability challenges this creates. Overreliance can lead to the acceptance of incorrect or biased outputs and can put business continuity at risk. Incorrect outputs must be flagged and addressed.

The Five Cornerstones of Assurance

The principles above for the use of AI in government rest on the following five mechanisms of assurance:

Governance

The Australian Government suggests that the implementation of AI should be driven by business or policy areas and supported by technologists. It also calls for adapting and updating existing decision-making and accountability structures to invite diverse perspectives, designate lines of responsibility, and give agency leaders an opportunity to understand the responsible use of AI. Governance structures should encourage innovation while maintaining ethical standards and protecting public interests. Agency leaders must also commit to the safe and responsible use of AI by developing a “positive AI risk culture,” which calls for making “proactive AI risk management an intrinsic part of everyday work.” Agencies must also provide the information and training staff need to use AI ethically and lawfully and to identify, report, and mitigate risks. Staff can also use this training to support the community through changes to public service delivery.

Data governance

Data governance concerns creating, collecting, managing, using, and maintaining datasets that are authenticated, reliable, accurate, and compliant with relevant legislation. Reliable data governance ensures that responsible parties understand their legislative and administrative obligations. It also allows governments to minimize the risks around the data they hold while extracting maximum value from it; a simple data-quality check is sketched below.
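
As a rough illustration of what routine data-governance checks might look like, the sketch below tests rows of a hypothetical dataset for completeness, validity, and currency. The fields, allowed values, and staleness cut-off are all assumptions made for the example.

```python
# Hypothetical dataset rows and a simple quality checklist.
rows = [
    {"id": 1, "state": "NSW", "updated": "2024-05-01", "value": 42.0},
    {"id": 2, "state": "VIC", "updated": "2021-01-15", "value": None},
]
ALLOWED_STATES = {"NSW", "VIC", "QLD", "SA", "WA", "TAS", "NT", "ACT"}

def quality_issues(row, stale_before="2023-01-01"):
    """Return a list of data-quality problems found in one row."""
    issues = []
    if row["value"] is None:
        issues.append("missing value")                  # completeness
    if row["state"] not in ALLOWED_STATES:
        issues.append(f"unknown state {row['state']}")  # validity
    if row["updated"] < stale_before:                   # ISO dates sort lexically
        issues.append("record is stale")                # currency
    return issues

for row in rows:
    print(row["id"], quality_issues(row))
```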

A risk-based approach

Risks in AI systems should be assessed and managed on a case-by-case basis, balancing strong safeguards for high-risk models against minimal administrative burden for low-risk ones. Governments should exercise discretion and employ safety measures, such as traceability for datasets, processes, and decisions, in proportion to the potential for harm (see the sketch below). In high-risk settings, governments should also consider oversight mechanisms such as external or internal review bodies, advisory bodies, or AI risk committees to provide consistent, expert advice and recommendations. AI models must also be reviewed throughout their entire lifecycle: from development to operation, at transitions between lifecycle phases, and during significant changes. Developers must be able to address emerging risks, unintended consequences, and performance issues through feedback loops. Foresight can also be exercised by planning for the risks presented by legacy AI systems.
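
A minimal sketch of what such risk tiering could look like in practice: each use case is scored for potential harm and mapped to proportionate oversight. The factors, scoring, tiers, and oversight measures are illustrative assumptions; the framework leaves calibration to each jurisdiction.

```python
# Illustrative harm factors for a proposed AI use case (0 = none, 2 = high).
use_case = {
    "affects_individual_rights": 2,
    "decision_is_automated": 1,
    "data_sensitivity": 2,
}

# Hypothetical mapping from risk tier to proportionate oversight measures.
OVERSIGHT_BY_TIER = {
    "low": ["standard monitoring"],
    "medium": ["dataset traceability", "periodic internal review"],
    "high": ["dataset and decision traceability", "AI risk committee review",
             "external review body"],
}

def risk_tier(factors):
    """Map a total harm score onto a coarse low/medium/high tier."""
    score = sum(factors.values())
    if score <= 1:
        return "low"
    return "medium" if score <= 3 else "high"

tier = risk_tier(use_case)
print(tier, "->", OVERSIGHT_BY_TIER[tier])  # here: high -> committee + external review
```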

Procurement

Governments must procure AI systems or products that follow the AI Ethics Principles, clearly establish accountabilities, provide access to relevant information assets and data, and include proof of performance testing throughout the AI system’s life cycle. Government agencies must also ensure that their contracts are adaptable to rapid changes in technology and to amplified risks such as those to privacy and security, and must conduct due diligence to manage these new risks. To ensure a sufficient understanding of a system’s operation and outputs, agencies should consider internal skills development and knowledge transfer between AI system vendors and their staff. An ideal vendor must be able to support the review, monitoring, or evaluation of a system’s outputs in the event of an incident, including by providing evidence and support for review mechanisms.

Standards

Where practical, governments should align their approaches with relevant AI standards. Standards outline specifications, procedures, and guidelines to enable the safe, responsible, and effective implementation of AI in a consistent and interoperable manner.


Author: Rayne Chancer