Implications
The implications of Sharma’s resignation extend beyond internal corporate governance and reveal the structural limits of relying on private firms to manage risks with geopolitical consequences. While AI safety requires regulatory oversight, precision, and restraint, companies and some states, such as the US, are racing to develop powerful models as quickly as possible without adequate safety baselines. The “safety-first” branding once attached to innovation is giving way to a race for revenue and market leadership with little ethical restraint, driven by American and Chinese innovation hubs and deepening existing tensions. If senior safety figures increasingly feel misaligned with leading organisations, companies will struggle to retain the expertise needed for long-term risk mitigation and will lose influence over critical domains such as defence, biosecurity, and international crisis management.
For Transatlantic Governance and Global Stability
The fragmentation between the AI safety and cooperation agendas was clearly visible in the related discussions at the 80th session of the UNGA. The US approach prioritises innovation and shows little appetite for binding global standards. Although the EU has long acted as the gatekeeper of security and safety in digitalisation and AI, in December 2025 it moved to loosen restrictions under the EU Cybersecurity Act in order to attract investment from tech companies and support the startup ecosystem.
These policy and ideological retreats on safety reflect the broader problem of reconciling ethical safeguards with political and economic interests. Beyond the ethical concerns, current setbacks and the pressure from governments demanding wider use of AI in defence may raise the risk of cyber and hybrid attacks, as grey areas expand and transparency around safety measures declines. AI safety therefore matters not only for preventing misuse but for ensuring stability.
Despite these conflicts, recent developments at the United Nations underscore this urgency. As highlighted during the 80th session of the UNGA in discussions on strengthening international AI governance mechanisms, member states increasingly recognise that voluntary corporate commitments alone are insufficient to address systemic risk. Multilateral dialogue is gradually shifting from principle-setting toward institutional coordination and oversight.
The establishment of IISP-AI earlier this month, with strong support despite US opposition, signals an emerging recognition that AI safety requires structured international institutional frameworks capable of bridging technical expertise, policy coordination, and security considerations. While it remains to be seen whether such an initiative can translate into a durable governance architecture, its emergence with broad backing reflects growing awareness of AI safety’s strategic importance at the global level.
From this perspective, IISP-AI represents an early institutional response to the erosion of purely corporate safety governance. It signals recognition that AI safety must be embedded in structured international mechanisms rather than left to internal ethics teams operating under commercial and political pressure, as Sharma faced at Anthropic.