6 reasons Google slacking on AI safety reports is a big problem

AI is no stranger to heated discussion, and even as LLMs grow more capable and powerful, the stream of new controversies shows no sign of slowing down. We’ve come to expect new AI features to spark controversy at every turn (see the GPT-4o-powered Ghibli-style images for a recent example), but it’s clear that AI companies are prioritizing the AI arms race over ethics and safety more than ever in 2025.

This has been the case for years, but developers have brushed off the concerns as irrational fearmongering. Google’s decision to put safety reports on the back burner, however, is the clearest example of reckless AI development we’ve seen yet. The company is prioritizing new Gemini features over safety reporting, a choice that could have catastrophic implications down the line. Here’s why this news is bigger than you might think.


Google won’t prioritize safety unless it’s forced to

No oversight means unsafe products


The most obvious takeaway from this report is that Google doesn’t prioritize safety while developing its AI products. We’ve known this since the earliest days of Google’s AI development, when the initial release of AI Overviews in Google Search produced dangerous and misleading answers. Google’s swift retraction of the feature and its eventual re-release suggested a commitment to at least a basic level of safety, but there’s no guarantee that similar incidents won’t keep recurring.

Google’s disregard for its commitment to safety research reports is much more insidious than releasing a broken product. While catastrophic AI failures regularly make headlines, it’s the decision-making that happens behind the scenes that can cause real damage. Biased decision-making may go unnoticed, but it has real-world implications. Without enforcement of regulations covering safe AI development, Google will keep pushing its safety research reports further and further back. The company is developing its AI in bad faith, and there’s no way to trust it until we see timely AI safety reports.

Read the full document describing the voluntary commitments to develop safe and secure AI to see exactly what Google and other companies are abandoning.

Google handles massive amounts of personal data through Gemini

But what is it doing with it?

Gemini handles a colossal amount of personal data every day. You can stop Google from using your data to train Gemini, but that protection is turned off by default. Google regularly promises that it treats this data with respect, yet the lack of transparency caused by its late safety report is cause for concern.

One of the eight points the AI safety reports must cover is proof that companies are “Prioritizing research on societal risks posed by AI systems, including on avoiding harmful bias and discrimination, and protecting privacy.”

Without these safety reports, it’s impossible to tell exactly what Google is doing to avoid misusing this personal data. AI can draw incorrect or misleading conclusions when generating answers. If Google isn’t following the guidelines mentioned above, then we have no way of knowing if Gemini is using our personal data to create biased reports.


Google won’t pull the plug on unsafe AI models

We may see increasingly unsafe AI releases


The three principles of the voluntary AI commitments are safety, security, and trust. To honour its commitment to safe AI development, Google must “…make sure their products are safe before introducing them to the public.”

Without accompanying AI safety reports, it’s impossible to tell whether newly released AI models are safe to use. In theory, Google and other companies developing AI products have an incentive to release safe products to the public: if a product is deemed unsafe, usage will drop.

However, despite past Gemini controversies, usage hasn’t dropped. Search traffic to the Gemini website continues to grow, indicating that user enthusiasm hasn’t been dampened by Gemini’s issues with hallucinations or misleading information.

Google knows its customer base is more enthusiastic about new features than it is wary of safety risks, and that is a clear indicator of its priorities. It can release unsafe AI models without significant backlash, so why would it honour its voluntary commitments to safe development?

AI companies are growing bolder in their unsafe development patterns

There’s a growing trend towards unsafe development

Google isn’t the only company to disregard these voluntary commitments. OpenAI’s safety report for Deep Research arrived weeks late, and Meta’s Llama 4 report is so vague that it’s all but useless.

Meta’s Llama 4 report is particularly indicative of how unhelpful these reports can be. It is filled with phrases like “Our cyber evaluations investigated…”, “We conducted threat modeling exercises…”, and “mitigating safety and security risks inherent to the system,” but Meta never shares the results of those evaluations and exercises.

Technically, Meta is complying with the voluntary commitments, but there’s no way to verify it. Google’s failure to release its safety reports on time shows that it isn’t interested in maintaining even a pretence of compliance.

Without enforcement of these commitments, other companies will follow Google’s lead and fail to release safety reports of their own.

Governments are unwilling to enforce regulations

Voluntary commitments are pointless without enforcement


At an international AI summit in Paris in February 2025, the UK and the US refused to sign an agreement pledging open, inclusive, and ethical approaches to AI development. Both countries justified their refusal with concerns about the economic impact of regulating AI development.

This prioritization of growth over safety has led directly to lapses like Google’s delayed AI safety reports. The original guidelines, written by the Biden-Harris Administration, state that “Companies intend these voluntary commitments to remain in effect until regulations covering substantially the same issues come into force.”

As indicated by US Vice President JD Vance at the AI summit, it’s likely that these regulations will not come into force under the Trump-Vance administration. Both the US and the UK have indicated to companies developing AI models that they will not enforce these voluntary commitments.

The users are becoming beta testers

It’s a rocky road towards stable releases

With companies prioritizing new features over safe and stable development, we can expect new AI models to be more likely to launch with broken features. This has been a caveat since the earliest days of AI chatbots, though the rapid development of LLMs has ironed out many of those early issues.

However, as the improvements in each new AI release become less and less significant, companies are pouring more and more resources into development to gain an advantage over their rivals. Part of this effort involves releasing newer models early and then fixing issues later on.

We saw this happen with AI Overviews in Google Search: Google rushed the feature out the door, then re-released it after fixing the most glaring problems. This pattern should fade over time, but Google’s rush to release new models without a safety report suggests that early adopters of each new AI model will encounter significant problems and safety risks. While we expect Google to fix these problems eventually, we’ll need to be especially vigilant when trying out new AI models.

We need companies like Google to commit to safe and stable AI development

In the first quarter of 2025, we’ve seen Google and OpenAI take significant steps back from their commitments to safe AI development. These companies have given us impressive tools for organizing our notes and answering our questions, but a lack of commitment to basic standards could mean devastating misuse of our personal data. And if you want to keep using Google’s products, you may have to stomach these developments whether you like it or not.



