Google’s Gary Illyes and John Mueller, along with Bing’s Fabrice Canel (and probably other search engine representatives), spent time in Dublin this week at the IETF 121 Dublin meeting to submit ideas on how to improve the Robots Exclusion Protocol. These ideas covered improving crawl efficiency, adding new AI controls, and more.
Gary Illyes noted these efforts and even mentioned IETF (Internet Engineering Task Force) on a Search Off The Record episode earlier this year.
And when I saw John Mueller from Google post on LinkedIn that he was in Dublin this week and then saw Fabrice Canel comment that he was there too, it got me thinking.
Based on the post and comments, they both attended a last-minute, informal SEO meetup.
So what was presented at this IETF event? Well, the description says:
The IETF Hackathon and IETF Codesprint take place on the weekend. Events to help new participants get the most out of IETF meetings begin on Sunday afternoon. Participants should plan their travel accordingly. An introduction to IETF meetings provides an overview of how to prepare for and get the most out of sessions all week.
Digging deeper, you can see Fabrice Canel’s submission, named Robots Exclusion Protocol Extension to manage AI content use. The abstract reads, “This document extends RFC9309 by specifying additional rules for controlling usage of the content in the field of Artificial Intelligence (AI).” I captured a screenshot of this page in case it goes away or changes.
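To make that concrete, here is a rough sketch of what such rules could look like in a robots.txt file. The directive names below are my illustrative assumptions based on the abstract, not confirmed syntax from the draft:

```
# Hypothetical robots.txt sketch of AI-usage rules layered on RFC 9309.
# "DisallowAITraining" and "AllowAITraining" are assumed names for
# illustration only; see the draft itself for the actual rules.
User-Agent: *
Allow: /

DisallowAITraining: /private/
AllowAITraining: /blog/
```

The idea being that normal crawling for search stays governed by the existing Allow/Disallow rules, while AI training use gets its own, separate controls.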
Then it seems Gary Illyes submitted Robots Exclusion Protocol User Agent Purpose Extension, with the abstract, “The Robots Exclusion Protocol defined in [RFC9309] specifies the user-agent rule for targeting automatic clients either by prefix matching their self-defined product token or by a global rule * that matches all clients. This document extends [RFC9309] by defining a new rule for targeting automatic clients based on the clients’ purpose for accessing the service.” I captured a screenshot of this page in case it goes away or changes.
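In other words, instead of enumerating every product token (GPTBot, CCBot, and so on), a site could write one rule that covers every client crawling for a given purpose. A minimal sketch, where the rule name and purpose tokens are assumptions for illustration, not the draft’s confirmed vocabulary:

```
# Hypothetical purpose-based targeting in robots.txt.
# "User-Agent-Purpose" and the tokens "ai-training" / "search" are
# assumed for illustration; the draft defines the real rule and values.
User-Agent-Purpose: ai-training
Disallow: /

User-Agent-Purpose: search
Allow: /
```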
Gary also submitted Robots Exclusion Protocol Extension for URI Level Control, with the abstract, “This document extends RFC9309 by specifying additional URI level controls through application level header and HTML meta tags originally developed in 1996. Additionally it moves the response header out of the experimental header space (i.e. “X-”) and defines the combinability of multiple headers, which was previously not possible.” I captured a screenshot of this page in case it goes away or changes.
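The building blocks here are already familiar: the robots meta tag (the 1996-era mechanism the abstract references) and the experimental X-Robots-Tag response header. A rough before-and-after sketch, noting that the un-prefixed header name below is my assumption, not the draft’s confirmed choice:

```
# Today: URI-level control via the experimental "X-" prefixed header,
# or the equivalent HTML meta tag.
HTTP/1.1 200 OK
X-Robots-Tag: noindex

<meta name="robots" content="noindex, nofollow">

# Hypothetical: the header moved out of the "X-" space, with multiple
# headers explicitly combinable ("Robots-Tag" is an assumed name).
HTTP/1.1 200 OK
Robots-Tag: noindex
Robots-Tag: noarchive
```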
All of these were submitted as drafts in October 2024, and they potentially include clues on how Google and Bing may adapt crawling to improve efficiency and handle AI content. So click through to each and read up.
Here is a photo from the SEO meetup:
Update: John Mueller posted about this on LinkedIn and wrote:
Just a note that if you care about technical details, and care about the internet, engaging in standards groups can be interesting. The web did not fall from the sky (nor from a “cloud”); people who cared about details worked hard to make it work. These discussions are not finished. You can play a part too. It’s time-consuming, it takes energy, but imagine if a part of the “next web” includes ideas that you came up with? How cool. I’d link to that.
Forum discussion at LinkedIn.