Google’s John Mueller said that llms.txt files currently appear to function only as random text files and are not used by mainstream search engines or AI systems.

A discussion on Reddit recently asked whether llms.txt files actually help websites appear in AI systems or LLM-powered search tools.


The idea behind llms.txt is somewhat similar to robots.txt. The file is supposed to give AI models or AI agents guidance about how a website should be accessed or understood.
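For readers unfamiliar with the proposal, llms.txt is a markdown file served from a site's root. The sketch below is purely illustrative (the site name and URLs are invented) and roughly follows the proposed structure: a title, a blockquote summary, and sections of annotated links meant to point AI systems at the most useful pages.

```markdown
# Example Site

> A short summary of what this site covers, intended for AI systems.

## Docs

- [Getting started](https://example.com/docs/start.md): Setup guide
- [API reference](https://example.com/docs/api.md): Endpoint details
```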

But according to Google’s John Mueller, there is currently no evidence that mainstream systems actually use it.

Responding in the discussion, Mueller wrote that “nobody has shown that they are used as anything other than a random text file for mainstream consumer search engines or AI systems.” He also added a short piece of advice: “Save your energy.”

The comment suggests that implementing llms.txt likely has no measurable impact on visibility in search engines or AI platforms today.

Part of the reason may simply be technical. Traditional search engines like Google rely on crawlers that decide what to fetch before downloading pages. That is why files like robots.txt exist.

Large language models work very differently. They typically rely on already collected datasets, APIs, search engines, or they fetch individual pages only when needed. Because of that, a file like llms.txt doesn’t actually control access to the content in the same way robots.txt does.
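The contrast can be made concrete with Python's standard-library robots.txt parser. This is an illustrative sketch, not any engine's actual code, and the bot name and URLs are invented: a traditional crawler checks robots.txt before downloading a page, so the file acts as a real gate, whereas a fetcher that grabs a single page on demand never performs such a check for llms.txt.

```python
# Sketch: how a traditional crawler uses robots.txt as a pre-download gate.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# The crawler asks for permission *before* fetching each URL:
print(parser.can_fetch("ExampleBot", "https://example.com/private/page"))  # False
print(parser.can_fetch("ExampleBot", "https://example.com/public/page"))   # True

# An LLM-style fetcher that retrieves one page on demand has no
# equivalent gate: an llms.txt file it never requests cannot block it.
```

There is no comparable enforcement step for llms.txt, which is why it behaves as a voluntary signal rather than an access control.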

In other words, it acts more like a voluntary signal than a real technical control.

Mueller even compared the idea to the old keywords meta tag: essentially a claim a site owner makes about their own content, which systems can easily check for themselves by reading the page directly.

For now, that likely explains the short answer Mueller gave: save your energy.