🤖 Making Darkwood Agent-Ready
on February 12, 2026
Websites are no longer consumed only by humans.
They are parsed, summarized, classified, and reasoned about by AI agents.
Search engines were the first wave. LLM-powered agents are the second.
Recently, I audited darkwood.com and applied concrete improvements to make it more understandable and more explicit for AI systems.
This article documents what changed and why.
Why Optimize for AI Agents?
Traditional SEO focuses on ranking signals.
AI agents operate differently.
They:
- Parse structure
- Extract intent
- Follow links as decision paths
- Classify page types
- Evaluate confidence
If your website is ambiguous, an agent’s understanding becomes probabilistic and fragile.
The objective was simple:
Reduce ambiguity. Increase explicitness.
Initial Observation: Homepage Ambiguity
Darkwood historically combines multiple dimensions:
- A technical blog
- Personal projects
- A game/community space
For humans, this ecosystem makes sense.
For AI systems landing on the homepage, however, the dominant visual signals (login, play, chat, rank) can bias interpretation toward a gaming portal rather than a technical knowledge hub.
This is not “wrong”. But it is imprecise.
Precision matters when agents decide how to classify and reuse information.
Step 1 — Adding llms.txt
The first improvement was introducing a root-level llms.txt file.
This file acts as a lightweight guidance layer for AI systems.
It explicitly defines:
- What Darkwood is
- Where high-signal content lives
- What agents are allowed to do
- What they should avoid
- Suggested crawl behavior
Instead of forcing agents to infer structure through exploration, we provide a minimal contract.
Example excerpt:
## Primary entry points (recommended)
- Blog: https://blog.darkwood.com/
- Profile: https://hello.darkwood.com/
## What to avoid
- Do not attempt login/register
- Do not submit forms
- Avoid aggressive crawling
This small addition significantly reduces ambiguity and prevents unintended interaction with dynamic or authenticated areas.
It also makes the blog immediately discoverable as the primary high-value content surface.
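To make the contract concrete, here is a minimal agent-side sketch in Python that fetches llms.txt and groups its lines by section header. The parsing logic is a hypothetical illustration, not part of the Darkwood implementation; only the URL and the section names come from the excerpt above.

import urllib.request

LLMS_TXT_URL = "https://darkwood.com/llms.txt"  # root-level file, as described above

def fetch_llms_txt(url: str = LLMS_TXT_URL) -> dict[str, list[str]]:
    """Fetch llms.txt and group its lines under their '## ' section headers."""
    with urllib.request.urlopen(url) as response:
        text = response.read().decode("utf-8")
    sections: dict[str, list[str]] = {}
    current = "preamble"
    for line in text.splitlines():
        if line.startswith("## "):
            current = line[3:].strip()
            sections[current] = []
        elif line.strip():
            sections.setdefault(current, []).append(line.strip())
    return sections

An agent consuming this output could prioritize the URLs under "Primary entry points (recommended)" and honor the constraints under "What to avoid" before crawling anything else.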
Step 2 — Implementing Structured Data (JSON-LD)
The most important content type on Darkwood is the blog article.
To make this explicit, each article page now exposes a BlogPosting Schema.org JSON-LD block.
This includes:
- headline
- description
- datePublished
- dateModified
- inLanguage
- author
- publisher
- mainEntityOfPage (canonical URL)
Example:
{
  "@context": "https://schema.org",
  "@type": "BlogPosting",
  "headline": "Making Darkwood Agent-Ready",
  "datePublished": "2026-02-12T00:00:00+01:00",
  "inLanguage": "en",
  "author": {
    "@type": "Person",
    "name": "Mathieu Ledru",
    "url": "https://hello.darkwood.com/"
  },
  "publisher": {
    "@type": "Organization",
    "name": "Darkwood",
    "url": "https://darkwood.com/"
  }
}
Why this matters:
- Agents no longer need to guess page type
- Publication metadata becomes reliable
- Canonical alignment is explicit
- Language context is clear
Structured data increases the confidence of automated reasoning systems.
It reduces interpretative entropy.
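Structured data is also easy to verify programmatically. The following sketch, using only the Python standard library, extracts JSON-LD blocks from a rendered article page and prints the BlogPosting metadata. The article URL is a placeholder, and the script assumes the markup uses the standard <script type="application/ld+json"> embedding.

import json
import urllib.request
from html.parser import HTMLParser

class JsonLdExtractor(HTMLParser):
    """Collect the contents of every <script type="application/ld+json"> tag."""
    def __init__(self) -> None:
        super().__init__()
        self.blocks: list[str] = []
        self._in_jsonld = False

    def handle_starttag(self, tag, attrs):
        if tag == "script" and dict(attrs).get("type") == "application/ld+json":
            self._in_jsonld = True

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_jsonld = False

    def handle_data(self, data):
        if self._in_jsonld:
            self.blocks.append(data)

url = "https://blog.darkwood.com/an-article"  # placeholder article URL
with urllib.request.urlopen(url) as response:
    parser = JsonLdExtractor()
    parser.feed(response.read().decode("utf-8"))

for block in parser.blocks:
    data = json.loads(block)
    if data.get("@type") == "BlogPosting":
        print(data["headline"], data.get("datePublished"))

Running this against an article should surface exactly the fields listed above, with no guessing required on the agent's side.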
What Changed Technically
The implementation required:
- Adding a static llms.txt file at the domain root
- Ensuring correct text/plain delivery
- Injecting JSON-LD in article templates
- Aligning canonical URLs
- Standardizing ISO-8601 date formatting
No new dependencies were introduced. No UI changes were required.
The modifications are minimal, but structural.
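Both the delivery and the date constraints can be checked after deployment. A minimal verification sketch, assuming llms.txt is publicly reachable and reusing the datePublished value from the JSON-LD example above:

import urllib.request
from datetime import datetime

# llms.txt must be served as text/plain, not text/html.
req = urllib.request.Request("https://darkwood.com/llms.txt", method="HEAD")
with urllib.request.urlopen(req) as response:
    content_type = response.headers.get("Content-Type", "")
assert content_type.startswith("text/plain"), f"unexpected Content-Type: {content_type}"

# Dates must parse cleanly as ISO-8601, timezone included.
date_published = "2026-02-12T00:00:00+01:00"  # from the JSON-LD example
print(datetime.fromisoformat(date_published).isoformat())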
Why This Matters
Optimizing for AI agents is not about chasing trends.
It is about:
- Declaring intent clearly
- Reducing ambiguity
- Making content machine-interpretable
- Explicitly defining boundaries
Websites are becoming part of larger automated ecosystems.
Clarity is no longer optional.
Implementation Details
You can review the exact implementation in the corresponding pull request:
👉 https://github.com/darkwood-com/darkwood-com/pull/106