I Built a System That Publishes 30 Articles a Week Across 7 Websites for $0.08 Each
In March 2026, my content system published 47 articles across 7 websites. My LLM bill for the month was $3.76.
That works out to roughly $0.08 per article — including keyword research, generation, quality scoring, and publishing.
This is the breakdown of how that system works, what it took to build, and what I’d do differently.
What is OpenClaw?
OpenClaw is a name I gave to the collection of automation workflows, scripts, and infrastructure that runs my content operation. It’s not a product — it’s a stack that I assembled and operate myself.
The brief:
- Run keyword research daily per site
- Generate full articles automatically
- Apply a basic quality gate
- Publish to GitHub-backed Astro/React sites
- Ping IndexNow for fast indexing
- Never require me to touch it
It runs on a single Hetzner VPS (CAX11, €3.99/month) and costs €15–20/month all-in.
The Seven Sites
The system currently operates across these sites:
| Site | Niche | Generator |
|---|---|---|
| thehockeyanalytics.com | NHL data analysis | NHL API → Groq |
| thehockeybrain.com | Hockey insights | NHL API → Gemini |
| theunnamedroads.com | Travel/culture | Groq topic pipeline |
| emilingemarkarlsson.com | Personal/automation | Groq + manual |
| theagentfabric.com | AI agents | Groq topic pipeline |
| theprintroute.com | Printing/design | Groq topic pipeline |
| tan-website.netlify.app | Outdoors/adventure | Groq topic pipeline |
Each site has its own keyword research workflow and article generator, but they share infrastructure: the same Hetzner server, the same n8n instance, the same LiteLLM proxy.
The Stack
Hetzner VPS
└── Coolify (container management)
├── n8n — orchestration (23 active workflows)
├── LiteLLM — AI proxy + cost tracking
├── Minio — S3-compatible storage for drafts + data
├── Umami — privacy-friendly analytics
├── Listmonk — newsletter
└── OpenClaw webhook server (port 9191)
n8n is the core. Every workflow — from daily keyword research to Telegram notifications — runs in n8n. I run it self-hosted via Coolify on the same VPS.
LiteLLM sits in front of all LLM calls. Every request from every workflow goes through it, regardless of which provider it hits. This gives me a single dashboard for costs, rate limiting, and provider switching without changing any workflow code.
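Because LiteLLM exposes an OpenAI-compatible endpoint, any workflow can call it with a plain HTTP request. A minimal sketch of what such a call looks like (the proxy URL, model alias, and API key are placeholders, not my actual config):

```javascript
// Minimal sketch of routing an LLM call through a LiteLLM proxy.
// LiteLLM exposes an OpenAI-compatible /v1/chat/completions endpoint;
// the URL, model alias, and key below are placeholders.
const LITELLM_URL = "http://localhost:4000/v1/chat/completions";

function buildChatRequest(model, prompt) {
  return {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": "Bearer sk-litellm-placeholder",
    },
    body: JSON.stringify({
      model, // alias configured in LiteLLM, e.g. "groq-llama-70b"
      messages: [{ role: "user", content: prompt }],
    }),
  };
}

// Usage (inside an n8n Code node or any Node.js 18+ script):
// const res = await fetch(LITELLM_URL, buildChatRequest("groq-llama-70b", "Score these topics"));
```

Swapping providers then means editing the model alias in LiteLLM's config, not touching any workflow.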
Minio stores intermediate data between workflow runs. Keyword research runs in the morning and saves approved-keywords.json to Minio; the article generator runs later and reads that file. No database tables, no APIs between workflows — just files.
Coolify manages all containers with automatic HTTPS via Traefik. Deploying a new service is a UI action. I don’t write Docker Compose by hand anymore.
How an Article Gets Published
Here’s the full pipeline for a THA (The Hockey Analytics) article:
09:45 tha-keyword-research
→ Fetch NHL standings via API
→ Load last week's performance insights from Minio
→ Groq: score 8 topic candidates, pick top 3
→ Save approved-keywords.json to Minio
→ Telegram: "Research done, 3 keywords ready"
10:00 tha-seogenerator
→ Read approved-keywords.json from Minio
→ Groq: generate full article (title, body, metadata)
→ Quality gate: score on depth, specificity, uniqueness (0–100)
→ If score ≥ 65: auto-publish
→ If score < 65: skip, notify Telegram
→ On publish: POST to GitHub API → Netlify builds → IndexNow ping
→ Telegram: "Published: [title]"
Total time from trigger to live URL: about 90 seconds.
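The gate itself is a simple threshold check, and the IndexNow ping is a single POST. A sketch of both steps with the 65 cutoff from above (function and field names are mine, not the actual workflow code; the IndexNow key is a placeholder that must match the key file served from the site):

```javascript
// Sketch of the quality-gate decision and the IndexNow ping that
// follows a publish. Names are illustrative.
const PUBLISH_THRESHOLD = 65;

function gateArticle(article) {
  // article.qualityScore is the 0-100 score from the Groq scoring call.
  if (article.qualityScore >= PUBLISH_THRESHOLD) {
    return { action: "publish", notify: `Published: ${article.title}` };
  }
  return { action: "skip", notify: `Skipped (${article.qualityScore}/100): ${article.title}` };
}

// IndexNow accepts a batch of URLs per host in one request.
function buildIndexNowPing(host, key, urls) {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ host, key, urlList: urls }),
  };
}
// POST the result of buildIndexNowPing() to https://api.indexnow.org/indexnow
```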
For EIK (emilingemarkarlsson.com) articles, there’s a manual approval step. The workflow sends a Telegram message with inline buttons: Approve / Reject. I tap Approve, it publishes. This is intentional — EIK is my personal brand and I want to vet everything there.
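The approval message is an ordinary Telegram Bot API sendMessage call with an inline keyboard. A sketch of the payload (the chat id and callback values are placeholders for my setup):

```javascript
// Sketch of the Telegram approval message. reply_markup/inline_keyboard
// is the standard Bot API shape; chat_id and callback_data values are
// placeholders.
function buildApprovalMessage(chatId, title, articleId) {
  return {
    chat_id: chatId,
    text: `Ready for review: ${title}`,
    reply_markup: {
      inline_keyboard: [[
        { text: "Approve", callback_data: `approve:${articleId}` },
        { text: "Reject", callback_data: `reject:${articleId}` },
      ]],
    },
  };
}
// POST this as JSON to https://api.telegram.org/bot<TOKEN>/sendMessage;
// a second workflow listens for the callback_query and publishes on "approve".
```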
The Cost Breakdown
Here’s where the money actually goes:
LLM inference (via Groq):
- Keyword research: ~$0.002/run (small prompt, fast model)
- Article generation: ~$0.04–0.06/article (llama-3.3-70b, ~2000 tokens output)
- Quality scoring: ~$0.002/article
- Total per article: ~$0.05–0.08
Infrastructure:
- Hetzner CAX11: €3.99/month (ARM, 2 vCPU, 4GB RAM)
- Netlify: free tier (7 sites, auto-deploys from GitHub)
- Minio: self-hosted on the VPS, no extra cost
- Umami: self-hosted, no extra cost
- Domain registrations: ~€60/year total
Total monthly running cost: ~€15–20, depending on article volume.
The server is the main fixed cost, not the AI. LLM costs scale with usage but stay low because Groq is cheap for open-source models.
What I Got Wrong the First Time
1. Parallel fan-out in n8n
My first architecture had a “fan-out” pattern: one trigger node connected to four parallel article generators. This seemed clean but caused silent failures. If one branch produced an empty result, n8n would stop the chain without an error. I spent hours debugging before I understood the pattern.
Fix: sequential chains. Each node passes data to the next in a single line. No branches that merge back together.
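A related habit that makes sequential chains safer: fail loudly when a node receives nothing. In an n8n Code node, throwing an error stops the run visibly instead of letting an empty result pass silently down the chain. A sketch of the guard as a plain function (in the workflow it would be called with `$input.all()`; the "title" field check is illustrative):

```javascript
// Guard against the silent-empty-result failure described above.
// In an n8n Code node you'd call this with $input.all(); the field
// check is illustrative.
function guardItems(items) {
  if (!items.length || !items[0].json || !items[0].json.title) {
    // Throwing fails the node with a visible error instead of passing
    // an empty result downstream.
    throw new Error("Upstream node produced no usable article data");
  }
  return items;
}
```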
2. Hardcoding credentials in workflow expressions
n8n has a $credentials syntax for accessing stored credentials inside Code node expressions. It doesn’t work reliably when used in HTTP Request body fields. I lost two hours to this.
Fix: hardcode the values or use n8n’s built-in credential system at the node level (not in custom expressions).
3. Backtick template literals in jsonBody
n8n’s expression engine doesn’t support ES6 template literals in jsonBody parameters. Writing:
jsonBody: `{"text": "Hello ${name}"}`
fails silently or throws a parser error. String concatenation works:
jsonBody: '{"text": "Hello ' + name + '"}'
4. Quality without quantity doesn’t scale
Early on, I set the quality threshold too high (80/100) and the system published almost nothing. The rejected articles were often fine — they just didn’t match my overfitted scoring rubric. I lowered the threshold to 65 and accepted that some articles would be mediocre. They still drive traffic and don’t embarrass me.
What the Numbers Look Like
After about 4 months of running this at scale:
- emilingemarkarlsson.com: 2,178 pageviews in March (up from 158 in February)
- thehockeyanalytics.com: 86 pageviews in March (stable, niche topic)
- AI referrer traffic (Perplexity, ChatGPT, Bing AI): growing each week — this is what I’m tracking most carefully
The EIK traffic spike isn’t from OpenClaw directly — those articles are manually approved. It’s from the SEO infrastructure (schema.org structured data, IndexNow) working in combination with consistent publishing.
What This Is Good For
Automated publishing at this cost makes sense for:
- Topic coverage at scale — a site that needs 200 articles on a niche topic to be taken seriously can generate a foundation quickly
- Trend-responsive content — the NHL-API-driven sites publish standings analysis the morning after games
- Maintaining freshness signals — search engines and AI crawlers reward recently updated content
It is not a substitute for authoritative, experience-based writing. The best-performing articles on EIK are ones I wrote myself. The automated ones fill in the long-tail.
The Part I’m Still Figuring Out
AI referrer traffic is real and growing, but it’s hard to measure. My Umami setup tracks referrers, and I can see sessions from perplexity.ai, chatgpt.com, and Bing AI. But I don’t know which articles are being cited or why.
I’ve built a weekly AI traffic monitor that saves snapshots to Minio and tracks week-over-week trends. It’s early data. The hypothesis is that well-structured schema markup (FAQ, HowTo, Article with real author details) drives more citations than article volume alone.
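For concreteness, the kind of markup the hypothesis refers to is a minimal JSON-LD block per page. A sketch with placeholder values (in practice the Astro layout renders this into a `<script type="application/ld+json">` tag):

```javascript
// Minimal Article JSON-LD of the kind the hypothesis refers to.
// All values are placeholders.
const articleSchema = {
  "@context": "https://schema.org",
  "@type": "Article",
  headline: "Example headline",
  datePublished: "2026-03-01",
  dateModified: "2026-03-01",
  author: {
    "@type": "Person",
    name: "Author Name", // real author details matter for citations
    url: "https://example.com",
  },
};
```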
I’ll write a follow-up once I have three months of clean data.
If You Want to Build Something Like This
The stack I described is not a product you can buy. It’s a collection of decisions you have to make and debug yourself. The n8n workflows alone took about 40 hours to get stable.
If you want to build a similar system for your own content operation — or adapt parts of it for a business context — that’s something I consult on. The architecture decisions, the pitfalls I listed above, and the cost/quality trade-offs are non-obvious the first time through.
Get in touch if you want to talk through what makes sense for your situation.
The workflows, scripts, and infrastructure described in this article are tracked in a private GitHub repository. I’ll be open-sourcing components as they mature.