My Self-Hosted AI & Automation Stack: From Idea to Publication, Fully Controlled


I’ve always been drawn to control – control over my data, my costs, and my tools. So when the AI boom hit, and I saw subscription fees for various SaaS platforms stacking up, I decided to forge a different path. My goal? A completely self-hosted stack where everything runs on my own infrastructure.

This isn’t just a technical setup; it’s my personal productivity system for managing content, SEO, and automation. It’s all driven from Slack, and nothing leaves my own Hetzner server. It gives me ultimate flexibility, cost efficiency, and peace of mind regarding my data.

This setup is part of my coverage of AI agents and LLM integration — showing how to build intelligent automation with full control. For more on infrastructure, see my Coolify setup guide and explore my complete portfolio.

The Foundation: Coolify – My Personal PaaS

First, I needed a rock-solid base. Coolify became my go-to self-hosted PaaS (Platform as a Service) – a powerful, open-source alternative to Heroku or Vercel that I fully control. Here, all my services run as Docker containers, complete with automatic SSL, domain management, and super-easy deployments. It simplifies adding new services, backing them up, and updating them.

  • What it is: A self-hosted PaaS that streamlines managing Docker containers, domains, SSL, and deployments.
  • How I use it: Every service in my stack below runs as a Coolify “resource” on the same server. It’s the control plane that keeps everything organized and running smoothly.

The AI Brain: LiteLLM – My Unified LLM Gateway

Dealing with multiple AI provider APIs can be a headache. Different endpoints, different keys, different rate limits. That’s where LiteLLM comes in.

  • What it is: An open-source proxy that speaks the OpenAI-compatible API (/v1/chat/completions) but smartly routes requests to any LLM provider – OpenAI, Anthropic, Google Gemini, DeepSeek, you name it.
  • How I use it: LiteLLM acts as my single gateway for all Large Language Model (LLM) calls. Instead of configuring each service (OpenClaw, n8n, Langflow) with specific provider keys and endpoints, they all point to LiteLLM. If I want to switch models or add a new provider, I just update LiteLLM – no changes needed anywhere else in my stack.
  • Bonus: I love that I can set spending limits, log usage, and monitor costs per request directly from the LiteLLM UI. It’s fantastic for budgeting and understanding my AI consumption.
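
Because LiteLLM speaks the OpenAI-compatible API, every service talks to it the same way. Here’s a minimal sketch of what such a call looks like on the wire – the base URL, key, and model name are placeholder assumptions, and the actual send is left commented out:

```python
import json
import urllib.request

# Placeholder values -- substitute your own LiteLLM URL and master key.
LITELLM_BASE = "https://litellm.example.com/v1"
LITELLM_KEY = "sk-litellm-master-key"

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-compatible /chat/completions request aimed at LiteLLM.

    The payload is identical no matter which provider LiteLLM routes to;
    only the `model` name changes (e.g. "gpt-4o" vs. a Claude or Gemini model).
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url=f"{LITELLM_BASE}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {LITELLM_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("gpt-4o", "Suggest three article topics about self-hosting.")
# urllib.request.urlopen(req) would send it -- omitted here because the
# endpoint above is a placeholder.
```

Swapping providers means changing only the `model` string (or the routing config inside LiteLLM); the calling code never touches a provider-specific SDK.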

The Agent: OpenClaw – My SEO & Content Strategist

This is where the magic happens for content creation. OpenClaw is a brilliant, open-source AI agent designed to interact with users via Slack (or other interfaces) and execute tasks using various tools.

  • What it is: An adaptable AI agent that can run tools (read files, execute scripts, search memory) and interact with me.
  • How I use it: I’ve configured OpenClaw as my dedicated SEO and site agent. It handles everything from planning content and suggesting articles (based on my site analytics from Umami) to writing initial drafts and, crucially, publishing them. The entire process is driven from simple Slack commands like “article suggestions for emilingemarkarlsson” or “publish [slug]”.
  • Technical detail: OpenClaw points to LiteLLM. I set OPENAI_API_BASE to my LiteLLM URL and OPENAI_API_KEY to my LiteLLM master key. This means both its chat interactions and tool usage (like memory search) leverage the same unified AI gateway.

The Automator: n8n – My Workflow Orchestrator

For anything beyond the agent’s core capabilities, I turn to n8n.

  • What it is: A visual, node-based workflow automation engine, similar to Zapier or Make, but entirely self-hosted.
  • How I use it: n8n is perfect for scheduled or event-triggered flows. For example, I use it to fetch data from Umami, format it, perhaps send it to LiteLLM for a summary, and then post the result to Slack. While OpenClaw handles the main content logic, n8n complements it as a data feeder and a bridge to other services.
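
The formatting step in that daily-summary flow is just a small transform node. Sketched in plain Python, assuming the metric shape of Umami’s stats endpoint (each metric as `{"value": current, "prev": previous}` – verify against your Umami version):

```python
def format_umami_summary(stats: dict) -> str:
    """Turn an Umami stats payload into a one-line Slack summary.

    Assumes each metric arrives as {"value": current, "prev": previous},
    the shape returned by Umami's website stats endpoint.
    """
    views = stats["pageviews"]["value"]
    visitors = stats["visitors"]["value"]
    delta = views - stats["pageviews"]["prev"]
    arrow = "up" if delta >= 0 else "down"
    return f"Yesterday: {views} pageviews, {visitors} visitors ({delta:+d} vs. prior period, trending {arrow})"

# Example payload with made-up numbers:
sample = {
    "pageviews": {"value": 120, "prev": 95},
    "visitors": {"value": 80, "prev": 70},
}
print(format_umami_summary(sample))
```

In n8n this lives in a Code node between the HTTP Request node (Umami) and the Slack node, so the flow itself stays three nodes long.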

Related reading: Integrate APIs with n8n and n8n vs Zapier vs Make.

Other Essential Services (All in the Same Coolify Project)

To complete the picture, I run a few other open-source services alongside my core stack:

  • Umami: My privacy-focused analytics tool, giving me insights into visits and top pages. OpenClaw uses this data to prioritize content suggestions.
  • MinIO: An S3-compatible object storage for files and backups.
  • Langflow: A visual AI flow builder that also integrates with LiteLLM for LLM calls.
  • Open WebUI: A friendly chat interface to LiteLLM for manual, ad-hoc conversations outside of Slack.

What I’ve Implemented for SEO & Content Automation

This stack isn’t just about components; it’s about the integrated system I’ve built for content creation:

  1. One Agent, Multiple Sites: My OpenClaw agent uses a single instruction set (SEO-SITE-AGENT) that can apply to any of my sites. I simply specify the site in the command (e.g., “article suggestions for emilingemarkarlsson”), and it pulls up the relevant plan (language, content pillars, gaps).
  2. Slack as the Control Panel: I wanted a natural, conversational interface. Every agent reply ends with “Next steps:” and clear instructions (e.g., “Type: publish my-article-slug”). Approving or rejecting drafts is as simple as typing a number or “Approve.”
  3. Draft → GitHub → Netlify (Automated Publishing): Drafts are saved within the OpenClaw container. When I type “publish [slug],” the agent runs a script that reads the draft and metadata, finds the correct GitHub repository (defined in site-repos.json), commits, and pushes the changes. Netlify then automatically builds and deploys the site. No manual copy-pasting, ever.
  4. Cost-Free Reminders: To keep the content flow consistent without incurring AI costs, a simple cron job on my server (e.g., daily at 09:00) sends a direct message to Slack via an Incoming Webhook: “Time to think about a new article?” Simple, effective, and free.
  5. Security First: All sensitive data – API keys, GitHub tokens, webhook URLs – are securely stored as Coolify Environment Variables or in server-side files, never in my Git repository. My installation script only syncs agent files, plans, and necessary scripts into the OpenClaw container.
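
The repository lookup in step 3 is the part worth sketching. This is not my actual publish script, just an illustration of the resolution it performs – the registry entries mirror the fields site-repos.json defines (repo, contentPath, domain), but the values and the `.md` extension are assumptions:

```python
# Illustrative site-repos.json contents; values are made up.
SITE_REPOS = {
    "emilingemarkarlsson": {
        "repo": "git@github.com:example/emilingemarkarlsson.git",
        "contentPath": "content/articles",
        "domain": "emilingemarkarlsson.com",
    }
}

def resolve_publish_target(site: str, slug: str, registry: dict) -> dict:
    """Map a 'publish [slug]' command to a repo, file path, and live URL."""
    if site not in registry:
        raise KeyError(f"No repo configured for site '{site}' in site-repos.json")
    entry = registry[site]
    return {
        "repo": entry["repo"],
        "path": f"{entry['contentPath']}/{slug}.md",   # assumed Markdown drafts
        "url": f"https://{entry['domain']}/{slug}",
    }

target = resolve_publish_target("emilingemarkarlsson", "my-article-slug", SITE_REPOS)
```

From `target`, the real script only needs to copy the draft to `path`, commit, and push to `repo`; Netlify handles the rest.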

Why This Specific Combination Works for Me

  • LiteLLM in the Middle: This is the linchpin. It provides a single, consistent API endpoint for all my LLM calls. Switching models or adding new providers (like DeepSeek or Gemini) only requires a quick config change in LiteLLM, not in every individual service that uses an LLM.
  • OpenClaw for “Agent Work”: Tasks like SEO planning, keyword strategy, brief generation, writing, and step-by-step publishing require a stateful agent that can use tools and guide me through “Next steps.” OpenClaw excels at this, making it a better fit than trying to build complex, interconnected n8n flows for every nuanced content task.
  • n8n for Complementary Automation: While OpenClaw handles the core agent logic, n8n remains invaluable for time-based or event-driven automations. It’s perfect for “every morning, fetch X and send it to Slack/OpenClaw” type scenarios, or for integrating services that OpenClaw doesn’t directly talk to.
  • Coolify for Simplicity: Managing multiple services and containers manually is tedious. Coolify provides a centralized, user-friendly interface to deploy, view logs, set environment variables, and update everything without needing to SSH into the server and manually mess with docker-compose.

A Quick Peek at My SEO Article Workflow

Here’s how effortlessly an article goes from an idea to published content:

  1. Reminder (cron): My Slack gets a message: “Want article suggestions?”
  2. My Input: I type: “article suggestions for emilingemarkarlsson”.
  3. OpenClaw (via LiteLLM): The agent analyzes my content plan and Umami analytics, then sends 3-5 tailored suggestions to Slack, always ending with “Next steps: Type 1-5 or produce article for … about …”.
  4. My Choice: I reply, perhaps “2” or “Approve.” The agent then generates a detailed brief, followed by a full draft.
  5. Publish!: I type “publish [slug]”. The agent executes the publishing script, pushing the content to GitHub.
  6. Netlify Deploys: Netlify automatically picks up the changes and deploys the new article.
  7. Confirmation: The agent confirms in Slack with a direct link to my newly published article.
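
The reminder in step 1 needs nothing more than a POST to a Slack Incoming Webhook, scheduled with a crontab entry like `0 9 * * * python3 remind.py`. A minimal sketch – the webhook URL is a placeholder (Slack issues the real one when you create the webhook), and the send is left commented out:

```python
import json
import urllib.request

# Placeholder -- use the URL Slack generates for your Incoming Webhook.
WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"

def build_reminder(text: str) -> urllib.request.Request:
    """Build the POST request the daily cron job sends to Slack."""
    return urllib.request.Request(
        url=WEBHOOK_URL,
        data=json.dumps({"text": text}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_reminder("Want article suggestions?")
# In the cron job: urllib.request.urlopen(req)
```

No LLM call, no agent wakeup – which is exactly why this part of the loop costs nothing.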

Technical Notes for the Curious (or Those Who Want to Replicate)

  • OpenClaw ↔ LiteLLM Connection: In Coolify, for your OpenClaw container, set:
    • OPENAI_API_BASE = Your LiteLLM URL (e.g., https://litellm.yourdomain.com/v1)
    • OPENAI_API_KEY = Your LiteLLM master key
    This ensures both OpenClaw’s chat functionality and its tool-based operations (like memory search) correctly use LiteLLM. If you encounter an “Invalid OpenAI API key” error, double-check your LiteLLM key and ensure no budget or spend limits in LiteLLM are rejecting requests.
  • Adding a New Site: To integrate a new website, simply add it to site-repos.json (specifying the repo, contentPath, and domain). Then, create a plan file under openclaw/agents/plans/ for that site. Finally, run your install script and test it with “article suggestions for [new site]” in Slack.
  • Documentation in My Repo: I keep detailed documentation within my setup’s repo: OPENCLAW-OVERBLICK.md (overview + troubleshooting), GIT-SETUP.md (token, publish guide), SNABBKOMMANDON.md (Slack commands), and SETUP-SEO-OPENCLAW.md (full setup guide).
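
One tripwire with OpenAI-compatible gateways is a base URL missing its /v1 suffix. A tiny sanity check along these lines catches it before the agent starts – the env var names are the real ones from above, but the normalization logic is my own assumption:

```python
import os

def normalized_base(url: str) -> str:
    """Ensure the LiteLLM base URL ends with /v1, as OpenAI-style clients expect."""
    url = url.rstrip("/")
    return url if url.endswith("/v1") else url + "/v1"

# Falls back to a placeholder when the env var is unset.
base = normalized_base(os.environ.get("OPENAI_API_BASE", "https://litellm.yourdomain.com"))
key = os.environ.get("OPENAI_API_KEY", "")
assert base.endswith("/v1"), "OPENAI_API_BASE should point at the /v1 root"
```

Running something like this as a container start-up check turns a silent misconfiguration into an immediate, readable failure.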

Conclusion: Why This Stack is My Go-To

I run a self-hosted stack with Coolify as the base, LiteLLM as the single AI gateway, OpenClaw as my intelligent agent for SEO and content, and n8n for flexible workflow automation. Everything is seamlessly driven from Slack – from getting article suggestions and approving drafts to publishing content. The integration with GitHub and Netlify ensures automatic deployment.

This setup grants me unparalleled control over my data and costs, providing a clear, automated path from a nascent idea to a fully published article. It’s efficient, personal, and exactly the kind of robust system I love to build and use.
