GooseOps

OpenClaw in Three Live Streams

Published on February 19, 2026

If you’ve ever heard the buzz around OpenClaw (formerly ClawdBot or MoltBot), you’ll know it’s the wild card on the self‑hosted AI assistant playground. I set out to get it running in a personal “home‑cloud” VM, hook it up to a local language model and a local Matrix server, and test the limits of its documentation and security posture. The result? A marathon of trial‑and‑error that was a great lesson in open‑source humility, tool‑chain gymnastics, and the stubborn reality that “local” can be misleading.

Below is a distilled recap of the three live‑stream experiments, the pain points I hit, the work‑arounds that eventually made things run, and the best‑practice notes that I hope will save you a few hours of frustration.


What is OpenClaw and Why the Hype?

  • OpenClaw is an open‑source AI assistant framework that runs on your own hardware and can talk to a variety of LLMs (OpenAI, Claude, local models, etc.).
  • The original project started as “ClawdBot” but was quickly re‑branded to avoid trademark conflict. After a couple of iterations the name stuck as OpenClaw.
  • The hype comes from two main promises:
    1. Self‑hosted AI Agent: keep all secrets on a machine you control.
    2. Tool‑calling and skills: plug in utilities (install packages, edit files, query APIs) with minimal friction.

I was skeptical at first – the docs were full of copy‑paste and “quick‑start” steps that left out the real nitty‑gritty, especially for non‑standard setups. But the project’s early‑adopter community was big enough to keep the hype alive, so I thought, “Why not give it a shot?”


The Install – From Scratch to Working VM

VM Prep

  • Terraform: I spun up a fresh Ubuntu 24.04 VM (8 CPU, 8 GiB RAM, 30 GiB disk) in my home cloud.
  • Clean slate: The VM had no sensitive data – no passwords, no API keys, no private repos. This was my “sandbox” to run OpenClaw.

Install Script

  • The openclaw install script is a one‑liner that:
    • Detects whether Node.js is installed and, if not, installs Node 22 from the official source.
    • Installs the OpenClaw CLI via npm install -g @openclaw/openclaw-cli.
    • Runs openclaw configure, which launches an interactive wizard.

The first thing I learned: the wizard is surprisingly fragile. If you skip or mis‑type an answer, the wizard will just stall – you’ll end up in a state where the config file is missing crucial blocks. I’m sure this will improve over time.
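For a sense of what the one‑liner is doing, here’s a dry‑run sketch of those three steps (the actual install commands are left commented so nothing is modified; the real script handles far more edge cases):

```shell
#!/bin/sh
# Dry-run sketch of the OpenClaw install steps; install commands commented out.
set -e

ensure_node() {
  if command -v node >/dev/null 2>&1; then
    echo "node found: $(node -v)"
  else
    echo "node missing: would install Node 22 from the official source"
  fi
}

ensure_node
# npm install -g @openclaw/openclaw-cli   # step 2: install the CLI
# openclaw configure                      # step 3: interactive wizard
```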

The Documentation Jungle

The docs are still a hodge‑podge. The “quick‑start” page is huge, but it isn’t focused on the exact scenario I wanted (a local LLM, Matrix integration). You need to:

  1. Read the safety section – OpenClaw can run arbitrary commands. Never run it on your personal laptop; always use a dedicated VM.
  2. Find the openclaw.json – it lives in ~/.openclaw/. The wizard writes to this file, but you’ll need to edit it manually for many settings.

Configuring a Local LLM

Why Ollama?

Ollama is a lightweight local LLM API that OpenClaw can talk to. The trick is twofold:

  1. Add the provider: providers: { ollama: { base_url: "http://127.0.0.1:8080" } } – the wizard never exposes this setting.
  2. Add the model: models: { glm4.7-Flash-Q4_K_M: { provider: "ollama", model: "glm4.7-Flash-Q4_K_M", context: 32000 } } – the ID must be the hash that Ollama gives you (ollama list), not the friendly name.

I initially used the friendly name, which broke everything. The fix was to copy the exact hash string from ollama list. A critical learning point: OpenClaw expects the raw ID, not the human‑readable name.
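Assembled into ~/.openclaw/openclaw.json, the two blocks look roughly like this (the model ID below is a placeholder – copy the exact string that ollama list prints for your model, and note the schema may shift between releases):

```json
{
  "providers": {
    "ollama": {
      "base_url": "http://127.0.0.1:8080"
    }
  },
  "models": {
    "glm4.7-Flash-Q4_K_M": {
      "provider": "ollama",
      "model": "glm4.7-Flash-Q4_K_M",
      "context": 32000
    }
  }
}
```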

Context Size and VRAM

I was running a 19 GiB model on a 24 GiB GPU, which should have been fine. However, Ollama reserves memory for the context window you set. If you set the context too high (e.g., 512 k tokens ≈ 50 GiB), the process stalls, eating all swap and RAM. I reduced it to 32 k tokens – just enough for the assistant’s needs without choking the GPU. (I’ve since learned that OpenClaw recommends a minimum 64 k context window for starting out.)
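To see why the context window eats memory so fast, here’s a back‑of‑the‑envelope KV‑cache estimate. The layer count, KV‑head count, head dimension, and fp16 precision below are assumed values for illustration, not the actual model’s dimensions:

```shell
# Rough KV-cache size per token: 2 (K and V) * layers * kv_heads * head_dim * bytes.
# Model dimensions here are illustrative assumptions, not measured values.
kv_mib() {
  context=$1
  bytes_per_token=$((2 * 32 * 8 * 128 * 2))   # 131072 bytes = 128 KiB per token
  echo $(( bytes_per_token * context / 1024 / 1024 ))
}

kv_mib 32000    # prints 4000  (≈ 4 GiB – tight but workable next to a 19 GiB model)
kv_mib 512000   # prints 64000 (≈ 64 GiB – no chance on a 24 GiB GPU)
```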


The Matrix Plugin – The Final Piece

Installing the Plugin

The Matrix integration is shipped as an NPM package. Installing it via openclaw plugins install matrix sometimes fails because the npm install needs to run inside the OpenClaw working directory. The sequence that eventually worked for me was:

npm install -g @vector-im/matrix-bot-sdk   # pull the matrix plugin dependencies
openclaw plugins enable matrix

Configuring Channels

Add a Matrix channel under the channels block (make sure to use the actual matrix server details):

{
  "channels": {
    "matrix": {
      "homeserver": "https://matrix.example.com",
      "accessToken": "syt_example_access_token",
      "groupPolicy": "allowlist",
      "groupAllowFrom": [
        "@user:matrix.example.com"
      ],
      "dm": {
        "policy": "allowlist",
        "allowFrom": [
          "@user:matrix.example.com"
        ]
      }
    }
  }
}
  • Access token – obtained via a simple curl request outlined in the OpenClaw docs.
  • Allow lists – you must whitelist yourself in the groupAllowFrom or dm.allowFrom; otherwise the bot will never join the room.
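For reference, the token request is a standard password login against the Matrix client‑server API. A sketch with placeholder homeserver, user, and password (the actual curl call is commented out so nothing is sent):

```shell
# Sketch of fetching a Matrix access token via the v3 password-login endpoint.
# HS, user, and password are placeholders; adapt them to your own server.
HS="https://matrix.example.com"
BODY='{"type":"m.login.password","identifier":{"type":"m.id.user","user":"openclaw-bot"},"password":"CHANGE_ME"}'
echo "$BODY"
# curl -s -X POST "$HS/_matrix/client/v3/login" \
#   -H 'Content-Type: application/json' -d "$BODY"
# The JSON response contains the "access_token" to paste into openclaw.json.
```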

The first time I set this up, I forgot the dm.allowFrom field, so I would invite the bot, but it would never join. After adding the whitelist, the bot joined instantly.

End‑to‑End Encryption?

I experimented with Matrix’s end‑to‑end encryption (E2EE) and couldn’t get it to work reliably in a room. It works in a direct message with the bot, but only if you never delete that direct‑message session. Rooms, however, require you to verify the bot’s device – a flow that OpenClaw hasn’t fully implemented yet (or I was just too ignorant of the process to get it to work). If you’re running a private Matrix server, you will have to disable E2EE in the rooms you invite the bot to in order for it to join and chat. I don’t like this, but it’s a self‑hosted Matrix instance that I’m in complete control of, so I know where the messages live.


Security Realities

“Local” Is Not Always Local

  • The LLM API: If you’re using Ollama or a local model, it’s local. If you use an external API (OpenAI, Claude), your request goes over the internet.
  • The Matrix server: If you host it yourself, the messages stay on your machine. If you use a third‑party Matrix server or another chat app (e.g. WhatsApp, Telegram, Discord), the chat data goes out over the internet and is therefore not local.
  • API keys: Any key you give the assistant can end up in the prompt. OpenClaw tries to confine secrets to tool calls and keep them out of prompts, but as a user of this bot you can never be certain a secret won’t be injected into a prompt. Even if you monitored the outgoing requests before TLS encryption and discovered that a secret had been sent with a prompt, it would be too late – the secret would already be leaked.

Bottom line: OpenClaw can keep the “brain” on your hardware, but it can still expose secrets over the network unless you’re extremely careful with the skills, the model provider, and the chat app you use.

Run It in a Sandbox

The safety warning in the docs isn’t just fluff: the assistant can run any shell command. I followed the “never run on a machine with secrets” advice and kept it on a clean VM. If you must run it on a machine that does contain secrets, the only safe way is to hard‑enforce a strict allow‑list of tools and commands, or better yet, run the assistant in a separate container with limited privileges.
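If you go the container route, here’s the kind of invocation I mean – a sketch only, with an illustrative image and resource limits rather than a tested OpenClaw setup (the command is echoed, not run):

```shell
# Sketch of a locked-down throwaway container: no host mounts, dropped
# capabilities, capped memory/CPU. Image and limits are illustrative.
DOCKER_CMD='docker run --rm -it --cap-drop ALL --memory 4g --cpus 2 node:22-bookworm bash'
echo "$DOCKER_CMD"
# Inside the container you would then run:
#   npm install -g @openclaw/openclaw-cli && openclaw configure
```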


What I Learned (and What to Share with You)

  • Don’t skip the wizard – an aborted run leaves an unfinished config, and you’ll spend hours hunting a missing models block or provider.
  • The model ID is the hash from ollama list, not the friendly name – mis‑configuring this leaves the assistant unable to talk to your local model.
  • Context size directly consumes GPU/RAM – a 512 k context can kill a 24 GiB GPU.
  • NPM installs need the right directory – e.g. npm install -g @vector-im/matrix-bot-sdk.
  • The Matrix plugin needs an allow list – you can invite the bot, but it will never join if you haven’t allowed your own user.
  • End‑to‑end encryption is a pain – for now, best to run a private Matrix room with E2EE disabled.
  • Local = local, except for API calls – even a local assistant can still expose secrets over the network.
  • You get a “human‑like” assistant – tool calling, browsing, and file editing, all on a machine you own. It’s the most privacy‑first solution I’ve seen, but you still need to manage secrets properly.
  • Remove the extra Matrix directory – the directory under ~/.openclaw/extensions/matrix is unused; deleting it fixes the duplicate‑plugin warning.

Takeaway

OpenClaw is awesome for a hobbyist looking to own their AI assistant, but it’s a learning curve. The documentation still needs polishing, especially around non‑standard scenarios (local LLM, Matrix, custom plugins). And as always, the “local” promise has its caveats: secrets can still travel to external APIs if you’re not using a local LLM, and a Matrix server you don’t host yourself could leak data to the world. Protect yourself!

If you’re up for a few hours of tinkering and a lot of “I’ve got to write that config manually” moments, this is a great project. It forces you to think through the entire toolchain, from VM provisioning to secure API usage, and in the end you end up with a self‑hosted AI that can run your scripts, edit your files, and browse the web – all on a machine you own.

Happy hacking, and may your VM stay as clean as your office!
