
OpenClaw Conversation for Home Assistant

⚠️ This project is no longer actively maintained (as of April 2026).

I am no longer using OpenClaw myself, so I won't be shipping new fixes or features here. The code as of v0.4.2 is stable and works for the setups it was tested against, but new issues and pull requests will not be actively worked on.

Want to keep it going? The repo stays public on purpose — feel free to fork it and take it wherever you need. @RetiredWizard's fork already contains interesting improvements (single-session conversation handling, extra context loading on the first prompt) and is a good starting point if you want to build on something.

Thanks to everyone who used, tested, and reported issues on this project 🙏


Turn your OpenClaw agent into a Home Assistant voice assistant.

This fork keeps the standard POST /v1/chat/completions integration flow, but also forwards stable Home Assistant identity metadata when available:

  • conversation_id
  • user_id
  • device_id
  • language
  • local_date

The goal is to let OpenClaw apply its own routing, session, and context policy with better identity continuity, without breaking existing installations that already use the stock chat completions endpoint.

Say a wake word, ask a question, get a spoken answer — powered by your own OpenClaw agent with all its tools, memory, and personality.

"Hey Nabu" → Whisper STT → OpenClaw Agent → Piper TTS → Speaker

What you get

  • Your full OpenClaw agent as a HA conversation agent
  • Voice control through HA Voice PE, phone app, or browser
  • Works with any STT/TTS engine (Whisper, Piper, HA Cloud...)
  • Simple setup: just point it at your OpenClaw Gateway
  • Stable HA identity metadata forwarding for backends that support identity-aware routing or session handling

What this fork changes

  • Keeps using the standard OpenClaw endpoint: /v1/chat/completions
  • Preserves backward compatibility for existing users
  • Forwards conversation_id, user_id, device_id, language, and local_date as extra request fields
  • Does not force any new session policy in the Home Assistant plugin
  • Lets the OpenClaw backend decide how identity continuity should be used
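
The forwarding behavior described above can be sketched in a few lines of Python. This is a minimal, illustrative sketch, not the integration's actual code: the function name, payload shape, and example values are assumptions; only the field names and the "included only when available" behavior come from this README.

```python
HA_IDENTITY_FIELDS = ("conversation_id", "user_id", "device_id", "language", "local_date")

def build_chat_request(text: str, ha_context: dict) -> dict:
    """Build a chat completions payload, attaching Home Assistant
    identity metadata as extra top-level fields when available."""
    payload = {
        "model": "openclaw:main",
        "messages": [{"role": "user", "content": text}],
    }
    # Only forward fields Home Assistant actually provided.
    for key in HA_IDENTITY_FIELDS:
        if ha_context.get(key) is not None:
            payload[key] = ha_context[key]
    return payload

# Example: a request from a known conversation on an unknown device.
example = build_chat_request(
    "Turn off the kitchen lights",
    {"conversation_id": "abc123", "language": "en", "device_id": None},
)
```

A backend that ignores unknown top-level fields sees a normal chat completions request; one that understands them can use them for routing or session continuity.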

Installation

HACS (recommended)

  1. Open HACS in Home Assistant
  2. Click the three-dot menu > Custom repositories
  3. Add nicolasglg/openclaw-conversation as Integration
  4. Search for and install OpenClaw Conversation
  5. Restart Home Assistant
  6. Go to Settings > Integrations > Add Integration > OpenClaw Conversation

Manual

Copy custom_components/openclaw_conversation into your HA config/custom_components/ directory and restart.

Configuration

1. Add the integration

Settings > Devices & Services > Add Integration > OpenClaw Conversation

Field        Value
Name         Display name (e.g. "OpenClaw")
Gateway URL  http://<gateway-ip>:<port> (e.g. http://192.168.1.100:18789)
API Token    Your gateway auth token
Model        openclaw:main (default); must match a model that exists on your gateway
Timeout      30 seconds

2. Set up a Voice Assistant

Settings > Voice Assistants > create or edit an assistant:

  • Conversation agent: select OpenClaw
  • Speech-to-Text: Whisper, Faster Whisper, or HA Cloud
  • Text-to-Speech: Piper, Google Translate, or HA Cloud
  • Wake word: e.g. "Ok Nabu" via openWakeWord

3. Assign to a voice device

For HA Voice PE or other satellites: set Preferred Assistant to your OpenClaw assistant in the device settings.

4. Test it

Say the wake word and speak, or use Voice Assistants > Start a conversation to test via text.

Prerequisites

  • OpenClaw Gateway with Chat Completions endpoint enabled
  • Home Assistant 2024.1+
  • HACS installed

Enable Chat Completions on your gateway

Add this to your openclaw.json inside the gateway block:

{
  "gateway": {
    "http": {
      "endpoints": {
        "chatCompletions": { "enabled": true }
      }
    }
  }
}

Restart your gateway after the change.
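
If you manage openclaw.json from a script, the change above can be applied without hand-editing. This is a hedged sketch: the helper name and the idea of patching the file programmatically are assumptions; only the gateway.http.endpoints.chatCompletions.enabled key comes from this README.

```python
import json

def enable_chat_completions(config: dict) -> dict:
    """Return a copy of an openclaw.json config with the chat
    completions endpoint enabled, creating missing keys as needed."""
    merged = json.loads(json.dumps(config))  # cheap deep copy
    node = merged
    for key in ("gateway", "http", "endpoints"):
        node = node.setdefault(key, {})
    node.setdefault("chatCompletions", {})["enabled"] = True
    return merged

# Usage (path is an assumption; use wherever your gateway reads its config):
# with open("openclaw.json") as f:
#     cfg = json.load(f)
# with open("openclaw.json", "w") as f:
#     json.dump(enable_chat_completions(cfg), f, indent=2)
```

Existing keys in the gateway block (auth settings, other endpoints) are left untouched; only the chatCompletions flag is added or overwritten.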

Identity metadata

This fork still uses the normal chat completions endpoint, but it also forwards extra Home Assistant request metadata when available:

  • conversation_id
  • user_id
  • device_id
  • language
  • local_date

OpenClaw backends that understand these fields can use them for routing or session continuity. Backends that ignore unknown fields should continue to work as before.
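
On the backend side, a field-tolerant handler might pull this metadata out of an incoming request like so. This is an illustrative sketch, not OpenClaw's actual implementation; the function and constant names are hypothetical, and only the five field names come from this README.

```python
KNOWN_IDENTITY_FIELDS = ("conversation_id", "user_id", "device_id", "language", "local_date")

def extract_identity(request_body: dict) -> dict:
    """Collect the Home Assistant identity fields from a chat
    completions request body, silently ignoring anything else."""
    return {k: request_body[k] for k in KNOWN_IDENTITY_FIELDS if k in request_body}
```

Because the fields are simply extra keys on a standard request, a backend that never calls anything like this still handles the request normally.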

Network notes

  • HA must reach your OpenClaw Gateway over HTTP
  • If they're on different machines, use the gateway's LAN IP (not 127.0.0.1)
  • Open port 18789 (default) if needed
  • Docker users: 127.0.0.1 refers to the container, use the host's LAN IP instead

STT / TTS recommendations

Speech-to-Text

Engine                    Speed            Notes
HA Cloud                  Fast             Requires subscription
Faster Whisper (Wyoming)  Good             Separate machine with decent CPU/GPU
Whisper (local add-on)    Slow on weak HW  Not ideal on HA Green / Pi

Text-to-Speech

Engine                Quality        Notes
Piper (local)         Good, natural  Lightweight, runs anywhere
HA Cloud              Excellent      Requires subscription
Google Translate TTS  Decent         Needs internet

Tip: On HA Green or Raspberry Pi, local Whisper will be slow. Use Faster Whisper on a separate machine or HA Cloud.

Troubleshooting

  • Cannot connect to gateway: check the URL (curl http://<ip>:<port>/v1/chat/completions) and the firewall. Don't use 127.0.0.1 across machines.
  • Model not available: the configured model name does not exist on your OpenClaw Gateway. Try openclaw:main or list the models your gateway exposes.
  • Endpoint disabled (405): enable chatCompletions in openclaw.json and restart the gateway.
  • Invalid auth (401): check the token. Ensure gateway.auth.mode is "token".
  • No response from OpenClaw / empty stream / data: [DONE]: the gateway opened the stream but never produced a response before closing it, usually a timeout on the gateway side. Add agents.defaults.llm.idleTimeoutSeconds: 180 to your openclaw.json and restart the gateway.
  • Red flashing light (Voice PE): STT failed; check your STT engine config.
  • Agent not in dropdown: restart HA after installing, and check the logs for errors.

Known limitations

  • Response latency: the full pipeline (STT → LLM → TTS) takes a few seconds. Local Whisper on low-powered devices adds delay.
  • No continuous conversation: Wake word needed after each response (HA pipeline limitation).
  • No audio streaming: Responses are fully generated before being spoken.

Support the project

Like it? Found it useful?

Buy Me A Beer


License

MIT
