⚠️ This project is no longer actively maintained (as of April 2026). I am not using OpenClaw myself anymore, so I won't be shipping new fixes or features here going forward. The code as of v0.4.2 is stable and works for the setups it was tested against, but new issues and pull requests will not be actively worked on from my side.
Want to keep it going? The repo stays public on purpose — feel free to fork it and take it wherever you need. @RetiredWizard's fork already contains interesting improvements (single-session conversation handling, extra context loading on the first prompt) and is a good starting point if you want to build on something.
Thanks to everyone who used, tested, and reported issues on this project 🙏
Turn your OpenClaw agent into a Home Assistant voice assistant.
This fork keeps the standard POST /v1/chat/completions integration flow, but also forwards stable Home Assistant identity metadata when available:
`conversation_id`, `user_id`, `device_id`, `language`, and `local_date`
The goal is to let OpenClaw apply its own routing, session, and context policy with better identity continuity, without breaking existing installations that already use the stock chat completions endpoint.
Say a wake word, ask a question, get a spoken answer — powered by your own OpenClaw agent with all its tools, memory, and personality.
"Hey Nabu" → Whisper STT → OpenClaw Agent → Piper TTS → Speaker
- Your full OpenClaw agent as a HA conversation agent
- Voice control through HA Voice PE, phone app, or browser
- Works with any STT/TTS engine (Whisper, Piper, HA Cloud...)
- Simple setup: just point it at your OpenClaw Gateway
- Stable HA identity metadata forwarding for backends that support identity-aware routing or session handling
- Keeps using the standard OpenClaw endpoint: `/v1/chat/completions`
- Preserves backward compatibility for existing users
- Forwards `conversation_id`, `user_id`, `device_id`, `language`, and `local_date` as extra request fields
- Does not force any new session policy in the Home Assistant plugin
- Lets the OpenClaw backend decide how identity continuity should be used
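The forwarding described above can be sketched as follows. The five field names and the endpoint come from this README; treating them as optional top-level keys in the request body is an assumption about the wire format, and `build_payload` is a hypothetical helper, not part of the integration.

```python
# Sketch: a chat-completions request body with the extra HA identity
# fields this fork forwards. Fields are only included when Home Assistant
# provides them, so a stock backend sees an unchanged request.
def build_payload(text, model="openclaw:main", *,
                  conversation_id=None, user_id=None,
                  device_id=None, language=None, local_date=None):
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": text}],
    }
    extras = {
        "conversation_id": conversation_id,
        "user_id": user_id,
        "device_id": device_id,
        "language": language,
        "local_date": local_date,
    }
    # Drop unset identity fields entirely instead of sending nulls.
    payload.update({k: v for k, v in extras.items() if v is not None})
    return payload

payload = build_payload("Turn on the lights",
                        conversation_id="abc123", language="en")
```

Because missing fields are omitted rather than sent as `null`, a backend that predates this fork never sees keys it does not expect unless Home Assistant actually supplied them.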
- Open HACS in Home Assistant
- Click the 3 dots menu > Custom repositories
- Add `nicolasglg/openclaw-conversation` as an Integration
- Search for and install OpenClaw Conversation
- Restart Home Assistant
- Go to Settings > Integrations > Add Integration > OpenClaw Conversation
Copy `custom_components/openclaw_conversation` into your HA `config/custom_components/` directory and restart.
Settings > Devices & Services > Add Integration > OpenClaw Conversation
| Field | Value |
|---|---|
| Name | Display name (e.g. "OpenClaw") |
| Gateway URL | http://<gateway-ip>:<port> (e.g. http://192.168.1.100:18789) |
| API Token | Your gateway auth token |
| Model | openclaw:main (default) — must match a model that exists on your gateway |
| Timeout | 30 seconds |
Settings > Voice Assistants > create or edit an assistant:
- Conversation agent: select OpenClaw
- Speech-to-Text: Whisper, Faster Whisper, or HA Cloud
- Text-to-Speech: Piper, Google Translate, or HA Cloud
- Wake word: e.g. "Ok Nabu" via openWakeWord
For HA Voice PE or other satellites: set Preferred Assistant to your OpenClaw assistant in the device settings.
Say the wake word and speak, or use Voice Assistants > Start a conversation to test via text.
- OpenClaw Gateway with Chat Completions endpoint enabled
- Home Assistant 2024.1+
- HACS installed
Add this to your openclaw.json inside the gateway block:
```json
{
  "gateway": {
    "http": {
      "endpoints": {
        "chatCompletions": { "enabled": true }
      }
    }
  }
}
```

Restart your gateway after the change.
This fork still uses the normal chat completions endpoint, but it also forwards extra Home Assistant request metadata when available:
`conversation_id`, `user_id`, `device_id`, `language`, and `local_date`
OpenClaw backends that understand these fields can use them for routing or session continuity. Backends that ignore unknown fields should continue to work as before.
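On the backend side, picking up these fields while staying compatible with clients that omit them can look like the sketch below. This is a hypothetical helper illustrating the "ignore unknown, use known" behavior the paragraph describes; it is not part of OpenClaw.

```python
# Sketch: an identity-aware backend extracts whichever identity fields
# are present in the request body. Requests without them work unchanged.
KNOWN_IDENTITY_FIELDS = ("conversation_id", "user_id", "device_id",
                         "language", "local_date")

def extract_identity(request_body: dict) -> dict:
    """Return only the identity fields that were actually sent."""
    return {k: request_body[k] for k in KNOWN_IDENTITY_FIELDS
            if k in request_body}

identity = extract_identity({
    "model": "openclaw:main",
    "messages": [],
    "user_id": "ha-user-1",
    "language": "en",
})
```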
- HA must reach your OpenClaw Gateway over HTTP
- If they're on different machines, use the gateway's LAN IP (not `127.0.0.1`)
- Open port `18789` (default) if needed
- Docker users: `127.0.0.1` refers to the container; use the host's LAN IP instead
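To check reachability from the machine hosting Home Assistant, a small probe like the one below can help. It is a hypothetical diagnostic script (not part of the integration); the endpoint path and bearer-token header match this README, and any non-network error code it returns (e.g. 401 or 405) still proves the gateway is reachable.

```python
# Sketch: probe the gateway's chat-completions endpoint to distinguish
# "unreachable" from "reachable but misconfigured".
import urllib.error
import urllib.request

def build_probe(base_url: str, token: str) -> urllib.request.Request:
    """Build a minimal POST request against the chat completions endpoint."""
    return urllib.request.Request(
        base_url.rstrip("/") + "/v1/chat/completions",
        data=b"{}",  # empty body: we only care about reachability and auth
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

def probe_gateway(base_url: str, token: str, timeout: float = 5.0) -> int:
    """Return the HTTP status code, or raise ConnectionError if unreachable."""
    req = build_probe(base_url, token)
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code  # e.g. 401 = bad token, 405 = endpoint disabled
    except urllib.error.URLError as e:
        raise ConnectionError(f"Gateway unreachable: {e.reason}") from e
```

Run `probe_gateway("http://192.168.1.100:18789", "your-token")` from the HA host; a `ConnectionError` points at the networking issues listed above rather than at the integration.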
| Engine | Speed | Notes |
|---|---|---|
| HA Cloud | Fast | Requires subscription |
| Faster Whisper (Wyoming) | Good | Separate machine with decent CPU/GPU |
| Whisper (local add-on) | Slow on weak HW | Not ideal on HA Green / Pi |
| Engine | Quality | Notes |
|---|---|---|
| Piper (local) | Good, natural | Lightweight, runs anywhere |
| HA Cloud | Excellent | Requires subscription |
| Google Translate TTS | Decent | Needs internet |
Tip: On HA Green or Raspberry Pi, local Whisper will be slow. Use Faster Whisper on a separate machine or HA Cloud.
| Problem | Fix |
|---|---|
| Cannot connect to gateway | Check URL: `curl http://<ip>:<port>/v1/chat/completions`. Check firewall. Don't use `127.0.0.1` across machines. |
| Model not available | The model name you configured does not exist on your OpenClaw Gateway. Try openclaw:main or list the models your gateway exposes. |
| Endpoint disabled (405) | Enable chatCompletions in openclaw.json, restart gateway |
| Invalid auth (401) | Check token. Ensure gateway.auth.mode is "token" |
| No response from OpenClaw / empty stream / `data: [DONE]` | The gateway opened the stream but closed it without producing a response, usually a gateway-side timeout. Add `agents.defaults.llm.idleTimeoutSeconds: 180` to your `openclaw.json` and restart the gateway. |
| Red flashing light (Voice PE) | STT failed — check your STT engine config |
| Agent not in dropdown | Restart HA after installing. Check logs for errors |
- Response latency: Full pipeline (STT > LLM > TTS) takes a few seconds. Local Whisper on low-powered devices adds delay.
- No continuous conversation: Wake word needed after each response (HA pipeline limitation).
- No audio streaming: Responses are fully generated before being spoken.
Like it? Found it useful?
- OpenClaw — AI assistant framework
- OpenClaw Documentation
- Home Assistant Voice
- HACS
MIT