0.14.0 - 2026-04-16
- `run_agui_stream()` signature: `toolset` is now optional and keyword-only. The `adapter` argument moves to first position. Callers should update from `run_agui_stream(toolset, adapter)` to `run_agui_stream(adapter, toolset=toolset)`. When `toolset` is `None`, the stream yields adapter events without skill event merging.
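The shape of the migration can be sketched with a stand-in function (not the real implementation; only the signature shape comes from this entry):

```python
# Hypothetical stand-in mirroring the new signature: adapter is positional,
# toolset is optional and keyword-only.
def run_agui_stream(adapter, *, toolset=None):
    if toolset is None:
        return f"adapter-only:{adapter}"  # no skill event merging
    return f"merged:{adapter}+{toolset}"

# Old call site run_agui_stream(toolset, adapter) now raises TypeError;
# callers pass the toolset by keyword instead:
print(run_agui_stream("adapter", toolset="skills"))
```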
0.13.3 - 2026-04-14
0.13.2 - 2026-04-08
- `Skill.reconfigure()` now copies `instructions` and `resources`: Previously, `reconfigure()` silently dropped `instructions` and `resources` from the factory-produced skill, causing factory-generated instructions (e.g. config-dependent preambles) to be lost after reconfiguration.
0.13.1 - 2026-04-07
- `ActivitySnapshotEvent` timestamps: Events now carry a millisecond timestamp set at creation time. Previously, events were emitted with `timestamp=None` and stamped downstream in a batch, causing all events from a skill sub-agent run to share the same timestamp. Events are also now converted eagerly as they arrive rather than batch-converted after the skill finishes.
0.13.0 - 2026-03-27
- `SkillRunDepsProtocol`: New `@runtime_checkable` `Protocol` that formalizes the contract for skill sub-agent deps (`state` + `emit`). The default `SkillRunDeps` dataclass satisfies it, and so does any subclass.
- `deps_type` on `Skill`: Skills can declare a `deps_type` — any class satisfying `SkillRunDepsProtocol`. When set, the sub-agent is created with `deps_type(state=state, emit=emit)` instead of the default `SkillRunDeps`. This enables skills to integrate external toolsets that require additional context on the deps object.
- Sandbox skill (`haiku-skills-sandbox`): New skill package for Docker-based Python execution via `pydantic-ai-backend`. Runs code in an isolated container with pre-installed data science packages (pandas, numpy, scipy, matplotlib) and host filesystem access. Features idle timeout with automatic container cleanup, session-bound containers via AG-UI state, and configurable workspace mounting.
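The deps contract can be sketched as follows; the attribute names (`state`, `emit`) come from this entry, while the exact annotations and the custom deps class are assumptions:

```python
from dataclasses import dataclass
from typing import Any, Callable, Protocol, runtime_checkable

# Sketch of the runtime-checkable deps contract.
@runtime_checkable
class SkillRunDepsProtocol(Protocol):
    state: Any
    emit: Callable[[Any], None]

@dataclass
class MyDeps:
    # Hypothetical custom deps class carrying extra context for an external toolset.
    state: dict[str, Any]
    emit: Callable[[Any], None]
    db_url: str = "sqlite:///:memory:"

deps = MyDeps(state={}, emit=lambda event: None)
print(isinstance(deps, SkillRunDepsProtocol))  # attribute-presence check
```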
0.12.0 - 2026-03-27
- `SkillsCapability`: New pydantic-ai capability wrapping `SkillToolset` + system prompt. Provides a single-line integration path via `Agent(capabilities=[SkillsCapability(...)])`. `SkillToolset` remains available for advanced use cases.
- Skill thinking configuration: Skills can specify a `thinking` effort level (`True`, `'low'`, `'medium'`, `'high'`, etc.) to configure reasoning on their sub-agents. Supported across providers via pydantic-ai's unified thinking setting.
- Skill extras: Skills can carry arbitrary non-tool data via `extras: dict[str, Any]`. Useful for exposing utility functions or other resources that the consuming app needs but that aren't agent tools.
- Script timeout: `run_script` now enforces a timeout (default 120s, configurable via the `HAIKU_SKILLS_SCRIPT_TIMEOUT` env var). Previously a hanging script would block the agent forever.
- Bump pydantic-ai dependency from `>=1.63.0` to `>=1.71.0`.
- AG-UI state restoration now uses pydantic-ai's `for_run()` hook instead of overriding `get_tools()`.
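The timeout behavior above can be sketched with the standard library; the default (120s) and the env var name come from the changelog, but the helper itself is hypothetical:

```python
import os
import subprocess
import sys

DEFAULT_SCRIPT_TIMEOUT = 120.0

def run_script_with_timeout(cmd: list[str]) -> str:
    # Env var overrides the default; subprocess.run raises TimeoutExpired
    # instead of blocking the agent forever.
    timeout = float(os.environ.get("HAIKU_SKILLS_SCRIPT_TIMEOUT", DEFAULT_SCRIPT_TIMEOUT))
    result = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout)
    return result.stdout

print(run_script_with_timeout([sys.executable, "-c", "print('ok')"]))
```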
0.11.0 - 2026-03-26
- Skill reconfiguration: Entrypoint skills can be reconfigured after discovery via `skill.reconfigure(**kwargs)`. The stored factory is re-invoked with the given arguments, replacing tools, state, and model while preserving metadata and identity. This allows consuming apps to override factory parameters (e.g. config, database path) without bypassing entry point discovery.
0.10.0 - 2026-03-24
- Optional sub-agent delegation: `SkillToolset(use_subagents=False)` exposes skill tools directly to the main agent via `query_skill`, `execute_skill_tool`, `run_skill_script`, and `read_skill_resource` — bypassing sub-agent LLM loops for lower latency and cost. Default (`use_subagents=True`) preserves existing behavior.
- `--no-subagents` CLI flag: `haiku-skills chat --no-subagents` runs the TUI in direct mode.
- Comprehensive integration tests: VCR-recorded tests exercising all tool types (`execute_skill`, `query_skill`, `execute_skill_tool`, `read_skill_resource`, `run_skill_script`) across both execution modes (subagent/direct) and skill sources (entrypoint/filesystem), with AG-UI event and state assertions.
- `execute_skill_tool` returns raw values: Tool results are passed through as-is instead of being JSON-serialized, consistent with pydantic-ai's `ToolReturn` content support.
- Activity snapshot `message_id` now stable: Result snapshots share the same `message_id` as their corresponding call snapshot, so AG-UI frontends update activities in place instead of showing duplicates. Call snapshots use `replace=False` (create), result snapshots use `replace=True` (update).
- Chat TUI preserves full message history: Tool calls and their results are now retained across turns via pydantic-ai message history, so the LLM no longer re-invokes tools for information it already retrieved.
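The create-then-update pattern can be sketched with plain dicts (illustrative, not the real AG-UI event classes):

```python
import uuid

# A call snapshot creates an activity (replace=False); the result snapshot
# updates it in place by reusing the same message_id (replace=True).
def call_and_result_snapshots(tool_name: str, output: str) -> tuple[dict, dict]:
    message_id = str(uuid.uuid4())
    call = {"message_id": message_id, "replace": False, "tool": tool_name}
    result = {"message_id": message_id, "replace": True, "output": output}
    return call, result

call, result = call_and_result_snapshots("search", "3 hits")
print(call["message_id"] == result["message_id"])
```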
0.9.1 - 2026-03-20
0.9.0 - 2026-03-20
- Spec-compliant skill directory layout: Scripts now live alongside `SKILL.md` (e.g. `web/scripts/search.py`) instead of in a separate package-level `scripts/` dir
- Skill directory renames: Renamed `code-execution` → `codeexecution`, `image-generation` → `imagegeneration` (skill dirs are now Python packages, which require valid identifiers)
- Named CLI flags for scripts: All scripts use `argparse` with `--flag value` syntax and support `--help`. `script_tools.py` passes named args instead of positional
- Gmail extracted into standalone scripts: Auth, helpers, and all 8 operations (search, read, send, reply, draft, list drafts, modify labels, list labels) are now standalone scripts with argparse CLI interfaces. `__init__.py` is a thin wrapper with state tracking
- SKILL.md script documentation: All SKILL.md files now document available scripts with CLI flags and descriptions
- CI signature verification: `validate-skills` workflow now verifies skill signatures (integrity-only)
- Documentation reorganized: Replaced `quickstart.md`, `skill-sources.md`, and `examples.md` with a single progressive `tutorial.md`. Cleaned `skills.md` into a pure reference page. Removed duplicated state section from `ag-ui.md`
- Skill signing and verification: Identity-based signing via sigstore. Sign skills with `sign_skill()`, verify with `TrustedIdentity` on registry/discovery. Install with `uv pip install "haiku.skills[signing]"`
- `haiku-skills sign` command: Sign a skill directory via CLI with browser-based OIDC or ambient CI credentials
- `haiku-skills verify` command: Verify a signed skill against trusted identities (`--identity`/`--issuer`) or check cryptographic integrity only (`--unsafe`)
0.8.1 - 2026-03-17
- Custom event emission from skill tools: `SkillRunDeps` now has an `emit` callback that skill tools can use to emit AG-UI `BaseEvent` subclasses (e.g. `CustomEvent`) during execution. Events are flushed through the event sink at tool-call boundaries (real-time path) or returned in `ToolReturn.metadata` (batched path).
- code-execution skill: Rewritten from a sync fd-dup hack to async `run_monty_async`, exposing `await llm(prompt)` as an external function so sandbox code can make one-shot LLM calls for per-item reasoning (classify, summarize, extract) in loops
- Gmail skill (`haiku-skills-gmail`): Search, read, send, reply, draft, and label Gmail emails via the Google Gmail API with OAuth2 authentication
- Notifications skill (`haiku-skills-notifications`): Send and receive push notifications via ntfy.sh — with `send_notification` and `read_notifications` tools, per-skill state tracking, self-hosted server support, and optional bearer token authentication
- Graphiti memory skill (`haiku-skills-graphiti-memory`): Removed the knowledge graph memory skill and all associated code, tests, and configuration
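The `emit` callback added in this release can be sketched as a plain callable on the deps object; the names `SkillRunDeps` and `emit` come from the changelog, while the event payload and sink wiring are simplified assumptions:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class SkillRunDeps:
    state: dict[str, Any]
    emit: Callable[[Any], None]

# A list stands in for the event sink; the real sink flushes at
# tool-call boundaries or via ToolReturn.metadata.
collected: list[Any] = []
deps = SkillRunDeps(state={}, emit=collected.append)

def progress_tool(deps: SkillRunDeps) -> str:
    # A skill tool surfaces intermediate progress to the AG-UI stream.
    deps.emit({"type": "CUSTOM", "name": "progress", "value": 0.5})
    return "done"

print(progress_tool(deps), len(collected))
```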
0.8.0 - 2026-03-13
- code-execution skill: Updated pydantic-monty to `>=0.0.8`; rewrote the SKILL.md sandbox limitations section to reflect new capabilities (math, re, os.environ, getattr, dataclass methods, PEP 448 unpacking)
- Sub-agent tool events emitted as `ActivitySnapshotEvent` instead of `ToolCall*` events, fixing AG-UI history replay crashes in conforming clients (CopilotKit/soliplex)
0.7.5 - 2026-03-12
- `_events_to_agui` crash on `RetryPromptPart`: Handle `RetryPromptPart` results in `FunctionToolResultEvent` by calling `.model_response()` instead of `.model_response_str()`, which doesn't exist on retry parts (#35)
0.7.4 - 2026-03-06
- Main agent prompt: Emphasize that skills are isolated agents with no shared context — the main agent must include concrete data when chaining skills and must synthesize skill responses for the user
0.7.3 - 2026-03-06
0.7.2 - 2026-03-06
- Missing `openai` extra in core dependency: `pydantic-ai-slim[mcp]` → `pydantic-ai-slim[mcp,openai]` — most users hit `ImportError: Please install openai` on first use
- CLI unusable without `[tui]` extra: `typer` and `python-dotenv` are now lazy-loaded with a friendly error message instead of crashing with `ModuleNotFoundError`
0.7.1 - 2026-03-06
- Independent skill package publishing: Skill packages (`haiku-skills-web`, etc.) can now be published to PyPI independently from the core package using `skills-v*` release tags (#27)
- Bump script updates skill packages: `bump_version.py` now updates the version and the `haiku.skills>=` dependency constraint in all `skills/*/pyproject.toml` files
- Skill package PyPI metadata: All 4 skill packages now include authors, license, readme, keywords, classifiers, and project URLs
- Skill package READMEs: `haiku-skills-web`, `haiku-skills-image-generation`, and `haiku-skills-code-execution` now have READMEs with prerequisites, configuration, tools, and installation instructions
- Missing core dependencies: `ag-ui-protocol` and `jsonpatch` moved from the optional `[ag-ui]` extra to core dependencies — a clean install of `haiku.skills` no longer fails with `ModuleNotFoundError: No module named 'ag_ui'`
- graphiti-memory recall returns empty results: Switch `recall()` and `forget()` from `client.search()` to `client.search_()` with BM25 + cosine + BFS graph traversal, RRF reranking, and `sim_min_score=0.0` so cosine always returns candidates for BFS to expand on
- graphiti-memory cross-encoder crash: `_build_cross_encoder()` now passes an `AsyncOpenAI` client directly to `OpenAIRerankerClient` instead of the graphiti `OpenAIGenericClient` wrapper, which lacked the `.chat` attribute the reranker needs
- `generate_image` returns file path: The image generation tool now returns the file path directly instead of a markdown image reference
- Main agent prompt: Instructs the agent to present skill results exactly as returned, without fabricating or rewriting content
0.7.0 - 2026-03-04
- `discover_from_paths` collects all validation errors: Returns `tuple[list[Skill], list[SkillValidationError]]` instead of raising on the first broken skill — valid skills are still loaded while errors are collected (#25)
- `SkillRegistry.discover` returns errors: Returns `list[SkillValidationError]` instead of `None`, propagating errors from `discover_from_paths`
- CLI prints discovery warnings: `list` and `chat` commands print validation errors as warnings to stderr instead of aborting
- `SkillValidationError`: `ValueError` subclass with a `.path` attribute, exported from `haiku.skills`
- `StateMetadata`: Frozen dataclass with `namespace`, `type`, and `schema` fields, exported from `haiku.skills`
- `Skill.state_metadata()`: Returns a `StateMetadata` for skills that declare state; `None` otherwise
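The collect-all-errors pattern can be sketched as follows; `SkillValidationError`'s shape (a `ValueError` with `.path`) matches the changelog, while the loader is a stub:

```python
from pathlib import Path

class SkillValidationError(ValueError):
    def __init__(self, message: str, path: Path) -> None:
        super().__init__(message)
        self.path = path

def discover_from_paths(paths, load):
    # Collect errors instead of raising on the first broken skill,
    # so valid skills still load.
    skills, errors = [], []
    for path in paths:
        try:
            skills.append(load(path))
        except SkillValidationError as exc:
            errors.append(exc)
    return skills, errors

def fake_load(path):  # stands in for the real SKILL.md loader
    if "broken" in str(path):
        raise SkillValidationError("missing frontmatter", Path(path))
    return str(path)

skills, errors = discover_from_paths(["good", "broken", "also-good"], fake_load)
print(len(skills), len(errors))
```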
0.6.0 - 2026-03-03
- Real-time sub-agent event streaming: `run_agui_stream()` merges main-agent and sub-agent AG-UI events into a single stream, so sub-agent tool calls (search, fetch, etc.) appear in real time instead of batching until `execute_skill` returns
- Sub-agent output: `_run_skill` now returns the model's final response (`result.output`) instead of the last tool's raw return value — state and structured data are already handled via the snapshot/delta mechanism
- Event sink on `SkillToolset`: `_run_skill` accepts an optional `event_sink` callback; when active, sub-agent tool events stream through the sink immediately rather than collecting in batch
- `SkillRunDeps` simplified: Removed the `_collected_events` field — event collection is now closure-based inside `_run_skill`
0.5.2 - 2026-03-02
- Graphiti memory skill (`haiku-skills-graphiti-memory`): Store, recall, and forget memories using a knowledge graph powered by Graphiti and FalkorDB — with per-skill state tracking
- `SkillMetadata.allowed_tools` accepts strings: Now accepts both `str` (space-separated) and `list[str]` as input, and always stores `list[str]` — eliminates conversion overhead for consumers using the spec's string format (#19)
- `Skill.model` accepts `Model` instances: Widened from `str | None` to `str | Model | None` so consumers can pass configured model objects directly (#20)
- `discover_from_paths` accepts single-skill directories: Paths that contain `SKILL.md` directly are now treated as skill directories, in addition to parent directories containing skill subdirectories. Dot-directories are skipped during child iteration.
- Ollama base URL handling: `resolve_model()` now appends `/v1` to `OLLAMA_BASE_URL` instead of expecting it in the env var, consistent with Ollama's convention
- Web skill `fetch_page` for non-HTML content: Pages with non-HTML content types (e.g. plain text, markdown) are now returned directly instead of failing with "could not extract content"
0.5.1 - 2026-02-27
- `build_system_prompt()` utility: Standalone function to build the main agent system prompt from a skill catalog, with an optional custom preamble — replaces the `SkillToolset.system_prompt` property
- Entrypoint skill priority: Skills passed via `skills=` now take priority over entrypoint-discovered skills — entrypoints with the same name are silently skipped instead of raising a duplicate error
- Sub-agent request limit: Increased from 10 to 20 to allow skills with more complex tool chains to complete
- Chat TUI tool call display: Tool call widgets now stream argument updates and show richer descriptions (e.g. `execute_skill → web: search for ...`)
- `SkillToolset.system_prompt`: Use `build_system_prompt(toolset.skill_catalog)` instead
0.5.0 - 2026-02-25
- `skill_model` parameter: `SkillToolset` accepts `skill_model` to set the model for skill sub-agents (also available as the `--skill-model` CLI option)
- `resolve_model()`: Resolves model strings with transparent `ollama:` prefix handling (defaults to `http://127.0.0.1:11434` when `OLLAMA_BASE_URL` is unset)
- `run_script` tool: Skill sub-agents can execute scripts from the skill's `scripts/` directory via a `run_script` tool, supporting `.py`, `.sh`, `.js`, `.ts`, and generic executables with path validation
- JS/TS script support: `run_script` dispatches `.js` files via `node` and `.ts` files via `npx tsx`; extensible via the `SCRIPT_RUNNERS` mapping
- Script tool execution: Scripts are now invoked with CLI positional arguments (`sys.argv` + `print()`) instead of JSON on stdin/stdout, matching standard CLI conventions and enabling compatibility with external skill scripts
- Resilient script discovery: `discover_script_tools()` now skips scripts without a `main()` function (with a warning) instead of crashing
- Script failure error reporting: Script error messages now include stdout when stderr is empty, so usage messages and other stdout-based errors are visible to the sub-agent
- Script sibling imports: `run_script` and typed script tools now set `PYTHONPATH` to the skill directory so scripts can use package-style imports (e.g. `from scripts.utils import ...`)
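The `PYTHONPATH` mechanism can be demonstrated end-to-end with a throwaway skill layout (the directory layout and file names here are illustrative):

```python
import os
import subprocess
import sys
import tempfile
from pathlib import Path

# Running a script with the skill directory on PYTHONPATH lets it use
# package-style sibling imports like `from scripts.utils import ...`.
with tempfile.TemporaryDirectory() as tmp:
    skill_dir = Path(tmp)
    scripts = skill_dir / "scripts"
    scripts.mkdir()
    (scripts / "__init__.py").write_text("")
    (scripts / "utils.py").write_text("GREETING = 'hello'\n")
    (scripts / "main_script.py").write_text(
        "from scripts.utils import GREETING\nprint(GREETING)\n"
    )
    env = {**os.environ, "PYTHONPATH": str(skill_dir)}
    out = subprocess.run(
        [sys.executable, str(scripts / "main_script.py")],
        capture_output=True, text=True, env=env,
    )
    print(out.stdout.strip())
```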
0.4.2 - 2026-02-20
- `SkillDeps`: Minimal dataclass satisfying pydantic-ai's `StateHandler` protocol for type-correct AG-UI state round-tripping (replaces the `StateDeps[dict[str, Any]]` recommendation in the docs)
0.4.1 - 2026-02-20
- AG-UI state restoration: `SkillToolset` now restores skill namespace state from frontend-provided `deps.state` on each AG-UI request, so state survives server restarts
- RAG skill package (`haiku-skills-rag`): Moved to haiku.rag
0.4.0 - 2026-02-19
- `haiku-skills validate` command: Validate skill directories against the Agent Skills specification using `skills-ref`
- Unknown frontmatter rejection: `SkillMetadata` now rejects unknown fields (`extra="forbid"`)
- `skills-ref` dependency: Reference implementation used for spec-compliant validation
- Distributable skill directory layout: SKILL.md moved into a subdirectory matching the skill name (e.g. `haiku_skills_web/web/SKILL.md`) so all bundled skills pass directory-name validation
0.3.0 - 2026-02-19
- `haiku-skills list` command: List discovered skills with name and description; supports `-s`/`--skill-path` and `--use-entrypoints`
- `--skill`/`-k` option for `chat`: Filter which skills to activate by name (repeatable)
- RAG skill package (`haiku-skills-rag`): Search, retrieve, and analyze documents via haiku.rag with tools for hybrid search, document listing/retrieval, QA with citations, and code-execution analysis
- Web skill package (`haiku-skills-web`): Web search via the Brave Search API and page content extraction via trafilatura (replaces `haiku-skills-brave-search`)
- Per-skill state: Skills can declare a `state_type` (Pydantic `BaseModel`) and `state_namespace`; state is passed to tool functions via `RunContext[SkillRunDeps]` and tracked per namespace on the toolset
- AG-UI protocol: `SkillToolset` emits `StateDeltaEvent` (JSON Patch) when skill execution changes state, compatible with the AG-UI protocol
- State API on `SkillToolset`: `build_state_snapshot()`, `restore_state_snapshot()`, `get_namespace()`, `state_schemas`
- In-process tools with state: Distributable skills (web, image-generation, code-execution, rag) converted from script-based to in-process tool functions that can read and write per-skill state
- Skills fully loaded at discovery: Instructions, script tools, and resources are loaded when skills are discovered, removing the separate activation step
- Chat TUI rewritten as AG-UI client: Uses the `AGUIAdapter` event stream instead of polling; inline state delta display and a "View state" modal via the command palette
- Skill name validation: Now accepts unicode lowercase alphanumeric characters per the Agent Skills specification (previously ASCII-only)
- Documentation site: Published at ggozad.github.io/haiku.skills with MkDocs Material
- Brave Search skill package (`haiku-skills-brave-search`): Replaced by `haiku-skills-web`
- `SkillRegistry.activate()`: Skills are fully loaded at discovery time; progressive disclosure removed
- `Task`/`TaskStatus`: Task tracking removed from `SkillToolset`; the AG-UI adapter provides tool call progress via events
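The JSON Patch state deltas introduced in this release can be sketched as follows. The real package uses the `jsonpatch` library (RFC 6902); this minimal applier handles only the `add`/`replace` ops used in the example:

```python
import copy

def apply_patch(doc, ops):
    # Apply a subset of RFC 6902 ops without mutating the input document.
    doc = copy.deepcopy(doc)
    for op in ops:
        *parents, last = op["path"].strip("/").split("/")
        target = doc
        for key in parents:
            target = target[key]
        if op["op"] == "add" and isinstance(target, list):
            target.insert(int(last), op["value"])
        else:  # add-to-dict and replace behave the same here
            target[last] = op["value"]
    return doc

# Illustrative namespace state and the delta a skill run might emit.
state = {"web": {"queries": []}}
delta = [{"op": "add", "path": "/web/queries/0", "value": "trafilatura"}]
print(apply_patch(state, delta))
```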
0.1.0 - 2026-02-16
- Core framework: Skill-powered AI agents implementing the Agent Skills specification with pydantic-ai
- Skill model: Pydantic v2 models for skills, metadata, and tasks with full validation
- SKILL.md parser: YAML frontmatter + markdown body parsing following the Agent Skills spec
- Skill discovery: Filesystem scanning (directories containing SKILL.md) and Python entrypoint-based plugin discovery
- SkillRegistry: Central registry for skill discovery, loading, lookup, and activation
- Progressive disclosure: Three-level progressive disclosure — metadata at startup, instructions on activation, resources on demand
- Sub-agent delegation: Each skill runs in a focused sub-agent with its own system prompt and tools via `execute_skill`
- SkillToolset: `FunctionToolset` integration that exposes skills as tools for any pydantic-ai `Agent`
- Script tools: Python scripts in `scripts/` with a `main()` function get AST-parsed into typed pydantic-ai `Tool` objects with automatic parameter schema extraction
- Resource reading: Skills can expose files (references, assets, templates) as resources; sub-agents read them on demand via a `read_resource` tool with path validation and traversal defense
- MCP integration: `skill_from_mcp()` maps MCP servers directly to skills
- Chat TUI: Terminal-based chat interface using Textual
- Distributable skill packages: Workspace members for brave-search, image-generation, and code-execution skills
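The AST-based parameter extraction behind script tools can be sketched with the standard library; the real `Tool` construction is richer, and this shows only the schema side:

```python
import ast

# Parse a script's source and extract main()'s parameter annotations.
source = '''
def main(query: str, limit: int = 5) -> str:
    """Search and return results."""
    return f"{query}:{limit}"
'''

tree = ast.parse(source)
fn = next(
    node for node in ast.walk(tree)
    if isinstance(node, ast.FunctionDef) and node.name == "main"
)
params = {arg.arg: ast.unparse(arg.annotation) for arg in fn.args.args}
print(params)
```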