A curated detection engineering and SOC automation portfolio built from a live operational system.
SignalFoundry is a distilled reviewer-facing slice of a real, running detection pipeline. The live system — rules, CI, alert poller, triage, escalation, reconciliation — lives in a separate operational repository and runs on a cadence against a real Wazuh deployment. This repo is the flagship narrative layer: architecture, methodology, case studies, and representative detection samples.
It is not a product, not a framework, and not a demo. It is evidence of how one engineer builds and operates detection-as-code and SOC automation with discipline.
| Section | What it contains |
|---|---|
| docs/architecture.md | System shape, components, automation vs. analyst boundaries |
| docs/pipeline-overview.md | Stage-by-stage walk of an alert through the pipeline |
| docs/methodology.md | AI-assisted, human-in-the-loop working philosophy |
| docs/detection-principles.md | How a rule earns its place in the library |
| docs/ci-enforcement.md | How CI gates detection content integrity |
| docs/case-studies/ | Three real incidents with investigation + resolution |
| samples/ | Representative Sigma, Wazuh, and Splunk detection samples |
| diagrams/pipeline.mmd | Mermaid diagram of the AutoSOC pipeline |
```mermaid
flowchart LR
    W[Wazuh Manager] --> I[Indexer]
    I --> P[poll-alerts]
    P --> Q[(File Queue)]
    Q --> T[triage]
    T --> C[case assembly]
    C --> E[escalation PR]
    C --> R[reconciliation]
    R --> H[heartbeat]
    E --> A((Analyst))
```
A Wazuh manager feeds alerts into an indexer. A poller pulls alerts into a file-backed queue. Triage applies policy rules and known-FP signatures. Case assembly packages artifacts. High-severity cases become escalation pull requests for human review. Reconciliation verifies ledger integrity. Heartbeat records pipeline health.
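The triage stage described above can be sketched roughly as follows. This is an illustrative simplification, not the operational code: the signature set, the `level` field name, and the escalation threshold are stand-ins for the real policy rules, which live in the operational repo.

```python
import json
from pathlib import Path

# Illustrative stand-ins -- the real known-FP signatures and policy
# thresholds live in the operational repository, not here.
KNOWN_FP_SIGNATURES = {"auth-noise-scanner", "backup-agent-burst"}
ESCALATION_LEVEL = 12  # hypothetical severity threshold

def triage(alert_path: Path) -> str:
    """Classify one queued alert file: drop, log, or escalate."""
    alert = json.loads(alert_path.read_text())
    if alert.get("signature") in KNOWN_FP_SIGNATURES:
        return "drop"        # known false positive, suppressed
    if alert.get("level", 0) >= ESCALATION_LEVEL:
        return "escalate"    # becomes an escalation PR for analyst review
    return "log"             # retained for reconciliation, no human action
```

The key property the sketch preserves: triage only classifies. Escalation decisions and containment stay with the analyst.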
Used for:
- drafting and refining detection rules
- reasoning about triage policy edge cases
- code review on pipeline changes
- writing post-incident analyses from evidence
- building CI validators

Not used for:
- shipping rules without validator + CI gate
- making escalation decisions
- executing containment
- modifying policy logic without human review
- generating metrics or claims that aren't traceable to source evidence
AI is an accelerator subordinate to verification. Every rule passes a validator. Every pipeline change passes tests. Every escalation goes to a human.
Wazuh (manager, indexer, agents) · Sigma · Splunk Enterprise (lab) · GitHub Actions · Python · PowerShell · Sysmon · Windows Security audit logging.
Three case studies derived from real operational events:
- AutoSOC Race Condition — A TOCTOU bug that halted the pipeline for ~7 hours once the queue crossed ~500K files. Diagnosed, guarded at four crash sites, verified under live I/O contention.
- Wazuh Process Telemetry Tuning — ~151K process-creation alerts over 24 hours across the fleet, 97% concentrated on one host. Queue saturation fixed first, noise classified, 28 scoped level-0 suppression rules deployed, ambiguous categories deliberately deferred.
- Sigma UUID Remediation — A hand-authored content validator caught 33 UUIDv4 collision sets across 69 rule files, plus 32 more files carrying latent placeholder UUIDs, none of which the prior file-count check could see. All 101 affected files were corrected in one mechanically verified pass.
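A minimal version of the collision check from the Sigma UUID case study could look like the sketch below. The directory layout and the regex-based extraction are assumptions for illustration; Sigma rules do declare their identifier in a top-level `id:` field, but the real validator is hand-authored and more thorough.

```python
import re
from collections import defaultdict
from pathlib import Path

# Matches a top-level "id:" line holding a 36-character UUID string.
ID_RE = re.compile(r"^id:\s*([0-9a-fA-F-]{36})\s*$", re.MULTILINE)

def find_uuid_collisions(rules_dir: Path) -> dict[str, list[Path]]:
    """Map each Sigma rule UUID to the files declaring it; a collision
    is any UUID claimed by more than one file."""
    seen: dict[str, list[Path]] = defaultdict(list)
    for rule_file in sorted(rules_dir.rglob("*.yml")):
        match = ID_RE.search(rule_file.read_text())
        if match:
            seen[match.group(1)].append(rule_file)
    return {uid: files for uid, files in seen.items() if len(files) > 1}
```

This is exactly the class of check a file-count gate cannot express: it compares content across files rather than counting them.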
This is lab-scale, production-disciplined. It is a single-operator homelab running a real Wazuh fleet, a real Splunk instance, real detection content, and real CI-driven automation. It is not an enterprise SIEM and makes no claim to be one. The design choices — file-backed queue, single-threaded pipeline, GitHub Actions as the orchestrator — are deliberate for scope and reviewability. Every metric in this repo traces back to source evidence or is marked as such.
Raylee Hawkins — detection engineering, SOC automation, security systems design. Portfolio: https://hawkinsops.com
MIT — see LICENSE.