
Commit f2d8673

cahlen and claude committed
docs: link to Hugging Face model repo (cahlen/keeloq-neural-distinguishers)
- README.md: new "Pre-trained distinguishers on Hugging Face" section with the checkpoint availability table and one-liner `hf download` example.
- CLAUDE.md: checkpoint policy rewritten to describe the git-+-HF dual mirror and the commands future sessions should run when producing a new checkpoint (train, evaluate, upload with the metric summary).

Hub URL: https://huggingface.co/cahlen/keeloq-neural-distinguishers

d64.pt is live (val acc 0.752, ROC-AUC 0.828); d96/d128 land when training finishes.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
1 parent 1bac3db commit f2d8673

2 files changed: 34 additions & 2 deletions


CLAUDE.md

Lines changed: 16 additions & 1 deletion
```diff
@@ -57,11 +57,26 @@ The core nonlinear function is shared across cipher, ANF generator, and GPU ciph
 
 On the CLI these are `--diff-pair` and `--sat-pair` respectively (both repeatable). Conflating them is a category error; early drafts of the test suite hit this and the fix was to split the API.
 
-**Checkpoint policy.** `checkpoints/` is not committed by default. Produce a checkpoint with:
+**Checkpoint policy.** Large binary checkpoints are *not* committed to this git repo by default (they would bloat clones). Instead they are:
 
+1. Committed to the git repo anyway when small (d64.pt is 11.8 MB — fine). Whether to commit larger checkpoints is a case-by-case call; the HF mirror is always authoritative.
+2. Mirrored to Hugging Face at [`cahlen/keeloq-neural-distinguishers`](https://huggingface.co/cahlen/keeloq-neural-distinguishers) with a model card covering training config, eval metrics, architecture, and attack procedure.
+
+**When you produce a new checkpoint**, update both locations:
+
+# Train
 uv run keeloq neural auto --rounds 64 --trained-depth 56 \
     --samples 10000000 --pairs 512 --checkpoint-out checkpoints/d64.pt
 
+# Evaluate (writes the per-depth JSON report)
+uv run keeloq neural evaluate --checkpoint checkpoints/d64.pt \
+    --rounds 56 --samples 1000000 --seed 4242 \
+    > docs/phase3b-results/eval_d64.json
+
+# Upload to HF (update the README.md table in the HF repo with the new metrics)
+hf upload cahlen/keeloq-neural-distinguishers checkpoints/d64.pt d64.pt \
+    --commit-message "d64.pt: <summary of result>"
+
 The regression test `tests/test_neural_e2e_64r.py` auto-skips when `checkpoints/d64.pt` is absent. The benchmark runner (`benchmarks/bench_neural.py`) reports `SKIP_MISSING_CHECKPOINT` rather than crashing on missing checkpoints — it's smoke-safe.
 
 ## Red flags when editing
```

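CLAUDE.md's note that `tests/test_neural_e2e_64r.py` auto-skips and that `benchmarks/bench_neural.py` reports `SKIP_MISSING_CHECKPOINT` rather than crashing describes a simple guard pattern. A minimal Python sketch of that pattern (the function name `check_checkpoint` and its return convention are illustrative assumptions, not the repo's actual API):

```python
from pathlib import Path

# Sentinel the benchmark runner is described as reporting when a
# checkpoint file has not been trained or downloaded yet.
SKIP_MISSING_CHECKPOINT = "SKIP_MISSING_CHECKPOINT"


def check_checkpoint(path: str) -> str:
    """Return 'OK' when the checkpoint file exists, else the skip sentinel.

    Callers (tests, benchmarks) branch on the sentinel instead of letting
    a missing file raise, so smoke runs stay green on fresh clones.
    """
    return "OK" if Path(path).is_file() else SKIP_MISSING_CHECKPOINT


print(check_checkpoint("checkpoints/definitely-not-here.pt"))
```

Reporting a sentinel rather than raising keeps CI and smoke runs usable on clones that never pulled the HF mirror.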
README.md

Lines changed: 18 additions & 1 deletion
```diff
@@ -50,7 +50,12 @@ End-to-end, one command (Δ search + train + attack a synthetic target):
     --samples 10000000 --pairs 512 \
     --checkpoint-out checkpoints/d64.pt
 
-~30 minutes training time on an RTX 5090. Afterwards, attack any new ciphertext under the same key:
+~60 minutes training time on an RTX 5090. Or skip training and pull a published checkpoint from Hugging Face:
+
+mkdir -p checkpoints
+hf download cahlen/keeloq-neural-distinguishers d64.pt --local-dir checkpoints/
+
+Then attack any new ciphertext under the same key:
 
 uv run keeloq neural recover-key --checkpoint checkpoints/d64.pt \
     --rounds 64 --diff-pair "<c0>:<c1>" --sat-pair "<pt>:<ct>" \
@@ -64,6 +69,18 @@ A ResNet-1D-CNN distinguisher is trained at a fixed depth **D** (e.g. 56) to sep
 
 **Key-schedule constraint.** KeeLoq's key cycles every 64 rounds; at fewer than 64 rounds, bits `K_rounds..K_63` are never referenced and can't be recovered without being hinted. Attacks below 64 rounds therefore auto-populate `extra_key_hints` for the unconstrained range — handled automatically by `keeloq neural auto`.
 
+### Pre-trained distinguishers on Hugging Face
+
+Checkpoints are published at [**cahlen/keeloq-neural-distinguishers**](https://huggingface.co/cahlen/keeloq-neural-distinguishers) with a full model card covering training config, eval metrics, architecture, and attack procedure. Current availability:
+
+| File | Trained Depth | Attack Target | Val Accuracy | ROC-AUC |
+|---|---:|---:|---:|---:|
+| `d64.pt` | 56 | 64 rounds | 0.752 | 0.828 |
+| `d96.pt` | 88 | 96 rounds | (coming) | (coming) |
+| `d128.pt` | 120 | 128 rounds | (coming) | (coming) |
+
+Each `.pt` file embeds its full `TrainingConfig`, so results are reproducible from seed alone.
+
 ## Pipeline composition via Unix pipes
 
 The algebraic pipeline is also exposed as discrete stages with JSON on stdin/stdout:
```

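The ROC-AUC column in the README's availability table (0.828 for `d64.pt`) can be read as the probability that the distinguisher scores a real differential pair higher than a random pair. A toy, self-contained sketch of that rank interpretation (illustrative only; this is not the repo's evaluation code, and the scores are made up):

```python
def roc_auc(pos_scores, neg_scores):
    """Probability that a positive (real-pair) score beats a negative
    (random-pair) score, counting ties as half a win. O(n*m) toy version;
    real evaluators sort once instead of comparing all pairs."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos_scores
        for n in neg_scores
    )
    return wins / (len(pos_scores) * len(neg_scores))


# Perfectly separated toy scores give AUC 1.0; an undiscriminating
# distinguisher hovers near 0.5 (coin flip).
print(roc_auc([0.9, 0.8, 0.7], [0.2, 0.1]))  # 1.0
```

Under this reading, 0.828 means the `d64.pt` distinguisher ranks a genuine depth-56 pair above a random one about 83% of the time, which is what makes the 64-round key-recovery statistics workable.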