
Commit 58c3095

chore(llm): drop unused inputTokens/outputTokens from ChatResult (review #11)
The `chatFn` adapter wired in `runtime-container.ts` returned hardcoded zero token counts for every call. `LLMProvider.chat()` returns `Promise<string>` (no usage), so threading real counts here would require widening that interface across every adapter. Nothing in the `FirstMentionService` path actually consumed the fields — they only existed to satisfy the local `ChatResult` shape — so dropping them is strictly safer than leaving misleading zeros in place. Per-call cost telemetry continues to flow from `LLMProvider.chat` -> `writeCostEvent` unchanged.

Updated:

- `ChatResult` in `first-mention-service.ts` -> `{ text: string }` only, with a comment documenting the deliberate decision.
- `runtime-container.ts` adapter no longer fabricates zero usage.
- `first-mention-service.test.ts` fixture updated to match.
1 parent ba0db16 commit 58c3095

3 files changed: 18 additions & 10 deletions

src/app/runtime-container.ts

Lines changed: 6 additions & 1 deletion
@@ -227,7 +227,12 @@ export function createCoreRuntime(deps: CoreRuntimeDeps): CoreRuntime {
         ],
         { maxTokens },
       );
-      return { text, inputTokens: 0, outputTokens: 0 };
+      // Token usage is intentionally NOT returned here: `LLMProvider.chat`
+      // emits per-call cost telemetry via `writeCostEvent` internally
+      // (see `src/services/llm.ts`). Surfacing zeros at this seam invited
+      // the bug the prior reviewer caught — readers would treat them as
+      // real counts. Drop the field instead until usage is plumbed.
+      return { text };
     },
   );

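For readers outside the codebase, the seam this hunk narrows can be sketched as follows. `LLMProviderLike` and `makeChatFn` are illustrative stand-ins, not the project's actual exports, but the shape matches the commit's description: the provider resolves to a plain string, so the adapter has no real usage to report.

```typescript
// Illustrative sketch only -- LLMProviderLike and makeChatFn are
// hypothetical names, not the repository's real exports.

// Minimal result shape: token counts deliberately omitted (see commit message).
interface ChatResult {
  text: string;
}

// Stand-in for LLMProvider: chat() resolves to a plain string, no usage data.
interface LLMProviderLike {
  chat(messages: string[], opts: { maxTokens: number }): Promise<string>;
}

// Adapter in the spirit of runtime-container.ts: wrap the string result
// in the minimal ChatResult shape; fabricate nothing.
function makeChatFn(provider: LLMProviderLike, maxTokens: number) {
  return async (messages: string[]): Promise<ChatResult> => {
    const text = await provider.chat(messages, { maxTokens });
    return { text };
  };
}

// Demo with a fake provider.
const fake: LLMProviderLike = { chat: async () => 'ok' };
makeChatFn(fake, 128)(['hello']).then((r) => console.log(JSON.stringify(r)));
// logs {"text":"ok"}
```

Because the adapter only ever forwards what the provider actually returned, there is no seam left where zeros could masquerade as real counts.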
src/services/__tests__/first-mention-service.test.ts

Lines changed: 1 addition & 7 deletions
@@ -14,8 +14,6 @@ import { FirstMentionService } from '../first-mention-service.js';
 
 interface ChatResult {
   text: string;
-  inputTokens: number;
-  outputTokens: number;
 }
 
 function makeRepo() {
@@ -28,11 +26,7 @@ function makeRepo() {
 
 function chatReturning(text: string) {
   return vi.fn(
-    async (): Promise<ChatResult> => ({
-      text,
-      inputTokens: 100,
-      outputTokens: 50,
-    }),
+    async (): Promise<ChatResult> => ({ text }),
   );
 }

src/services/first-mention-service.ts

Lines changed: 11 additions & 2 deletions
@@ -21,10 +21,19 @@ import {
   type FirstMentionEvent,
 } from '../db/repository-first-mentions.js';
 
+/**
+ * Minimal chat-call shape for first-mention extraction.
+ *
+ * Token usage is intentionally not threaded here: cost telemetry for the
+ * underlying LLM call is already emitted from `LLMProvider.chat` (see
+ * `src/services/llm.ts` -> `writeCostEvent`). Reading per-call usage at
+ * this layer would require widening the LLMProvider.chat return type to
+ * include usage and plumbing it through every adapter; until something in
+ * the FirstMentionService path actually consumes it, the extra surface
+ * area would only invite hardcoded zeros that mislead downstream readers.
+ */
 interface ChatResult {
   text: string;
-  inputTokens: number;
-  outputTokens: number;
 }
 
 type ChatFn = (

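If usage ever does need to surface at this layer, the widening the doc comment alludes to might look like the sketch below. All names here (`ChatUsage`, `ChatOutcome`, `fromProviderResult`) are hypothetical, not part of the codebase; the point is that an optional field lets adapters omit usage rather than fabricate zeros.

```typescript
// Hypothetical future shape: provider result may carry usage alongside text.
interface ChatUsage {
  inputTokens: number;
  outputTokens: number;
}

interface ChatOutcome {
  text: string;
  // Optional, so adapters over providers that report no usage stay honest:
  // they omit the field instead of fabricating zeros.
  usage?: ChatUsage;
}

// An adapter passes usage through only when it actually has it.
function fromProviderResult(text: string, usage?: ChatUsage): ChatOutcome {
  return usage ? { text, usage } : { text };
}

console.log(JSON.stringify(fromProviderResult('hi')));
// logs {"text":"hi"}
```

Downstream readers can then distinguish "no usage reported" (`usage` absent) from "zero tokens used" (`usage` present with zeros), which is exactly the ambiguity the hardcoded zeros created.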