Safety notes
Verified with notes
Routes model calls through Claude Code CLI; useful for subscription-backed experiments.
Static scan findings
This is a first-pass static screen, not a formal audit. It flags patterns worth reviewing before installing.
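The kind of pattern screen described above can be sketched as a line-by-line regex match over each file. This is only an illustrative sketch, not the actual scanner: the category names mirror the findings below, but the specific regexes, `Finding` shape, and `scanFile` function are assumptions.

```typescript
// Illustrative sketch of a pattern-based static screen (assumed, not the real scanner).
type Finding = { severity: "MEDIUM"; category: string; file: string; line: string };

// Hypothetical regexes for the two categories flagged below.
const patterns: Record<string, RegExp> = {
  token_access: /\b(api[_]?key|input_tokens|output_tokens|maxTokens)\b/i,
  spawn_shell: /\b(child_process|execSync|spawn)\b/,
};

function scanFile(file: string, source: string): Finding[] {
  const findings: Finding[] = [];
  for (const line of source.split("\n")) {
    for (const [category, re] of Object.entries(patterns)) {
      if (re.test(line)) {
        findings.push({ severity: "MEDIUM", category, file, line: line.trim() });
      }
    }
  }
  return findings;
}

// A line that shells out is flagged under spawn_shell.
const hits = scanFile("index.ts", 'execSync("claude --version");');
console.log(hits.map((f) => f.category)); // → [ 'spawn_shell' ]
```

Note that such a screen is purely lexical: it cannot distinguish a real `execSync` call from a test mock of one, which is why test files appear among the findings.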
MEDIUM · token_access
index.ts
maxTokens: model.maxTokens,
apiKey: "unused",
MEDIUM · token_access
README.md
- Streams text, thinking, and tool call tokens in real-time
MEDIUM · token_access
tests/stream-parser.test.ts
message: { usage: { input_tokens: 10 } },
MEDIUM · token_access
tests/event-bridge.test.ts
maxTokens: 8192,
input_tokens: 100,
output_tokens: 0,
MEDIUM · spawn_shell
tests/provider.test.ts
// Mock child_process.execSync for validateCliPresence/validateCliAuth
vi.mock("node:child_process", () => ({
  execSync: vi.fn(() => Buffer.from("1.0.0")),
MEDIUM · token_access
tests/provider.test.ts
maxTokens: 8192,
maxTokens: 16384,
expect(config.apiKey).toBe("unused");
MEDIUM · spawn_shell
tests/process-manager.test.ts
import type { ChildProcess } from "node:child_process";
// Mock child_process.execSync for validation tests
vi.mock("node:child_process", () => ({
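Both spawn_shell findings sit in test files that mock `node:child_process`, so no real shell is spawned during tests. The pattern they stub looks roughly like the CLI-presence check below. This is a sketch inferred from the snippets: the function name `validateCliPresence` and the default command are assumptions, not the package's verified implementation.

```typescript
import { execSync } from "node:child_process";

// Hypothetical CLI-presence check of the kind the tests stub out.
// execSync throws if the command is missing or exits nonzero, so a
// successful call is treated as "CLI is installed".
function validateCliPresence(command = "claude --version"): boolean {
  try {
    execSync(command, { stdio: "pipe" }); // capture output instead of inheriting the terminal
    return true;
  } catch {
    return false;
  }
}
```

Because this helper shells out at runtime, any file referencing it (or its mock) trips the lexical spawn_shell pattern, even though the test files themselves never execute a real process.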
MEDIUM · token_access
.planning/ROADMAP.md
- [x] **Phase 3: Extended Thinking and Usage** - Bridge thinking token events and usage metrics with configurable thinking budgets (completed 2026-03-14)
**Goal**: Pi receives thinking token streams and usage metrics, with configurable thinking effort per model
2. After each response, pi receives accurate usage metrics (input tokens, output tokens, cache tokens)
MEDIUM · token_access
.planning/STATE.md
- [Phase 03]: Used --effort levels instead of --thinking-budget tokens (CLI does not support --thinking-budget) (03-01)
MEDIUM · token_access
.planning/REQUIREMENTS.md
- [x] **PROV-02**: Provider exposes all current Claude models derived from `getModels("anthropic")` with correct context windows, max tokens, and cost info
- [x] **STRM-05**: Extension tracks and reports usage metrics (input tokens, output tokens, cache tokens) from `message_start` and `message_delta` events
- **PERF-01**: Persistent subprocess sessions to avoid ~12s startup overhead per request and O(n^2) token growth from replaying full conversation history each turn
Package scripts captured
package.json
{
"test": "vitest run --reporter=verbose",
"test:coverage": "vitest run --coverage",
"typecheck": "tsc --noEmit",
"lint": "eslint .",
"format:check": "prettier --check .",
"prepare": "husky"
}