DealForge autonomously sources, scores, and writes investment memos on venture deals. Stop manually hunting.

1,180+ deals tracked  ·  22 AI investment memos  ·  Updated daily


Kontext CLI

Show HN: Kontext CLI – Credential broker for AI coding agents in Go

68 AI Score
Show HN · Other · Added Apr 14, 2026

Details

Sector
other
Total Funding
$0
Last Round
$0

About

We built the Kontext CLI because AI coding agents need access to GitHub, Stripe, databases, and dozens of other services — and right now most teams handle this by copy-pasting long-lived API keys into .env files, or straight into the chat interface, whilst hoping for the best.

The problem isn't just secret sprawl. It's that there's no lineage of access. You don't know which developer launched which agent, what it accessed, or whether it should have been allowed to. The moment you hand raw credentials to a process, you've lost the ability to enforce policy, audit access, or rotate without pain. The credential is the authorization, and that's fundamentally broken when autonomous agents are making hundreds of API calls per session.

Kontext takes a different approach. You declare what credentials a project needs in a .env.kontext file:

```
GITHUB_TOKEN={{kontext:github}}
STRIPE_KEY={{kontext:stripe}}
LINEAR_TOKEN={{kontext:linear}}
```

Then run `kontext start --agent claude`. The CLI authenticates you via OIDC, and for each placeholder: if the service supports OAuth, it exchanges the placeholder for a short-lived access token via RFC 8693 token exchange; for static API keys, the backend injects the credential directly into the agent's runtime environment. Either way, secrets exist only in memory during the session — never written to disk on your machine. Every tool call is streamed for audit as the agent runs.

The closest analogy is a Security Token Service (STS): you authenticate once, and the backend mints short-lived, scoped credentials on the fly — except unlike a classical STS, we hold the upstream secrets, so nothing long-lived ever reaches the agent. The backend holds your OAuth refresh tokens and API keys; the CLI never sees them. It gets back short-lived access tokens scoped to the session.

What the CLI captures for every tool call: what the agent tried to do, what happened, whether it was allowed, and who did it — attributed to a user, session, and org.

Install with one command: `brew install kontext-dev/tap/kontext`

The CLI is written in Go (~5ms hook overhead per tool call), uses ConnectRPC for backend communication, and stores auth in the system keyring. It works with Claude Code today, with Codex support coming soon.

We're working on server-side policy enforcement next: the infrastructure for allow/deny decisions on every tool call is already wired; we just need to close the loop so tool calls can also be rejected.

We'd love feedback on the approach. We're especially curious: how are teams handling credential management for AI agents today? Are you just pasting env vars into the agent chat, or have you found something better?

GitHub: https://github.com/kontext-dev/kontext-cli
Site: https://kontext.security

AI Score Reasoning

Kontext addresses a critical and timely security bottleneck for the burgeoning AI agent market by replacing risky long-lived API keys with a brokered, short-lived token system. While the technical execution and market timing are excellent, the project is in its earliest stages with significant platform risk if major LLM providers build native credential management.

Source

Show HN — View original