AI Coding Assistants Send Your Code Everywhere
GitHub Copilot, Cursor, Claude Code, and every other AI coding assistant transmit your codebase to third-party model providers. Your function names, algorithms, and business logic are part of that transmission. Pretense stops that without breaking your workflow.
The Problem
Why existing controls do not address the risk AI coding tools create for development teams.
Every AI suggestion starts with a data transfer
When a developer asks Copilot to complete a function, the entire surrounding context goes to Microsoft Azure. When they ask Claude Code to refactor a class, that class goes to Anthropic. This happens silently, continuously, with no audit log and no opt-out that does not also disable the tool.
Proprietary identifiers are the IP
The function name getUserPaymentToken does not appear in any open-source repository. It describes your system architecture. Once it enters an LLM training pipeline, it is there permanently; mutation has to happen before transmission, because it cannot happen after the fact.
Developers will not stop using AI tools
Banning AI coding assistants is not a policy that holds. Developers use them from personal machines, from home, from CI runners. The only durable security control is one that works in the request path, transparently, without requiring developer behavior change.
How Pretense Solves It
Proxy-layer protection with no workflow disruption
Pretense runs as a local HTTP proxy on port 9339. Setting one environment variable routes all AI tool traffic through it. Developers launch Cursor, Copilot, or Claude Code exactly as they always have. Pretense handles mutation transparently.
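As a sketch of the one-variable setup: OpenAI-compatible tools typically resolve their API base URL from an environment variable, so pointing that variable at the local proxy reroutes all traffic. The variable name OPENAI_BASE_URL and the fallback URL below are illustrative assumptions; check the Pretense docs for the exact names each tool honors.

```typescript
// Resolve the API base URL the way OpenAI-compatible tools commonly do:
// an environment variable override, falling back to the provider default.
function resolveBaseURL(env: Record<string, string | undefined>): string {
  return env.OPENAI_BASE_URL ?? "https://api.openai.com/v1";
}

// Launching your shell with OPENAI_BASE_URL=http://localhost:9339/v1
// routes every request through the local Pretense proxy on port 9339.
const baseURL = resolveBaseURL(process.env);
```

Because the change is in the environment rather than the code, developers keep using their tools unmodified.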
Mutation preserves AI quality
Identifiers are replaced with deterministic synthetic tokens. getUserPaymentToken becomes _fn4a2b. The LLM receives complete, coherent code with correct syntax and structure. Suggestions are just as relevant. After the AI responds, Pretense reverses every mutation. Developers see real variable names.
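The deterministic mutate-and-restore cycle described above can be sketched as follows. The hash-derived token scheme and the helper names here are illustrative assumptions, not Pretense's actual algorithm.

```typescript
import { createHash } from "node:crypto";

// Derive a stable synthetic token from an identifier, so the same name
// always maps to the same token across requests (deterministic mutation).
function syntheticToken(name: string, kind: "fn" | "v"): string {
  const digest = createHash("sha256").update(name).digest("hex");
  return `_${kind}${digest.slice(0, 4)}`;
}

// Replace each identifier with its token and record the reverse mapping.
function mutate(
  source: string,
  names: Map<string, "fn" | "v">
): { mutated: string; reverse: Map<string, string> } {
  const reverse = new Map<string, string>();
  let mutated = source;
  for (const [name, kind] of names) {
    const token = syntheticToken(name, kind);
    reverse.set(token, name);
    mutated = mutated.split(name).join(token);
  }
  return { mutated, reverse };
}

// Apply the reverse mapping to the LLM's response, restoring real names.
function restore(text: string, reverse: Map<string, string>): string {
  let out = text;
  for (const [token, name] of reverse) out = out.split(token).join(name);
  return out;
}
```

Because every occurrence of a name maps to the same token, the model still sees a coherent program, and the reverse map restores the original identifiers in whatever code it returns.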
Secrets blocked at the edge
Pretense runs 30+ secret detection patterns before every request. API keys, connection strings, private keys, and PII are blocked with a clear error. Nothing sensitive reaches the AI provider.
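A minimal sketch of that pre-request scan, assuming a pattern-list design; the three patterns below are illustrative examples, not Pretense's actual 30+ rule set.

```typescript
// Named regex patterns for common secret formats.
const SECRET_PATTERNS: Array<[string, RegExp]> = [
  ["AWS access key", /AKIA[0-9A-Z]{16}/],
  ["GitHub token", /ghp_[A-Za-z0-9]{36}/],
  ["private key header", /-----BEGIN (?:RSA |EC )?PRIVATE KEY-----/],
];

// Return the names of every pattern that matches the outgoing request body.
function scanForSecrets(body: string): string[] {
  return SECRET_PATTERNS.filter(([, re]) => re.test(body)).map(([name]) => name);
}

// Block the request with a clear error if anything sensitive is found.
function assertSafe(body: string): void {
  const hits = scanForSecrets(body);
  if (hits.length > 0) {
    throw new Error(`Request blocked: detected ${hits.join(", ")}`);
  }
}
```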
Complete audit log for every AI request
Every request is logged: provider, timestamp, mutation count, request hash, and response status. Logs are stored locally in SQLite WAL mode. Export as PDF or JSON for SOC2 audits or internal compliance reviews.
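The shape of one such log entry can be sketched from the fields listed above; the record type and function names are assumptions for illustration.

```typescript
import { createHash } from "node:crypto";

// One audit row per AI request, mirroring the fields the log captures:
// provider, timestamp, mutation count, request hash, and response status.
interface AuditRecord {
  provider: string;
  timestamp: string;
  mutationCount: number;
  requestHash: string;
  responseStatus: number;
}

function auditRecord(
  provider: string,
  requestBody: string,
  mutationCount: number,
  responseStatus: number
): AuditRecord {
  return {
    provider,
    timestamp: new Date().toISOString(),
    mutationCount,
    // Hash the body so the log can prove what was sent without storing it.
    requestHash: createHash("sha256").update(requestBody).digest("hex"),
    responseStatus,
  };
}
```

Storing a hash rather than the request body keeps the local log useful as audit evidence without turning it into a second copy of your code.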
Compliance Coverage
Pretense generates audit evidence and compliance documentation for the frameworks that matter to teams adopting AI coding tools.
Audit log exports for SOC 2 Type II controls
Nothing stored on Pretense servers
All processing happens on your machine
Auditable mutation engine on GitHub
What the LLM Actually Sees
Pretense transforms proprietary identifiers into synthetic tokens before transmission. Structure and logic are preserved. Your IP is not.
// Without Pretense: sent to the LLM provider verbatim
async function fetchPatientMedicalHistory(
  patientId: string,
  includeSSN: boolean
) {
  return await ehrClient.getRecord(
    patientId, ENCRYPTION_KEY
  );
}

// With Pretense: mutated before transmission
async function _fn4a2b(
  _v8c3d: string,
  _v2f1a: boolean
) {
  return await _v9e4b._fn7d2c(
    _v8c3d, _v6b1a
  );
}

After the LLM responds, Pretense reverses every mutation. You receive real, working code with your original identifiers restored byte-for-byte.
Frequently Asked Questions
Does Pretense work with all AI coding assistants?
Yes. Pretense intercepts at the HTTP API layer. Any tool that uses OpenAI-compatible API endpoints or Anthropic API endpoints routes through Pretense automatically when you set the base URL environment variable.
Will AI suggestions be less useful after mutation?
No. Mutation preserves code structure, syntax, and the relationships between identifiers: every occurrence of a name maps to the same synthetic token, so the model still sees a coherent program. LLMs reason largely from patterns and structure rather than the literal spelling of names. In internal testing, suggestion quality was indistinguishable from unprotected requests.
How do I enforce adoption across my team?
Pretense includes a CI/CD integration that can block builds when unprotected AI API calls are detected. Same enforcement path as linters and formatters. Once in the pipeline, developers adopt it without manual coordination.
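A hedged sketch of what such a CI gate might check: scanning source files for direct AI provider hostnames, which would indicate traffic bypassing the proxy. The hostname list and function names are illustrative assumptions, not the actual Pretense integration.

```typescript
// Provider hostnames that should never appear as direct call targets
// once all traffic is routed through the local proxy.
const DIRECT_ENDPOINTS = ["api.openai.com", "api.anthropic.com"];

// Report every unprotected endpoint referenced in a source file.
function findUnprotectedCalls(source: string): string[] {
  return DIRECT_ENDPOINTS.filter((host) => source.includes(host));
}

// Fail the build, linter-style, if any file calls a provider directly.
function ciGate(files: Map<string, string>): void {
  for (const [path, source] of files) {
    const hits = findUnprotectedCalls(source);
    if (hits.length > 0) {
      throw new Error(`${path}: direct AI API call to ${hits.join(", ")}; route through the Pretense proxy`);
    }
  }
}
```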
What happens when a developer uses a personal AI subscription?
Pretense works regardless of which account or subscription the AI tool uses. Protection is at the network layer, not the authentication layer. Team policy can mandate proxy configuration for all development environments.
Explore More Use Cases
Protect your team's AI coding tools in 30 seconds
One environment variable. No code changes. No workflow disruption. Pretense intercepts every AI API request from day one.
No credit card required. Free tier available. Local-first, nothing leaves your machine.