
How Pretense Works: A 5-Minute Technical Overview

A clear, visual walkthrough of Pretense's request flow, mutation algorithm, and 30-second deployment. After reading this, you'll know exactly what Pretense does and how to run it.

The One-Sentence Version

Pretense sits between your IDE and the LLM API, mutating proprietary identifiers before they leave your network and reversing the mutation in the response so you get working code back.

The Request Flow

Every request your IDE sends to an LLM passes through a sequence that takes roughly 2-5 milliseconds of overhead:

IDE / Claude Code
      |
      v
Pretense Proxy (localhost:9339)
  - Extract code blocks from prompt
  - Scan identifiers (functions, variables, classes)
  - Mutate: getUserToken -> _fn4a2b
  - Store mutation map locally
      |
      v
LLM API (Anthropic / OpenAI / Google)
  - Receives only synthetic identifiers
  - No proprietary names in transit
      |
      v
Pretense Proxy (reverse pass)
  - Parse response for synthetic identifiers
  - Reverse: _fn4a2b -> getUserToken
  - Return clean, real code to IDE
      |
      v
IDE / Claude Code
  - Receives working code with real names

The proxy is transparent. Developers never change their workflow. The only difference is setting one environment variable.
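The two passes above can be sketched in a few lines of TypeScript. This is a simplified model, not Pretense's implementation: it assumes the identifier list is already extracted, applies a single `_fn` prefix regardless of kind, and does plain text substitution rather than real code parsing.

```typescript
import { createHash } from "crypto";

// synthetic -> original, kept locally so the reverse pass can undo every swap
type MutationMap = Map<string, string>;

// Forward pass: replace each real identifier with a deterministic synthetic
// before the prompt leaves the network.
function forwardPass(prompt: string, identifiers: string[], map: MutationMap): string {
  let out = prompt;
  for (const id of identifiers) {
    const synthetic = "_fn" + createHash("sha256").update(id).digest("hex").slice(0, 4);
    map.set(synthetic, id);              // remember the mapping for the reverse pass
    out = out.split(id).join(synthetic); // replace every occurrence
  }
  return out;
}

// Reverse pass: swap every synthetic in the LLM response back to the real name.
function reversePass(response: string, map: MutationMap): string {
  let out = response;
  for (const [synthetic, original] of map) {
    out = out.split(synthetic).join(original);
  }
  return out;
}
```

Because the map lookup is exact string substitution in both directions, a round trip through both passes returns the original text unchanged.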

What Gets Mutated

Pretense mutates code identifiers, specifically the names that make your codebase proprietary:

- Function and method names: getUserToken, verifyJwtClaims, processPayment
- Variable names in function scope: authPayload, sessionConfig, retryCount
- Class and interface names: AuthService, PaymentProcessor, UserRepository

Pretense does NOT mutate:

- String literals (may contain user-visible content; mutating them breaks output)
- Comments (preserved verbatim for context)
- Type annotations (structural, not proprietary)
- Import paths (required for the LLM to understand module structure)

This is a deliberate trade-off. The goal is to protect what is uniquely yours, the identifier names that reflect your architecture and domain model, while keeping enough context for the LLM to produce useful output.
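That boundary can be illustrated with a toy, single-line tokenizer. This is a hypothetical helper, not Pretense's parser: it treats double-quoted strings and `//` comments as pass-through segments and mutates identifiers only in the remaining code.

```typescript
// Hypothetical sketch: mutate identifiers only in code segments, leaving
// string literals and // comments untouched, per the rules above.
function protectLine(line: string, mutate: (id: string) => string): string {
  // Capturing split: odd-indexed parts are strings/comments and pass through.
  const parts = line.split(/("(?:[^"\\]|\\.)*"|\/\/.*$)/);
  return parts
    .map((part, i) =>
      i % 2 === 1 ? part : part.replace(/\b[A-Za-z_$][\w$]*\b/g, mutate)
    )
    .join("");
}
```

Given a mapping like `getUserToken -> _fn4a2b`, a line such as `retry(getUserToken(), "getUserToken failed") // getUserToken is legacy` would have only the code occurrence rewritten; the string and the comment survive verbatim.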

The Mutation Algorithm

Pretense uses a deterministic 4-character hex hash derived from each identifier string:

```typescript
// Simplified algorithm
function mutate(identifier: string, kind: 'fn' | 'v' | 'cls'): string {
  const hash = sha256(identifier).slice(0, 4); // first 4 hex chars, e.g. "4a2b"
  return `_${kind}${hash}`;
}

// Examples
mutate('getUserToken', 'fn');  // -> _fn4a2b (always)
mutate('authPayload', 'v');    // -> _v9k1m (always)
mutate('AuthService', 'cls');  // -> _cls5b7a (always)
```

Determinism is the key property. The same identifier always maps to the same synthetic. This enables:

1. Exact reversal: After the LLM responds, Pretense looks up every synthetic in the stored map and swaps it back. 100% fidelity guaranteed.
2. Cross-session consistency: If you ask Claude about getUserToken in five separate sessions, it always sees _fn4a2b. The LLM builds coherent understanding across sessions without ever learning the real name.
3. Auditability: Every mutation entry is logged with timestamp, file path, identifier, synthetic, model used, and user.

The mutation map is stored locally at .pretense/mutation-map.json and never transmitted to any server, including Pretense's own infrastructure.
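The on-disk format isn't spelled out here, but a minimal persistence sketch, assuming a flat original-to-synthetic JSON object at that path, could look like this (hypothetical helpers, not the Pretense source):

```typescript
import { readFileSync, writeFileSync, existsSync } from "fs";

// Hedged sketch of local persistence for the mutation map. The real layout of
// .pretense/mutation-map.json may differ; this assumes { original: synthetic }.
type DiskMap = Record<string, string>;

function loadMap(path: string): DiskMap {
  // A missing file simply means no mutations have been recorded yet.
  return existsSync(path) ? JSON.parse(readFileSync(path, "utf8")) : {};
}

function saveMap(map: DiskMap, path: string): void {
  // Written to local disk only; nothing is transmitted anywhere.
  writeFileSync(path, JSON.stringify(map, null, 2));
}
```

The important property is that both reads and writes stay on the developer's machine; the proxy's reverse pass only ever consults this local file.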

Deployment in 30 Seconds

```bash
# Step 1: Install the CLI globally

# Step 2: Initialize in your project
cd your-project
pretense init

# Step 3: Start the proxy
pretense start
# Pretense proxy running on localhost:9339
# Mutation engine initialized
# Provider: Anthropic

# Step 4: Route your AI tool through Pretense
export ANTHROPIC_BASE_URL=http://localhost:9339
claude "refactor src/auth/token-service.ts"
```

That is the complete setup. No configuration changes to your IDE, no policy files to write, no team rollout plan required. The proxy intercepts API calls at the network level.

For Cursor, Copilot, and other OpenAI-compatible tools, set OPENAI_BASE_URL=http://localhost:9339 instead.

What Pretense Protects Against

**Code exfiltration**: If an attacker compromises your LLM provider's infrastructure (or your API keys), they see only synthetic identifiers. They cannot reconstruct your proprietary architecture without the mutation map, which never leaves your machine.

**IP theft via provider data use**: Most AI providers train on API traffic unless you opt out. Even with opt-out, Pretense ensures your real identifiers are never transmitted in the first place. You are not relying on a contractual promise.

**Compliance violations**: HIPAA, SOC2, and PCI-DSS all have provisions about transmitting sensitive system details to third parties. Pretense's audit trail documents every mutation, giving your compliance team proof that proprietary identifiers were protected.

**Shadow AI exposure**: Developers using personal accounts or unapproved tools will still send code to LLMs. Pretense can be deployed as a team-wide proxy with a CI gate that blocks unprotected API calls, bringing shadow AI usage into compliance without banning the tools.

What Pretense Does Not Protect

**Intentional code sharing**: If a developer pastes proprietary code into a web-based chat interface (ChatGPT.com, Claude.ai), Pretense does not intercept that. It only protects API-routed traffic.

**Plain text prompts without code**: If a developer types "our getUserToken function works by..." in a freeform prompt, Pretense cannot parse that as a code identifier and will not mutate it. Pretense is a code firewall, not a general DLP tool.

**Secrets in non-code context**: If a developer pastes an API key into a markdown file that gets sent as context, Pretense's secret scanner will block it, but mutation does not apply to free-form text.

The Audit Trail

Every mutation session writes a structured log entry:

```json
{
  "timestamp": "2026-04-01T14:32:11Z",
  "session": "sess_7f2a",
  "file": "src/auth/token-service.ts",
  "model": "claude-opus-4",
  "mutations": [
    { "original": "getUserToken", "synthetic": "_fn4a2b", "kind": "fn" },
    { "original": "AuthService",  "synthetic": "_cls5b7a", "kind": "cls" }
  ],
  "secretsBlocked": 1,
  "roundTripFidelity": "100%"
}
```

Run pretense audit to view session history. Run pretense audit --export=csv to produce a report suitable for SOC2 evidence packages.

The audit log is the artifact your compliance team needs. It proves that no unprotected API calls were made, which identifiers were protected, and when.
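Because the entries are structured, a compliance summary can be computed directly from them. A sketch, assuming one entry per session and the field names from the sample above:

```typescript
// Shape of one audit entry, matching the sample log record above.
interface AuditEntry {
  timestamp: string;
  session: string;
  file: string;
  model: string;
  mutations: { original: string; synthetic: string; kind: string }[];
  secretsBlocked: number;
  roundTripFidelity: string;
}

// Roll up an audit log into the headline numbers a compliance report needs.
function summarize(entries: AuditEntry[]) {
  return {
    sessions: new Set(entries.map((e) => e.session)).size,
    identifiersProtected: entries.reduce((n, e) => n + e.mutations.length, 0),
    secretsBlocked: entries.reduce((n, e) => n + e.secretsBlocked, 0),
  };
}
```

This is the kind of aggregation pretense audit --export=csv presumably performs; the sketch just shows that the log format makes it a straightforward fold over the entries.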

[Get started free](/early-access)
