Integration Guide

Protect Your Codebase from OpenAI Codex

Codex CLI analyzes full repository context. Pretense ensures every identifier in that context is mutated before leaving your machine.

- Under 10ms added latency
- Local-first: nothing leaves your machine
- 5 min setup time
- Byte-exact reversal guaranteed

How to Protect OpenAI Codex in 3 Steps

One environment variable. No code changes. No workflow disruption.

01

Start Pretense

Run pretense init to configure Pretense for your project, then pretense start to launch the local proxy on port 9339.

02

Intercept Codex

Prepend OPENAI_BASE_URL=http://localhost:9339 to your codex commands.

03

Full Coverage

Every Codex request is mutated. File names, function names, class names -- all replaced with synthetic tokens.

What OpenAI Codex Actually Sends to the LLM

Pretense transforms your real identifiers into synthetic tokens before transmission. The LLM sees structure and logic, not your proprietary names.

Without Pretense: raw identifiers exposed
// Before Pretense mutation
async function getUserPaymentToken(userId: string) {
  return await stripeClient.createToken(userId, STRIPE_SECRET);
}

Your function names, secrets, and architecture are transmitted verbatim.

With Pretense: synthetic identifiers only
// After Pretense mutation (what the LLM sees)
async function _fn4a2b(_v8c3d: string) {
  return await _v2f1a._fn9e4b(_v8c3d, _v7d2c);
}

Structure intact. LLM quality preserved. Your IP stays private.

After the LLM responds, Pretense reverses every mutation. You receive real, working code with your original identifiers restored byte-for-byte.

Quick Setup for OpenAI Codex

terminal
$ pretense init
$ pretense start
$ OPENAI_BASE_URL=http://localhost:9339 codex
NOTE

Prefix codex commands with OPENAI_BASE_URL=http://localhost:9339

Frequently Asked Questions

Does Pretense work with Codex CLI sandbox mode?

Yes. Pretense intercepts at the API layer, not the CLI layer. All modes are covered.

What languages does mutation cover?

TypeScript, JavaScript, Python, Go, and Java. Pretense scans token-level identifiers across all supported languages.

How does Pretense handle large repository context?

Mutation is in-memory and sub-millisecond per identifier. Large codebases add under 50ms of total overhead.

Protect OpenAI Codex in 5 Minutes

No code changes. No workflow disruption. One environment variable and Pretense intercepts every request.

No credit card required. 5-minute setup. Local-first: nothing leaves your machine.
