The Developer's Guide to Using AI Coding Tools Without Getting Fired
Most companies prohibit sending proprietary code to external APIs. It is in the employee handbook you did not read. Here is how to use AI tools anyway, pragmatically and safely.
The Unwritten Rule Nobody Told You
There is a section in your employee handbook. It is somewhere between the vacation policy and the acceptable use of company equipment clause. Nobody reads it during onboarding. Nobody flags it at your first sprint planning meeting.
It says something like: "Employees may not transmit proprietary, confidential, or trade secret information to third-party systems without explicit authorization from the information security team."
Your company's code is proprietary. The LLM API running on Anthropic's or OpenAI's servers is a third-party system.
Every time you paste a function from your authentication service into ChatGPT to ask why it is throwing a 403, you are technically in violation of this clause. Every Copilot completion generated from your production codebase sends the surrounding file context to an external server.
This is not a legal brief. It is just useful to know what the actual rule says, so you can make an informed decision about how to handle it.
What Your Legal Team Would Say If They Knew
Let's do a quick exercise. Imagine the following scenarios play out, and your legal or security team finds out.
**Scenario 1: You paste your customer data schema into ChatGPT.**
Your database schema for the users table includes field names that reflect your business model. subscription_tier, churn_risk_score, payment_method_last4. You are asking GPT-4 to help optimize a query. The schema goes to OpenAI's servers. Your legal team's concern: the schema reveals your business logic and your customers' financial data structure.
**Scenario 2: You use Copilot on authentication code.**
GitHub Copilot receives your entire open file as context when generating completions. If you are working in src/auth/session-manager.ts, Copilot sees your session validation logic, your JWT structure, your permission model, and whatever comments you have written about how it works. Your security team's concern: your authentication implementation is now on Microsoft's servers.
**Scenario 3: The API key in a comment gets included in context.**
Six months ago you pasted an API key into a comment, alongside a note: // TODO: rotate this before deploy. The key is still there, in a file that is part of the context your IDE sends with the next Copilot request. Your security team's reaction to scenario 3 is not hypothetical concern. It is an incident report.
The Pragmatic Developer's Position
Here is the honest counterpoint: AI tools make you 30 to 40 percent faster on most coding tasks. That is not marketing copy. It is in line with what productivity studies from GitHub, Stripe, and McKinsey have reported. Writing tests, understanding unfamiliar codebases, refactoring, and debugging all go faster with AI assistance.
Banning these tools hurts the company more than using them carefully.
Teams that ban AI tools have three failure modes. First, engineers use them anyway on personal devices or accounts, creating shadow AI behavior that is completely untracked and unauditable. Second, engineers at companies with the ban compete against engineers at companies without it, and the productivity gap compounds over 12 to 18 months. Third, the best engineers leave for environments where they can use the tools they want to use.
The pragmatic position is: use the tools. Use them carefully. Put one layer of protection in place so you can honestly say you took reasonable precautions.
Three Rules That Protect You AND Let You Use AI
These three rules do not require asking permission. They do not require a team rollout. You can implement all three today.
**Rule 1: Route your AI tool through a local proxy.**
A local proxy sits between your IDE and the LLM API. It intercepts outbound calls, mutates proprietary identifiers, blocks secrets, and logs what was transmitted. You keep using the same tool. You just add one environment variable.
Pretense is the tool built specifically for this. Install it globally, start the proxy, set ANTHROPIC_BASE_URL=http://localhost:9339 or OPENAI_BASE_URL=http://localhost:9339, and your tool continues working exactly as before. Every API call is now protected and logged.
The proxy approach also gives you an audit trail. If your security team ever asks what was sent to external AI APIs during a given project, you have a structured log with timestamps, mutations applied, and secrets blocked.
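The weakest link in any proxy setup is a fresh terminal where the base URL is unset, so requests silently go straight to the upstream API. A minimal sketch of a guard against that, suitable for a shell profile; the proxy_ok helper is hypothetical, and the port matches the Pretense default described above.

```shell
# Hypothetical guard: check that the AI tool's base URL points at a
# local proxy before starting a session.
proxy_ok() {
  case "$1" in
    http://localhost:*|http://127.0.0.1:*) return 0 ;;  # local proxy
    *) return 1 ;;                                      # direct upstream
  esac
}

export ANTHROPIC_BASE_URL=http://localhost:9339
if proxy_ok "$ANTHROPIC_BASE_URL"; then
  echo "routing through local proxy"
else
  echo "WARNING: requests will go directly to the upstream API" >&2
fi
```

With this in place, a missing proxy fails loudly at the start of a session instead of leaking context quietly for weeks.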
**Rule 2: Never copy-paste secrets.**
This sounds obvious. It is also the source of a large proportion of credential leaks. The pattern is: you are debugging an issue that requires a real API key to reproduce. You temporarily paste the key into a file, or into a comment, or into a test fixture. Then you forget about it. Three weeks later, that file is in AI context. The fix: use environment variable references exclusively in code.
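One cheap way to enforce the environment-variable rule is a grep over the source tree before each commit. The sketch below creates its own demo file to scan; the file name and key patterns are illustrative, not exhaustive.

```shell
# Hedged sketch: flag literal key-shaped strings that should be
# environment variable references. Demo file and patterns are
# illustrative only.
mkdir -p demo/src
printf 'const key = "AKIAABCDEFGHIJKLMNOP";\n' > demo/src/leak.ts

if grep -rnE '(sk-[A-Za-z0-9]{20,}|AKIA[0-9A-Z]{16})' demo/src; then
  echo "possible hardcoded secret; read it from the environment instead" >&2
fi
```

A real pre-commit hook would scan the tracked files and exit nonzero on a match; dedicated scanners like gitleaks do this far more thoroughly.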
**Rule 3: Use enterprise tiers of tools when your company pays for them.**
If your company has a GitHub Copilot Business or Enterprise subscription, use that account, not your personal Copilot Individual account. The data handling terms are materially different. Enterprise tiers typically include explicit opt-out from training, shorter log retention, and contractual data processing agreements.
The 2-Minute Setup That Covers You
If you do one thing after reading this, do this:
```shell
npm install -g pretense
cd your-project
pretense init
pretense start
export ANTHROPIC_BASE_URL=http://localhost:9339
```

That is it. Your AI tool now routes through a local proxy that mutates proprietary identifiers before they leave your machine, blocks secrets, and logs every session. You did not change your workflow. You added two minutes of setup.
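Before trusting the routing, it is worth confirming something is actually listening on the proxy port. A hedged sketch, assuming the default port 9339 from the setup above; check_proxy is a hypothetical helper, not part of Pretense.

```shell
# Hypothetical helper: probe a URL with a short timeout to confirm
# the local proxy is actually listening.
check_proxy() {
  curl -s -o /dev/null --max-time 2 "$1"
}

if check_proxy http://localhost:9339; then
  echo "proxy reachable"
else
  echo "proxy not reachable; start it with 'pretense start'" >&2
fi
```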
When your security team or your manager asks whether you are handling AI tool usage responsibly, the answer is: yes, you are routing through a local proxy with an audit trail. That answer covers you.
[Install Pretense free](/early-access)