The AI Coding Security Crisis Is Real
Real incidents, real fines, real data. This is what happens when engineering teams adopt AI coding tools without protection.
Documented Incidents
Click any incident to see what happened and how Pretense would have prevented it.
Samsung: Proprietary Code via ChatGPT
Engineers at Samsung Semiconductor sent proprietary source code, meeting notes, and hardware specs to ChatGPT for debugging help.
Moltbook: 1.5M API Keys in Vibe-Coded App
A vibe-coded app built using AI assistants committed 1.5 million third-party API keys to a public repository. The AI-generated code lacked secrets hygiene.
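Basic secrets hygiene means scanning code for credential-shaped strings before it is committed or sent anywhere. A minimal sketch in TypeScript, with illustrative patterns that are assumptions for this example, not Pretense's actual detection rules:

```typescript
// Illustrative only: a minimal secret scan over source text.
// These regexes are common-format heuristics, not a complete ruleset.
const SECRET_PATTERNS: RegExp[] = [
  /AKIA[0-9A-Z]{16}/,                       // AWS access key ID format
  /sk-[A-Za-z0-9]{20,}/,                    // common "sk-" style API keys
  /-----BEGIN (RSA |EC )?PRIVATE KEY-----/, // PEM private key headers
];

function findSecrets(source: string): string[] {
  const hits: string[] = [];
  for (const line of source.split("\n")) {
    for (const pattern of SECRET_PATTERNS) {
      const match = line.match(pattern);
      if (match) hits.push(match[0]);
    }
  }
  return hits;
}
```

Wired into a pre-commit hook, a scan like this refuses the commit when `findSecrets` returns anything, which is the cheapest place to stop a key from ever reaching a public repository.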
CamoLeak: AWS Keys via Copilot Invisible Markdown
CamoLeak (CVSS 9.6) demonstrated that GitHub Copilot could silently exfiltrate AWS credentials by embedding them in invisible Markdown image tags within AI-generated code suggestions.
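The exfiltration shape here is detectable on the output side: a Markdown image whose URL carries a long, opaque query payload is a classic data-smuggling channel. A hedged sketch of such a check (the regex and threshold are illustrative assumptions, not the actual CamoLeak detection logic):

```typescript
// Illustrative sketch: flag Markdown image tags in model output whose URLs
// carry long query-string payloads -- the shape used to smuggle data out.
const MD_IMAGE = /!\[[^\]]*\]\(([^)\s]+)\)/g;

function suspiciousImageUrls(modelOutput: string, maxQueryLength = 64): string[] {
  const flagged: string[] = [];
  for (const match of modelOutput.matchAll(MD_IMAGE)) {
    const url = match[1];
    const queryStart = url.indexOf("?");
    // A long opaque query string on an image URL can encode exfiltrated data.
    if (queryStart !== -1 && url.length - queryStart - 1 > maxQueryLength) {
      flagged.push(url);
    }
  }
  return flagged;
}
```

A real defense would also strip or proxy remote images entirely, since the image need not be visible to the user for the request (and the payload) to fire.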
GitHub Copilot Secret Leak Rate: 6.4%
CUHK research published at ACM FSE 2024 found that repositories using GitHub Copilot leak secrets at a 6.4% rate versus 4.6% for non-Copilot repos, roughly 40% higher.
LiteLLM Supply Chain: All API Keys Stolen
LiteLLM, a popular LLM proxy with 41.8K GitHub stars, was compromised via a supply-chain attack. All API keys routed through it were exposed.
OWASP LLM Top 10 Coverage
The OWASP LLM Top 10 defines the canonical AI security risk taxonomy. Here is where Pretense stands today.
Source: OWASP Top 10 for LLM Applications (2025 edition). Roadmap items tracked in the public changelog.
Organizations That Banned AI Coding Tools
81% of developers have security concerns about AI agents. 26% have been blocked by IT/InfoSec from using AI tools at work.
Samsung
Source code + meeting data leaked via ChatGPT (2023)
Apple
Internal data security: no public AI tool approval process
JPMorgan Chase
Regulatory compliance: SEC AI guidance
Goldman Sachs
Client data confidentiality requirements
Wells Fargo
OCC/Federal Reserve AI governance requirements
Deutsche Bank
BaFin AI governance + GDPR controls
Northrop Grumman
ITAR/EAR export control restrictions
US Congress
Capitol classified network security policy
The problem: Banning tools costs more than protecting them. A developer who loses access to AI coding tools loses 30–55% productivity (GitHub/McKinsey 2024). Pretense lets security teams say yes instead of no.
Compliance Requirements
Regulations written before AI coding tools existed now govern how you use them. The gaps are not theoretical. They are audit findings.
§164.312 - Technical Safeguards
Standard AI SaaS accounts have no Business Associate Agreement (BAA). Sending any code referencing PHI (patient IDs, diagnosis codes, FHIR paths) through such an account is an automatic HIPAA violation.
Consequence
OCR fines up to $1.9M per violation category. Healthcare devs effectively blocked from AI tools.
Art. 44 - Third Country Transfers
Italy's data protection authority fined OpenAI €15M in December 2024. EU developers sending code with personal data identifiers to US-hosted APIs risk similar enforcement.
Consequence
4% of global annual revenue or €20M, whichever is higher. US API endpoints = non-adequate third country.
Full enforcement since March 31, 2025
PCI-DSS v4 Requirement 6.3.3 mandates that all software components are reviewed for security. Payment function names and variable identifiers sent to external AI = automatic audit finding.
Consequence
Loss of card processing ability. Fines of $5K–$100K/month from card brands. QSA audit failure.
Logical Access Controls
Auditors now explicitly ask: 'What controls govern employee use of AI coding tools?' Most engineering teams have no documented answer, no controls, and no audit trail.
Consequence
SOC2 Type II failure. Loss of enterprise contracts that require SOC2 attestation.
Pretense vs The Problem
Same developer workflow. Same AI response quality. Zero proprietary identifiers leave your machine.
// Your code - sent to Anthropic as-is
async function getUserToken(userId: string) {
  const ANTHROPIC_API_KEY = process.env.ANTHROPIC_API_KEY;
  const user = await processPaymentData(userId);
  return generateAuthToken(user, ANTHROPIC_API_KEY);
}

// What Anthropic actually receives
async function _fn4a2b(_v9c1e: string) {
  // [BLOCKED] credential removed by Pretense
  const _v3d7f = await _fn8c3d(_v9c1e);
  return _fn2a4b(_v3d7f);
}

// Restored response is byte-identical to original

Protect your team's code today
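The masking and restoration shown above can be sketched as a reversible rename map. This is an illustrative simplification, not Pretense's implementation; production tooling would need language-aware parsing rather than word-boundary regex renames:

```typescript
// Minimal sketch of reversible identifier pseudonymization.
// Assumes aliases never collide with real identifiers in the source.
function pseudonymize(source: string, renames: Map<string, string>): string {
  let out = source;
  for (const [original, alias] of renames) {
    out = out.replace(new RegExp(`\\b${original}\\b`, "g"), alias);
  }
  return out;
}

function restore(obfuscated: string, renames: Map<string, string>): string {
  let out = obfuscated;
  for (const [original, alias] of renames) {
    out = out.replace(new RegExp(`\\b${alias}\\b`, "g"), original);
  }
  return out;
}
```

The key property is the round trip: applying `restore` to the AI's response over the masked names yields code containing your real identifiers, while the provider only ever saw aliases.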
30-second deploy. Local-first. The same protection Samsung needed before 2023. Available now, free, in your terminal.
No telemetry · No cloud dependency · Local-first architecture