
# Your SOC 2 Auditor Will Ask About AI Code Security. Here Is What to Say.

SOC 2 CC6.7 and CC7.2 now effectively require demonstrating control over AI tool usage. Here is a practical guide with ready-to-use control documentation, the four artifacts your auditor wants, and a template control statement you can use today.

## The Audit Question Nobody Prepared For

In 2025, SOC 2 auditors began systematically asking about AI coding tool usage across all Type II engagements. The question usually surfaces in the Change Management or Logical Access controls:

*"Do engineers use AI coding assistants? What data is transmitted to AI providers? What controls are in place?"*

Most security teams scramble. They do not have good answers because the tooling evolved faster than the compliance frameworks.

## The SOC 2 Trust Services Criteria That Apply

AI coding tool usage touches multiple TSC domains:

**CC6.1 — Logical Access Security**: AI tools require API keys. Are those keys managed with the same rigor as other access credentials?

**CC6.7 — Transmission Security**: Code transmitted to AI providers is data in transit. What encryption is in place? Are the providers themselves SOC 2 compliant?

**CC7.2 — System Anomalies**: Unusual spikes in AI API usage could indicate misuse. Is this traffic monitored?

**C1.1 — Confidentiality — Data Classification**: Is there a process to identify classified data and prevent developers from sending it to AI providers?
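
The monitoring that CC7.2 asks about can start as a simple statistical baseline over daily AI API request volume. A minimal sketch, assuming you already aggregate per-day request counts from the proxy; the three-sigma threshold is an illustrative choice, not a standard requirement:

```python
from statistics import mean, stdev

def is_spike(history, todays_count, threshold_sigma=3.0):
    """Return True if today's AI API request count is more than
    threshold_sigma standard deviations above the historical mean."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return todays_count > mu
    return (todays_count - mu) / sigma > threshold_sigma

# A quiet baseline of daily request counts, then one abnormal day:
baseline = [110, 95, 102, 98, 105, 100]
print(is_spike(baseline, 900))  # True: flag for review
print(is_spike(baseline, 104))  # False: within normal variation
```

In practice you would run this per developer and per provider, and route flagged days into the same review queue as your other anomaly alerts.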

## What Auditors Are Looking For

Based on SOC 2 audit conversations in 2025 to 2026, auditors now expect four things:

**1. A data classification policy that covers AI tools** — explicitly stating which data categories may not go to external AI providers and which providers are approved.

**2. Technical controls that enforce the policy** — a policy document without technical enforcement gets a finding. Auditors want to see that you cannot accidentally send classified data because the system prevents it.

**3. Evidence of monitoring** — audit logs showing what was sent, by whom, and when. Evidence that those logs are reviewed.

**4. Vendor assessment for AI providers** — SOC 2 compliance from OpenAI and Anthropic covers their own systems. Your obligation is to assess whether terms of service, data retention, and security practices meet your requirements.

## The Control Framework

### Policy Layer

```markdown
Approved Tools: GitHub Copilot Business, Claude Code via Pretense proxy,
Cursor with Pretense proxy

Data Classification Rules:

| Classification | Allowed to external AI providers?  |
| -------------- | ---------------------------------- |
| Public         | Yes, unrestricted                  |
| Internal       | Yes, with mutation proxy active    |
| Confidential   | No, must use local model only      |
| Restricted     | Blocked automatically by proxy     |
| Secrets        | Blocked automatically by proxy     |
```
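
One way to make this policy enforceable rather than aspirational is to encode the classification table as data that the proxy consults on every request. A sketch; the action labels ("allow", "mutate", "local_only", "block") are illustrative names, not part of any real proxy API:

```python
# The policy table encoded as data, so enforcement follows the document.
POLICY = {
    "public":       "allow",       # unrestricted
    "internal":     "mutate",      # mutation proxy must be active
    "confidential": "local_only",  # local model only
    "restricted":   "block",       # blocked automatically
    "secrets":      "block",       # blocked automatically
}

def decide(classification: str) -> str:
    """Map a data classification level to a proxy action.
    Unknown levels fail closed to 'block'."""
    return POLICY.get(classification.lower(), "block")

print(decide("Internal"))      # mutate
print(decide("unclassified"))  # block (fail closed)
```

Failing closed on unknown classification levels is the detail auditors tend to probe: a gap in the mapping should deny transmission, not silently permit it.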

### Technical Control Layer

```yaml
# pretense.config.yaml
proxy:
  port: 9339
  approved_providers:
    - api.openai.com
    - api.anthropic.com
secret_scan:
  enabled: true
  block_on_critical: true
mutation:
  enabled: true
audit:
  enabled: true
  retention_days: 365
  siem_export: true
```
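
The `secret_scan` block above implies a scanner that inspects each outgoing prompt before transmission. A minimal sketch of that idea with a few example regex patterns; these patterns are illustrative only, not Pretense's actual detection rules:

```python
import re

# Example patterns only; a production scanner ships far more rules.
CRITICAL_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "github_token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of critical secret patterns found in a prompt."""
    return [name for name, pat in CRITICAL_PATTERNS.items() if pat.search(prompt)]

def should_block(prompt: str, block_on_critical: bool = True) -> bool:
    """Refuse transmission when block_on_critical is set and a scan hits."""
    return block_on_critical and bool(scan_prompt(prompt))

print(should_block("use key AKIA" + "A" * 16))  # True
print(should_block("refactor this function"))   # False
```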

### Evidence Layer

The audit log provides the evidence auditors need:

```json
{
  "timestamp": "2025-04-17T09:14:33Z",
  "event": "prompt_transmitted",
  "developer": "engineer@company.com",
  "provider": "anthropic",
  "prompt_token_count": 847,
  "secrets_blocked": 0,
  "mutations_applied": 12,
  "classification_level": "internal"
}
```
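
Turning those logs into the "evidence of monitoring" auditors want means reviewing them regularly, not just retaining them. A sketch of a review script over JSON-lines records, using the field names from the sample above; the summary shape is an assumption about what your reviewers need:

```python
import json
from collections import Counter

def summarize_audit_log(lines):
    """Aggregate JSON-lines audit records into a review summary:
    event counts, per-developer volume, and blocked-secret incidents."""
    events = Counter()
    per_developer = Counter()
    incidents = []
    for line in lines:
        rec = json.loads(line)
        events[rec["event"]] += 1
        per_developer[rec["developer"]] += 1
        if rec.get("secrets_blocked", 0) > 0:
            incidents.append(rec)  # candidates for the incident log
    return {
        "events": dict(events),
        "per_developer": dict(per_developer),
        "incidents": incidents,
    }

sample = [
    '{"event": "prompt_transmitted", "developer": "a@co.com", "secrets_blocked": 0}',
    '{"event": "prompt_transmitted", "developer": "b@co.com", "secrets_blocked": 2}',
]
summary = summarize_audit_log(sample)
print(summary["events"])          # {'prompt_transmitted': 2}
print(len(summary["incidents"]))  # 1
```

Archiving each run's output alongside a reviewer sign-off is what turns "we have logs" into "we review logs", which is the distinction auditors test.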

## The Three Common Gaps

**Gap 1: No logging** — You have a policy and approved tools, but no logs showing the policy is enforced. "We trust our developers" is not an acceptable control answer.

**Gap 2: Policy without enforcement** — The policy says do not send PII to AI tools. There is no technical control that prevents it. Auditors treat this as an ineffective control.

**Gap 3: Vendor management gaps** — OpenAI is SOC 2 Type II certified. But have you reviewed their report? Do you have a record of that review? Vendor management requires documentation.

## The Evidence Package

For a clean SOC 2 audit related to AI coding tools, prepare:

1. AI tool policy (current version, with version history)
2. Approved provider list with justification for each
3. Vendor SOC 2 reports from OpenAI and Anthropic (download from their trust portals)
4. Technical control documentation (proxy configuration, scanning rules)
5. Audit log samples (3 to 6 months)
6. Incident log (any policy violations and how they were handled)
7. Training records (evidence that developers were trained on the policy)

## Timeline to Compliance

| Week | Task                                   |
| ---- | -------------------------------------- |
| 1    | Inventory current AI tool usage        |
| 2    | Draft AI tool policy, get sign-off     |
| 3    | Deploy proxy, enable logging           |
| 4    | Train developers, push configuration   |
| 5-8  | Monitor, tune, collect evidence        |
| 9-12 | Review evidence, prepare audit package |

The technical deployment takes 1 to 2 days; accumulating policy sign-off and sufficient evidence for an audit period takes 2 to 3 months. Start now, not the week before your audit.

[Download SOC2 Control Template](/trust)
