SOC2 Compliance for AI-Assisted Development Teams
SOC2 Type II auditors are increasingly asking about AI tool usage controls. CC6.7 requires demonstrating that third-party data access is controlled. If your team uses Copilot, Cursor, or Claude, you need a documented control. Here is what to build.
Why SOC2 Auditors Are Asking About AI Tools Now
SOC2 Type II audits evaluate the operational effectiveness of controls over a 6-12 month period. AI coding tools became mainstream in 2023-2024. That means auditors evaluating controls through 2025 and 2026 are looking at periods when your developers were already using Copilot, Claude, and Cursor daily.
The Trust Services Criteria do not mention AI coding tools specifically. They do not need to. The relevant criteria are written broadly enough to cover any mechanism by which data leaves your control environment.
The Relevant Controls
CC6.7: Transmission of Confidential Information
CC6.7 requires that entities "restrict the transmission of confidential information using end-user messaging systems, email, or other communication mechanisms by authorized users to only authorized external parties."
AI coding tool API calls are transmissions. The code sent in prompts is potentially confidential information. The question your auditor will ask: what controls ensure that only authorized data is transmitted?
CC6.1: Logical Access Restrictions
CC6.1 requires that logical access is restricted to authorized users. When developers send code to AI APIs, that code is being processed by a third party. CC6.1 requires demonstrating that the scope of this access is controlled and documented.
CC7.2: Monitoring for Anomalies
CC7.2 requires monitoring for anomalies and events that could indicate security threats. Bulk code transmission to AI APIs, particularly from accounts that do not typically make API calls, is exactly the type of anomaly this control addresses.
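To make the CC7.2 idea concrete, here is a minimal sketch of the kind of anomaly check an auditor expects to see: flagging a developer whose daily token volume suddenly dwarfs their own baseline. The threshold, data shape, and account names are illustrative assumptions, not part of any specific product.

```python
from statistics import mean, stdev

def flag_anomalies(daily_tokens, threshold_sigma=3.0):
    """Flag developers whose latest daily token count far exceeds their baseline.

    daily_tokens: dict mapping developer -> list of daily token counts,
    oldest first. Returns the developers whose most recent day exceeds
    mean + threshold_sigma * stdev of their prior history.
    """
    flagged = []
    for dev, history in daily_tokens.items():
        if len(history) < 2:
            continue  # not enough history to establish a baseline
        baseline, latest = history[:-1], history[-1]
        mu = mean(baseline)
        sigma = stdev(baseline) if len(baseline) > 1 else 0.0
        if latest > mu + threshold_sigma * max(sigma, 1.0):
            flagged.append(dev)
    return flagged

usage = {
    "alice@company.com": [1800, 2100, 1950, 2000, 2050],   # steady usage
    "bob@company.com":   [1200, 1100, 1300, 1250, 48000],  # sudden bulk transmission
}
print(flag_anomalies(usage))  # ['bob@company.com']
```

A real deployment would feed this from the proxy's audit log rather than a hand-built dict, but the shape of the control is the same: per-account baselines plus an alert on deviation.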
What Auditors Will Ask For
Based on audit preparation conversations with security teams in 2025-2026, auditors are asking for:
1. A written policy governing AI tool usage that includes data classification restrictions
2. Technical evidence that the policy is enforced (not just communicated)
3. Audit logs showing what was transmitted and when
4. Evidence of an incident review process when policy violations occur
A policy document alone satisfies none of these. You need the technical controls and the logs.
Building the Control Stack
Step 1: Write the Policy
Your AI tool usage policy needs to cover:
- Approved tools (name them specifically; a whitelist is more auditable than a blacklist)
- Data classification rules (define which data categories may and may not be included in AI prompts)
- Enforcement mechanisms (how the policy is enforced, not just communicated)
- Exception handling (how developers request approval for edge cases)
- Incident reporting (what to do when sensitive data is accidentally sent)
Template control statement you can adapt:
"[Company] permits the use of approved AI coding tools for software development activities. Developers may not include source code containing [data classification categories] in AI tool prompts without prior approval. All AI API traffic from developer workstations is routed through the approved Pretense proxy, which enforces data mutation controls and generates audit logs for compliance review. Audit logs are retained for [retention period] and reviewed [quarterly/on incident trigger]."
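The data classification restriction in a policy like the one above can be checked mechanically before a prompt leaves the workstation. Here is a minimal illustration; the regex patterns are placeholders for a handful of common secret formats, not an exhaustive or product-specific rule set.

```python
import re

# Illustrative patterns only -- a real policy check would use the
# organization's own data classification rules and a proper secret scanner.
RESTRICTED_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key_block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def check_prompt(text):
    """Return the restricted-data categories found in a prompt, if any."""
    return [name for name, pattern in RESTRICTED_PATTERNS.items()
            if pattern.search(text)]

violations = check_prompt("key = AKIAIOSFODNN7EXAMPLE\nrefactor this module")
print(violations)  # ['aws_access_key']
```

The point for the auditor is that the check runs before transmission, so a violation can be blocked rather than merely logged after the fact.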
Step 2: Implement the Technical Control
The most auditor-friendly technical control is a proxy that sits between developer workstations and AI APIs. This approach provides:
- A single enforcement point (easier to audit than per-developer configurations)
- Complete audit logs (every API call is logged with timestamp, content hash, and mutation record)
- Demonstrable enforcement (you can show the auditor the traffic flows)
Pretense deploys as a local proxy on each developer workstation or as a network-level proxy for the engineering subnet:
```shell
# Per-developer deployment
npm install -g pretense
pretense init

# Environment variables route all AI API traffic through the proxy
export ANTHROPIC_BASE_URL=http://localhost:9339
export OPENAI_BASE_URL=http://localhost:9339
```
For the network-level approach, Pretense can be deployed as a Docker container that handles routing for the entire engineering network. See the [deployment documentation](/docs) for the Docker Compose configuration.
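Whichever deployment you choose, it is worth verifying that workstations are actually routed through the proxy rather than assuming so. The sketch below is a generic preflight check built only on the environment variables from the per-developer setup above; it is not a Pretense feature, just an assumption-labeled illustration.

```python
import os

# Proxy address from the per-developer setup shown above.
REQUIRED_PROXY = "http://localhost:9339"
ROUTED_VARS = ["ANTHROPIC_BASE_URL", "OPENAI_BASE_URL"]

def proxy_misconfigurations(environ=os.environ):
    """Return the routing variables that are unset or not pointing at the proxy."""
    return [var for var in ROUTED_VARS
            if environ.get(var) != REQUIRED_PROXY]

# Example: one variable routed correctly, one missing entirely
env = {"ANTHROPIC_BASE_URL": "http://localhost:9339"}
print(proxy_misconfigurations(env))  # ['OPENAI_BASE_URL']
```

Running a check like this in CI or a login script turns "the proxy is deployed" from an assertion into evidence.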
Step 3: Generate the Audit Artifacts
Pretense generates SOC2-ready audit logs in JSON format:
```json
{
  "timestamp": "2026-04-03T14:23:11Z",
  "developer": "engineer@company.com",
  "repository": "payment-service",
  "file": "src/payments/processor.ts",
  "identifiers_scanned": 47,
  "identifiers_mutated": 12,
  "secrets_detected": 0,
  "model": "claude-3-5-sonnet",
  "tokens_sent": 1842,
  "mutation_map_id": "mm_7x2k9p"
}
```

Each log entry includes what was scanned, what was mutated, whether any secrets were detected (and blocked), and the full mutation record for reversal verification.
Pretense also generates SOC2-formatted compliance reports on demand:
```shell
pretense report --format soc2 --period 2026-01-01:2026-03-31 --output q1-2026-ai-security.pdf
```

The report includes control effectiveness metrics, policy exception counts, an incident log, and mutation coverage statistics.
The Four Artifacts Your Auditor Wants
When your auditor asks about AI tool controls, prepare these four artifacts:
| Artifact | What It Shows | How to Generate |
|---|---|---|
| Written AI tool policy | Policy exists and covers required scope | Policy document, dated and approved |
| Proxy configuration evidence | Technical control is deployed | `pretense status --format audit` |
| 90-day audit log sample | Control operates continuously | `pretense report --period [range]` |
| Incident review records | Anomalies are reviewed and resolved | Exported from Pretense dashboard |
With these four items, you can answer every AI tool question in a SOC2 review.
Common Audit Findings to Avoid
Finding: No documented AI tool policy
The fix is straightforward: write the policy, get it approved, distribute it. The harder part is making it enforceable rather than advisory.
Finding: Policy communicated but not technically enforced
This is the finding that a proxy control addresses directly. "We sent an email" is not a control. A proxy that enforces mutation and generates audit logs is.
Finding: Audit logs do not include request content
Generic network logs capture connection metadata but not request bodies. Your auditor cannot verify what was sent from a TCP log. The proxy log must include content evidence (even hashed) to be useful.
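Hashing the request body is the standard way to get content evidence without retaining the content itself. A minimal sketch of what such a log field might look like; the entry structure here is illustrative, not a particular product's schema:

```python
import hashlib

def content_hash(request_body: bytes) -> str:
    """Hash a request body so the log proves *what* was sent without storing it."""
    return hashlib.sha256(request_body).hexdigest()

body = b'{"model": "claude-3-5-sonnet", "messages": []}'
entry = {"direction": "outbound", "content_sha256": content_hash(body)}
print(entry["content_sha256"])  # a value an auditor can re-derive from the same body
```

Given the original request body, anyone can recompute the hash and confirm the log entry corresponds to it, which is exactly the verification a TCP-level log cannot support.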
Finding: Retention period not defined
Define a specific retention period in your policy and configure the audit log accordingly. 12 months is standard for SOC2.
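Enforcing the retention period can be as simple as a scheduled job that identifies log files older than the window. A sketch under stated assumptions: logs live as `.jsonl` files in one directory, and file modification time stands in for entry age.

```python
import time
from pathlib import Path

RETENTION_DAYS = 365  # 12 months, matching the policy's stated retention period

def expired_logs(log_dir, now=None):
    """List audit log files whose modification time falls outside the retention window."""
    now = now if now is not None else time.time()
    cutoff = now - RETENTION_DAYS * 86400
    return [p for p in Path(log_dir).glob("*.jsonl") if p.stat().st_mtime < cutoff]
```

Whether expired logs are deleted or archived to cold storage is a policy decision; the audit-relevant point is that the configured behavior matches the written retention period.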
Getting to Audit-Ready
Timeline for a team starting from zero:
- Week 1: Write and approve the AI tool policy
- Week 2: Deploy the Pretense proxy across the engineering team
- Weeks 3-4: Validate audit log completeness and format
- Ongoing: Monthly review of mutation statistics and any detected anomalies
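The validation step in weeks 3-4 is mostly about continuity: a gap in the log is a gap in the control's operating evidence. A minimal sketch of a completeness check, assuming you can extract the set of calendar dates that have at least one log entry:

```python
from datetime import date, timedelta

def missing_days(logged_dates, start, end):
    """Days in [start, end] with no audit log entries -- gaps an auditor will notice."""
    logged = set(logged_dates)
    span = (end - start).days + 1
    return [start + timedelta(days=i) for i in range(span)
            if start + timedelta(days=i) not in logged]

logged = {date(2026, 1, 1), date(2026, 1, 2), date(2026, 1, 4)}
print(missing_days(logged, date(2026, 1, 1), date(2026, 1, 4)))
# [datetime.date(2026, 1, 3)]
```

A day with no entries may be legitimate (a holiday, say), but you want to find and explain those gaps before the auditor does.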
The SOC2 conversation becomes straightforward when you have 90 days of clean audit logs and a policy that matches what the technical control actually enforces.
[Download a SOC2 control documentation template](/trial) or [see how Pretense integrates with your compliance workflow](/use-cases).