Case Study · FinTech · Enterprise · Compliance

Case Study: How a FinTech Team Protected 2.3M Lines of Proprietary Code While Using Claude

A synthetic composite case study showing how a regulated FinTech team deployed Pretense after a CISO stop-work order on AI tools, protected 2.3M proprietary identifiers, and maintained 94% developer productivity.

Note on This Case Study

Meridian Financial is a synthetic composite example created from real deployment patterns observed across Pretense's early customer base. It is not a single real company. Specific numbers have been constructed to reflect realistic outcomes for a FinTech engineering organization of this size.

The Problem

Meridian Financial is a B2B payments platform with 85 engineers and 2.3 million lines of TypeScript and Python code built over six years. Their codebase encodes proprietary payment routing logic, fraud detection heuristics, and settlement timing algorithms that represent the core competitive advantage of the business.

In late 2025, the engineering team began using Claude for code review, test generation, and refactoring. Productivity gains were immediate. Engineers were completing code reviews 40% faster. Test coverage climbed. Junior engineers were shipping production-quality code with significantly less senior oversight.

Then the compliance team ran a standard quarterly audit.

The audit flagged outbound API traffic to Anthropic's endpoints that included identifiers from core payment processing modules. The finding was not a breach. No data was misused. But the identifiers appearing in LLM API calls represented exactly the type of proprietary system detail that Meridian's ISO 27001 program, their enterprise customer contracts, and their pending SOC2 Type II audit all required them to protect.

The CISO issued a stop-work order: no AI coding tools until a solution was in place.

The Failed Alternatives

The engineering leadership evaluated three paths before finding Pretense.

**Option 1: Enforce an AI tools ban**

Within two weeks of the ban, the compliance team discovered that six engineers had created personal Anthropic accounts and were routing work code through the Claude.ai web interface from their personal devices. The ban created shadow AI behavior that was harder to detect and audit than the original API traffic.

**Option 2: Deploy a private LLM**

Meridian's infrastructure team scoped out a self-hosted Claude equivalent. The estimate came back at $45,000 per month for GPU infrastructure that could match API-hosted model quality. That number killed the option in the first meeting. The ROI calculation did not close.

**Option 3: Manual code review gate**

The security team proposed requiring human review of every code snippet before it was sent to any AI tool. Two engineers were assigned to the review queue full time. Within three days the queue was backed up by 72 hours. Engineers stopped using AI tools rather than wait. The productivity gains evaporated, and the security team had created a bottleneck that was not sustainable.

Deploying Pretense

A staff engineer on the platform team found Pretense through a Hacker News post and ran the local setup in under five minutes.

```bash
npm install -g pretense
cd meridian-payments-core
pretense init
pretense start
# Pretense proxy running on localhost:9339
```

The team then set `ANTHROPIC_BASE_URL=http://localhost:9339` in the shared development environment configuration. Every engineer's Claude Code sessions began routing through the local proxy automatically on next login.

No workflow changes were required. Engineers continued using Claude Code exactly as before. The proxy intercepted API calls, mutated proprietary identifiers before they left the network, and reversed the mutations in responses.

```typescript
// What engineers type (real code, never transmitted):
async function routePaymentToSettlement(
  payment: PaymentRecord,
  settlementWindow: SettlementConfig
): Promise<RoutingResult> {
  return FraudGateway.evaluate(payment, settlementWindow);
}

// What Claude receives (Pretense-mutated synthetic identifiers):
async function _fn7c2a(
  _v3d8b: _cls9f1e,
  _v5a4c: _cls2b7d
): Promise<_cls4e6f> {
  return _cls8a3b.evaluate(_v3d8b, _v5a4c);
}
```
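The mechanism behind this before/after pair can be sketched as a deterministic bidirectional map. The following is an illustrative sketch, not Pretense's actual implementation: the hashing scheme and the `MutationMap` class are assumptions, and only the `_fn`/`_v`/`_cls` prefix convention is taken from the example above.

```typescript
import { createHash } from "crypto";

// Hypothetical sketch of a local identifier mutator. Real names map to
// stable synthetic names, and the map never leaves the machine, so LLM
// responses can be reversed back to real names before anyone reads them.
class MutationMap {
  private toSynthetic = new Map<string, string>();
  private toReal = new Map<string, string>();

  mutate(real: string, kind: "fn" | "v" | "cls"): string {
    const existing = this.toSynthetic.get(real);
    if (existing) return existing; // deterministic: same input, same output
    // Assumed scheme: short hash of the real name (collisions ignored here).
    const digest = createHash("sha256").update(real).digest("hex").slice(0, 4);
    const synthetic = `_${kind}${digest}`;
    this.toSynthetic.set(real, synthetic);
    this.toReal.set(synthetic, real);
    return synthetic;
  }

  // Replace every synthetic identifier in an LLM response with its real name.
  reverse(text: string): string {
    return text.replace(/_(?:fn|v|cls)[0-9a-f]{4}/g, (m) => this.toReal.get(m) ?? m);
  }
}

const map = new MutationMap();
const syn = map.mutate("routePaymentToSettlement", "fn");
const roundTrip = map.reverse(`function ${syn}() {}`);
// roundTrip contains the original identifier again
```

Determinism matters here: because the same real name always yields the same synthetic name within a session, the model sees consistent references across a conversation, which is what keeps output quality close to the unprotected baseline.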

The CISO reviewed the architecture in a 30-minute call. The key properties that resolved the compliance concern: mutation happens on-device before any network transmission, the mutation map never leaves Meridian's infrastructure, and every session produces a structured audit log entry.

The stop-work order was lifted within 48 hours of the initial deploy.

The Results

**Zero proprietary identifiers transmitted to Anthropic servers**

Over the first 90 days, Pretense processed 847,000 mutations across Meridian's codebase. Every API call to Anthropic contained only synthetic identifiers. The compliance team ran a packet inspection audit in week 4 and confirmed that no real identifier names appeared in any outbound API traffic.

**94% of developer productivity maintained**

The team tracked story points completed, code review cycle time, and test coverage changes across the 90-day period. Compared to the pre-stop-work AI-assisted baseline, productivity was at 94% after Pretense was deployed. The 6% delta was attributed to occasional mutation overhead on large context windows, which the team addressed by scoping context to relevant modules rather than entire codebases.

**Compliance team formally approved AI tool use**

The audit finding that triggered the stop-work order was resolved. Meridian's compliance lead wrote a formal memo documenting the Pretense architecture and approving AI coding tool use as compliant with their ISO 27001 controls and SOC2 requirements. The memo specifically cited the on-device mutation model and the audit trail format as the two factors that made approval possible.

**Audit trail ready for SOC2 evidence package**

Every Claude Code session produced a structured log entry. At the end of 90 days, the security team had 847,000 log entries documenting exactly which identifiers were protected, which model was used, and what the outcome was. The logs were exported to CSV and submitted as evidence in their SOC2 Type II audit package.
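A log entry of the kind described might look like the following. This is a hypothetical schema for illustration only: the field names and CSV layout are assumptions, not Pretense's documented log format.

```typescript
// Assumed shape of a per-session audit log entry (field names hypothetical).
interface AuditLogEntry {
  sessionId: string;
  timestamp: string;            // ISO 8601
  model: string;                // which model the session used
  identifiersProtected: number; // count of mutated identifiers
  outcome: "mutated" | "blocked" | "passthrough";
}

// Flatten entries into CSV rows for an audit evidence export.
function toCsv(entries: AuditLogEntry[]): string {
  const header = "sessionId,timestamp,model,identifiersProtected,outcome";
  const rows = entries.map((e) =>
    [e.sessionId, e.timestamp, e.model, e.identifiersProtected, e.outcome].join(",")
  );
  return [header, ...rows].join("\n");
}

const csv = toCsv([
  {
    sessionId: "sess-001",
    timestamp: "2026-01-15T09:30:00Z",
    model: "claude-sonnet",
    identifiersProtected: 1240,
    outcome: "mutated",
  },
]);
```

The point of a structured, machine-readable format is that auditors can verify controls were applied per session rather than relying on policy attestations.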

3 Months Later

At the 90-day mark, Meridian's engineering leadership ran a retrospective on the Pretense deployment.

**Zero security incidents** related to AI tool use in the period. The compliance team's initial concern, that proprietary identifiers would appear in third-party infrastructure, had not materialized once. Packet inspection audits run at weeks 4, 8, and 12 all came back clean.

**22% improvement in code quality scores** compared to the pre-AI-tools baseline (not just compared to the ban period). The team attributed this to engineers being more willing to ask Claude for test coverage and edge case analysis when they knew the code was protected. During the ban period, engineers had been reluctant to use AI tools even for non-sensitive work because of ambiguity about what was acceptable.

**CISO presented Pretense at the board meeting**. At Meridian's Q1 board meeting, the CISO presented AI tool adoption as a security innovation rather than a security risk. The framing was that Meridian had solved a problem that most financial services firms had not: enabling the productivity benefits of AI coding tools while maintaining provable compliance with data protection requirements.

The board approved budget for Pretense Enterprise tier and expansion to the full 85-engineer team.

Key Metrics at a Glance

| Metric | Value |
| --- | --- |
| Mutations processed (90 days) | 847,000 |
| Unique identifiers protected | 2,300,000+ |
| Audit log entries | 847,000 |
| False positives (legitimate code blocked) | 0 |
| Secrets blocked | 412 |
| Developer productivity vs. baseline | 94% |
| Time to resolve CISO stop-work order | 48 hours |
| SOC2 audit evidence entries generated | 847,000 |
| Security incidents in 90 days | 0 |

What Made the Difference

Three architectural properties resolved Meridian's compliance concern that no other evaluated option could address:

**Local-first mutation**: The mutation engine runs on the engineer's machine. Nothing is transmitted to Pretense's infrastructure. The third-party risk surface is limited to the LLM provider receiving synthetic (non-proprietary) identifiers.

**Deterministic reversal**: The LLM response comes back with synthetic identifiers that Pretense reverses exactly, so engineers get working code with real names. There is no degradation of output utility compared to unprotected API calls.

**Structured audit trail**: The compliance team needs to prove that controls were in place, not just that the team followed a policy. Pretense's per-session audit log is machine-readable evidence that specific identifiers were protected in specific sessions at specific times.

For regulated industries where AI tool adoption is blocked by compliance concerns rather than security concerns, these three properties are the decision factors.

[Book a demo for your team](/demo)
