"I started with compliance and outcome like simultaneously."
Chuck was explaining how he built his mortgage AI product. In financial services, you can't ship first and figure out compliance later. The regulators will find you.
So he inverted the typical startup approach. Instead of building features and then checking if they're allowed, he built the compliance framework first—then built features that fit inside it.
The Compliance Problem
AI in regulated industries faces a fundamental challenge: AI is unpredictable by nature, and regulators want predictability.
When a loan officer talks to a customer, they're trained on exactly what they can and can't say. There are scripts. There are disclosures. There are prohibited phrases. The compliance department reviews calls and catches violations.
When an AI talks to a customer, who's responsible? What if the AI says something wrong? What if it makes promises the company can't keep? What if it violates fair lending laws by treating customers differently?
These aren't hypothetical concerns. Regulators are already paying attention. The CFPB has issued guidance on AI in financial services. State banking departments are asking questions. Lawsuits are being filed.
Companies that deploy AI without thinking about compliance are building on sand.
The "Bolt It On" Failure
The typical approach to compliance in tech is: build the product, then hand it to legal/compliance, then fix whatever they flag.
This doesn't work for AI in regulated industries. Here's why:
The AI isn't a static feature you can review once. It generates different outputs for different inputs. Reviewing one conversation doesn't tell you what it will say in the next conversation.
Compliance isn't just about what the AI says. It's about what data it collects, how it stores it, who can access it, how long it's retained. These are architectural decisions that are expensive to change later.
Regulations vary by state. A mortgage AI that's compliant in California might violate Tennessee rules. You need to build multi-state compliance into the foundation.
If you try to bolt compliance onto an already-built AI, you end up with:
- Expensive rewrites when compliance issues are discovered
- Awkward workarounds that make the product worse
- Ongoing whack-a-mole as new issues surface
- Legal exposure from the period before you fixed things
Compliance as Architecture
Chuck's approach: build the compliance framework first, then build features inside it.
"Think about it as a house. The foundation, the plumbing, the water supply at the property line—that is the compliance layer."
What this means in practice:
Knowledge base structure: Every piece of information the AI can share is tagged with its compliance status. Can this be said to any customer? Only in certain states? Only with specific disclosures? The AI doesn't improvise—it draws from approved content.
Guardrails as defaults: The AI is configured to refuse to make predictions, promises, or recommendations. "You'll need to discuss that with your loan officer" isn't a cop-out—it's compliant behavior.
State-specific rules: Regulations differ by state. A 50-state compliant system needs to know which rules apply to which customer and behave accordingly. This is baked in, not patched on.
Audit trails: Every conversation is logged. Not just the words, but the decisions the AI made—what information it accessed, what rules it applied, what it chose not to say. If a regulator asks "why did the AI say this?", you can answer.
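A per-turn audit record along these lines might look like the following sketch; the field names are assumptions, not a standard, but the point is that the record captures decisions, not just words.

```python
import datetime
import json

# Hypothetical audit record: logs not just the AI's words but the decisions
# behind them, so "why did the AI say this?" has a concrete answer.
def audit_record(conversation_id, customer_state, user_msg, reply,
                 sources_used, rules_applied, withheld):
    return {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "conversation_id": conversation_id,
        "customer_state": customer_state,
        "user_message": user_msg,
        "reply": reply,
        "sources_used": sources_used,    # which approved KB entries were cited
        "rules_applied": rules_applied,  # e.g. ["no-rate-predictions", "state:TN"]
        "withheld": withheld,            # content the AI chose not to share
    }

record = audit_record(
    conversation_id="c-1042",
    customer_state="TN",
    user_msg="What rate can I get?",
    reply="Your loan officer will review your specific situation.",
    sources_used=[],
    rules_applied=["no-rate-predictions"],
    withheld=["promotional rate content (CA-only)"],
)
print(json.dumps(record, indent=2))
```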
The Customer Experience Challenge
Building compliance-first doesn't mean building compliance-only. The AI still needs to be useful.
The danger is creating something so locked down that it can't help anyone. "I'm sorry, I can't discuss that" to every question isn't compliance—it's uselessness.
Chuck's balance: the AI educates and informs, but doesn't advise.
Customer: "Should I get an FHA loan or conventional?"

Bad AI response: "FHA is better for first-time buyers with low credit." (Making a recommendation without knowing the customer's situation—compliance violation.)

Good AI response: "FHA loans allow lower down payments and are insured by the government. Conventional loans can have similar down payments but may include mortgage insurance. Your loan officer will review your specific situation and help you understand which fits better."
The customer learned something. The AI didn't give advice. The customer is set up for a productive conversation with the loan officer.
This is compliance and good customer experience. They don't have to be in conflict.
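The educate-don't-advise pattern can be sketched as a guardrail in front of the response: flag advice-seeking phrasing and answer with facts plus a referral. A production system would use a trained classifier; the keyword matching and `ADVICE_MARKERS` list here are stand-ins.

```python
# Hypothetical guardrail sketch: detect advice-seeking phrasing and respond
# with education plus a referral, never a recommendation.
ADVICE_MARKERS = ("should i", "which is better", "do you recommend",
                  "what would you")

def answer(question, educational_facts):
    """Return approved educational content; append a loan-officer referral
    whenever the question asks for a recommendation."""
    facts = " ".join(educational_facts)
    if any(m in question.lower() for m in ADVICE_MARKERS):
        return (f"{facts} Your loan officer will review your specific "
                "situation and help you understand which fits better.")
    return facts

print(answer(
    "Should I get an FHA loan or conventional?",
    ["FHA loans allow lower down payments and are insured by the government.",
     "Conventional loans can have similar down payments but may include "
     "mortgage insurance."],
))
```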
The Speed Advantage
Here's the counterintuitive benefit of building compliance-first: it actually makes iteration faster.
When compliance is a foundation rather than a gate, you don't have to stop and get approval for every change. The architecture ensures changes stay within bounds.
Chuck: "If you want to add that layer, give me 45 minutes, I'll be right back."
He can add new loan types, new state coverage, new conversation flows—all within the same compliance framework. The guardrails are already in place. He's not waiting for legal review on every update.
Compare that to a company that bolted compliance on: every change requires a new review, because there's no framework ensuring the change is compliant by design.
The Regulatory Future
Regulation of AI in financial services is coming. It's not a question of if, but when and how strict.
The companies that will survive that regulatory wave are the ones that built compliance into their foundation. They'll be able to demonstrate:
- Their AI operates within clear guardrails
- They can audit any decision the AI made
- They've thought about fair lending, privacy, and consumer protection
- They're not just fast—they're responsible
The companies that didn't think about compliance will scramble. Some will get enforcement actions. Some will shut down. Some will spend months or years rebuilding.
Beyond Financial Services
This principle applies to any regulated industry:
- Healthcare: HIPAA, patient privacy, medical advice regulations
- Legal: Unauthorized practice of law, confidentiality
- Insurance: State regulations, claims handling requirements
- Education: FERPA, student privacy
- Employment: Discrimination laws, hiring compliance
If AI touches regulated activities, compliance can't be an afterthought.
Building It Right
If you're building AI for a regulated industry, here's the framework:
- Start with regulations. Before writing code, understand every regulation that applies. Federal, state, industry-specific. Build a compliance map.
- Design architecture around compliance. Your technical decisions should make compliance easier, not harder. Logging, access controls, content management—design these with auditors in mind.
- Build guardrails as features. The AI's limitations aren't bugs—they're features. "The AI is unable to provide specific advice" is compliance working correctly.
- Test compliance, not just functionality. Your QA should include compliance scenarios. "What happens if a customer asks for advice?" "What happens in a protected class discrimination scenario?"
- Document everything. Regulators will ask why your AI does what it does. Have answers ready. Not just "the AI decided"—but "here's the architecture that ensures compliant behavior."
- Stay current. Regulations change. New guidance gets issued. Build processes to update your compliance layer as rules evolve.
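The "test compliance, not just functionality" step can be sketched as a small scenario suite. Here `respond` is a hypothetical stand-in for the real pipeline, and the forbidden phrases are illustrative; the point is that QA asserts on what the AI refuses to say.

```python
# Hypothetical compliance test sketch: scenarios check what the AI must NOT
# say, not just what it can do. `respond` stands in for the real pipeline.
FORBIDDEN = ("you will be approved", "i recommend", "guaranteed rate")

def respond(message):
    # Stub pipeline: deflects advice and predictions to a loan officer.
    return ("I can explain how these loan types work, but your loan officer "
            "will advise on your specific situation.")

def check_no_advice(message):
    """Pass if the reply contains no forbidden advice/promise phrasing."""
    reply = respond(message).lower()
    return not any(phrase in reply for phrase in FORBIDDEN)

scenarios = [
    "Should I get an FHA loan?",
    "Will I be approved with a 580 credit score?",
    "Can you guarantee this rate?",
]
assert all(check_no_advice(s) for s in scenarios)
print("all compliance scenarios passed")
```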
The Bottom Line
Compliance isn't a feature. It's a foundation.
Companies that build AI for regulated industries without compliance at the core are taking on risk they may not fully understand. When enforcement actions come—and they will—the companies that built right will keep operating. The ones that didn't will have expensive lessons to learn.
Start with compliance. Build inside the guardrails. Move fast within the rules.
That's not a limitation. It's a competitive advantage.
References & Further Reading
- CFPB Guidance on AI in Financial Services — Consumer Financial Protection Bureau guidance on AI compliance requirements
- AI Compliance in Regulated Industries — McKinsey analysis of regulatory considerations for AI deployment
- Building Trustworthy AI Systems — NIST AI Risk Management Framework for building compliant AI systems