A hospital deploys an AI system to help triage patient inquiries. A bank uses AI to flag suspicious transactions. A law firm implements AI for contract review. Each of these is a high-value use case with proven ROI -- and each carries compliance risks that can result in multimillion-dollar fines, loss of professional licenses, or irreparable harm to patients and clients.
Regulated industries are adopting AI at an accelerating rate. McKinsey's 2025 Global AI Survey found that 67% of healthcare organizations, 74% of financial institutions, and 52% of law firms have deployed AI in at least one production use case. But adoption has outpaced compliance preparation. Only 31% of these organizations reported having a formal AI compliance framework in place.
This guide provides sector-specific compliance frameworks for deploying AI in healthcare, financial services, and legal environments. It's written for the compliance officers, CTOs, and business leaders who need to deploy AI without running afoul of regulators.
The Cross-Industry Compliance Foundation
Before diving into sector-specific requirements, it's worth establishing three compliance principles that apply to AI in every regulated industry:
Principle 1: Explainability
Regulators in every sector increasingly require that AI-driven decisions be explainable. This doesn't mean you need to explain how a transformer architecture works. It means you need to be able to explain, in plain language, why the AI system produced a specific output or recommendation for a specific input.
For practical purposes, this means:
- Logging all inputs, outputs, and any intermediate reasoning.
- Maintaining model documentation that describes training data, known limitations, and performance characteristics.
- Being able to produce a human-readable explanation for any individual decision.
- Avoiding "black box" models for high-stakes decisions, or wrapping them with explainability layers.
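The logging requirement above is the foundation everything else builds on. A minimal sketch of an append-only interaction log might look like the following; the field names and JSONL format are illustrative, and you should align both with your own retention and record-keeping policies:

```python
import json
import uuid
from datetime import datetime, timezone

def log_ai_interaction(log_path, user_id, model_id, prompt, response,
                       confidence=None, context_docs=None):
    """Append one AI interaction to an append-only JSONL audit log.

    The schema here is a hypothetical example: capture who asked,
    which model answered, the full input and output, and any sources
    the model relied on, so an individual decision can be explained later.
    """
    record = {
        "interaction_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "model_id": model_id,          # model name + version, for reproducibility
        "prompt": prompt,
        "response": response,
        "confidence": confidence,      # if the system exposes one
        "context_docs": context_docs,  # sources consulted, for traceability
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record["interaction_id"]
```

Writing one JSON object per line keeps the log append-only and easy to ship to whatever audit store your organization already uses.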
Principle 2: Human Oversight
No regulated industry allows fully autonomous AI decision-making for consequential decisions. Every sector requires some form of human-in-the-loop:
- **Healthcare:** AI can suggest diagnoses, but a licensed clinician must make the final determination.
- **Finance:** AI can flag suspicious activity, but a compliance officer must make the SAR filing decision.
- **Legal:** AI can draft documents, but an attorney must review and take responsibility for the final product.
Your AI system must be designed to facilitate human oversight, not just allow it. This means clear presentation of AI confidence levels, easy access to the underlying data, and workflow designs that prevent rubber-stamping.
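One way to make oversight structural rather than optional is to refuse to finalize any AI recommendation without an explicit, attributed human decision. The sketch below is one hypothetical design (the action names and the written-rationale requirement are assumptions, not a regulatory mandate), but it illustrates the anti-rubber-stamping idea:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ReviewDecision:
    decided_by: str
    action: str       # "approve", "override", or "escalate"
    rationale: str
    decided_at: str

def finalize_recommendation(ai_output, confidence, reviewer, action, rationale):
    """Require an explicit, attributed human decision before an AI
    recommendation becomes final. Actions and thresholds are illustrative."""
    allowed = {"approve", "escalate", "override"}
    if action not in allowed:
        raise ValueError(f"action must be one of {sorted(allowed)}")
    if not rationale.strip():
        # An empty rationale is a rubber stamp; refuse to accept it.
        raise ValueError("a written rationale is required")
    decision = ReviewDecision(
        decided_by=reviewer,
        action=action,
        rationale=rationale,
        decided_at=datetime.now(timezone.utc).isoformat(),
    )
    # Surface the AI's confidence alongside the human decision in the record.
    return {"ai_output": ai_output, "confidence": confidence,
            "decision": decision}
```

Because the function raises rather than defaulting, a reviewer cannot silently approve without leaving a documented trail.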
Principle 3: Data Governance
Regulated data requires regulated handling at every point in the AI pipeline:
- Data used for training and fine-tuning must comply with the same regulations that govern the original data.
- Model inputs (prompts) that contain regulated data must be handled with appropriate controls.
- Model outputs that contain or are derived from regulated data are themselves regulated.
- Logs and audit trails of AI interactions are regulated artifacts that must be retained and protected.
For a comprehensive overview of data privacy considerations, see our guide on [data privacy in AI applications](/blog/data-privacy-ai-applications).
Healthcare: HIPAA, FDA, and Clinical AI
The Regulatory Landscape
Healthcare AI operates under multiple overlapping regulatory frameworks:
**HIPAA (Health Insurance Portability and Accountability Act)** governs the handling of Protected Health Information (PHI). Any AI system that processes PHI must comply with HIPAA's Privacy Rule, Security Rule, and Breach Notification Rule.
**FDA regulation** applies to AI systems that qualify as medical devices. The FDA's 2024 guidance on "Artificial Intelligence and Machine Learning in Software as a Medical Device" established a framework for regulating AI-based clinical decision support systems. If your AI provides recommendations that clinicians may not independently verify, it likely falls under FDA oversight.
**The 21st Century Cures Act** governs health IT interoperability and information blocking, which affects how AI systems access and share health data.
**State regulations** add additional requirements. For example, California's Confidentiality of Medical Information Act (CMIA) is stricter than HIPAA in several respects.
HIPAA Compliance for AI Systems
HIPAA compliance for AI involves several specific requirements:
**Business Associate Agreements (BAAs).** Any AI model provider that processes PHI on your behalf is a business associate and must sign a BAA. This includes your model hosting provider, your AI platform provider, and any analytics services that receive PHI. Not all AI model providers will sign BAAs -- this significantly limits your provider choices. As of early 2026, Microsoft Azure OpenAI, Google Cloud Vertex AI, and AWS Bedrock have BAA-eligible configurations. Consumer-facing APIs from OpenAI and Anthropic generally do not support BAAs for direct PHI processing.
**Minimum necessary standard.** Your AI system should only process the minimum PHI necessary for its function. A triage chatbot doesn't need a patient's complete medical history -- it needs the current symptoms. Implement data minimization in your prompt construction pipeline.
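One simple way to enforce the minimum necessary standard in code is a per-use-case field allowlist applied before any prompt is assembled. The field names below are hypothetical; the pattern is what matters:

```python
# Allowlist of fields each AI use case may see; anything not listed is
# dropped before prompt construction. Field names are illustrative.
FIELD_ALLOWLIST = {
    "triage_chatbot": {"current_symptoms", "symptom_onset", "age_range"},
    "discharge_summary": {"admission_reason", "procedures", "medications"},
}

def minimize_for_use_case(patient_record: dict, use_case: str) -> dict:
    """Return only the fields this use case is authorized to process."""
    allowed = FIELD_ALLOWLIST.get(use_case)
    if allowed is None:
        # Fail closed: an unknown use case gets no PHI at all.
        raise ValueError(f"no allowlist defined for use case: {use_case}")
    return {k: v for k, v in patient_record.items() if k in allowed}
```

An allowlist fails closed: new fields added to the patient record are excluded by default until someone deliberately authorizes them for a use case.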
**Access controls.** AI system access must follow HIPAA's access control requirements (45 CFR 164.312(a)). This means unique user identification, emergency access procedures, automatic logoff, and encryption. Your AI platform's access control system must integrate with your healthcare organization's identity management -- see our guide on [enterprise SSO AI integration](/blog/enterprise-sso-ai-integration) for implementation strategies.
**Audit controls.** HIPAA requires audit trails for all systems that access PHI (45 CFR 164.312(b)). For AI systems, this means logging every interaction that involves PHI, including the prompt, the response, who initiated the interaction, and when. These logs must be retained for at least six years.
**De-identification.** When possible, de-identify PHI before AI processing. HIPAA defines two de-identification methods: Expert Determination (a statistician certifies that re-identification risk is very small) and Safe Harbor (removal of 18 specified identifiers). De-identified data is not PHI and not subject to HIPAA -- which dramatically simplifies your AI compliance.
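As a narrow illustration of the Safe Harbor direction, pattern-based redaction can scrub some identifier formats before text reaches a model. To be clear about the limits: real Safe Harbor de-identification requires removing all 18 identifier categories (names, geography smaller than state, most date elements, device identifiers, photographs, and more), and regexes alone do not get you there; treat this as a first-pass layer in front of purpose-built de-identification tooling and human review:

```python
import re

# Partial, illustrative patterns only -- NOT sufficient for Safe Harbor
# on their own. They catch a few well-structured identifier formats.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def redact_identifiers(text: str) -> str:
    """Replace recognized identifier patterns with labeled placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```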
FDA Considerations
If your AI system provides clinical decision support, determine whether it falls under FDA oversight:
The FDA **does not** regulate Clinical Decision Support (CDS) software that:
- Is not intended to acquire, process, or analyze a medical image or signal.
- Displays or makes available the underlying data for independent clinician review.
- Is intended for qualified clinicians who independently review the basis for the recommendation.
- Does not replace clinical judgment.
The FDA **does** regulate CDS that:
- Provides time-critical diagnostic information where independent clinician review is impractical.
- Analyzes medical images or pathology slides.
- Processes physiological signals (e.g., ECG, EEG) for diagnostic purposes.
- Provides recommendations that a clinician cannot independently verify.
If your AI falls under FDA regulation, you'll need to follow the Software as a Medical Device (SaMD) pathway, which involves pre-market notification (510(k)) or pre-market approval (PMA) depending on risk classification.
Financial Services: SOX, FINRA, OCC, and Beyond
The Regulatory Landscape
Financial services AI faces the most complex regulatory environment of any sector:
**SOX (Sarbanes-Oxley Act)** requires internal controls over financial reporting. AI systems that produce, analyze, or validate financial data fall under SOX Section 404 internal control requirements.
**FINRA rules** govern broker-dealer communications and require that AI-generated communications with customers be fair, balanced, and not misleading. FINRA Rule 2210 applies to AI-generated marketing content, chatbot interactions, and automated advisor communications.
**OCC Guidance** (Office of the Comptroller of the Currency) requires national banks to manage model risk for all models, including AI/ML models, per OCC Bulletin 2011-12, the OCC's issuance of the interagency Supervisory Guidance on Model Risk Management (published by the Federal Reserve as SR 11-7).
**The SEC** has proposed rules specifically addressing AI use by investment advisers and broker-dealers, focusing on conflicts of interest arising from predictive data analytics.
**The CFPB** (Consumer Financial Protection Bureau) requires that adverse action notices (credit denials, rate increases) provide specific, accurate reasons -- which is challenging when an AI model contributed to the decision.
**The EU AI Act** classifies financial AI systems (credit scoring, insurance pricing, fraud detection) as high-risk, imposing requirements for transparency, data quality, human oversight, and documentation.
Model Risk Management (MRM) for AI
OCC Bulletin 2011-12 requires financial institutions to maintain a Model Risk Management framework. For AI models, this means:
**Model validation.** Independent teams must validate AI models before deployment and periodically thereafter. Validation includes:
- Assessing the model's conceptual soundness.
- Testing the model against out-of-sample data and stress scenarios.
- Evaluating the model's performance across different demographic groups for fair lending compliance.
- Benchmarking against alternative approaches.
**Ongoing monitoring.** Production AI models must be continuously monitored for:
- Performance degradation (model drift).
- Bias amplification over time.
- Unexpected behavior on new data distributions.
- Accuracy against ground truth when available.
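A common drift check behind the first monitoring item is the Population Stability Index (PSI), which compares a production score distribution against the validation baseline. The sketch below uses simple equal-width bins; binning strategy and thresholds vary by institution, so treat the specifics as assumptions:

```python
import math

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline (expected) and production (actual) score
    distribution. A common rule of thumb: < 0.1 stable, 0.1-0.25 moderate
    shift, > 0.25 significant drift warranting model review.
    """
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def frac(data, i):
        left, right = lo + i * width, lo + (i + 1) * width
        count = sum(1 for x in data
                    if left <= x < right or (i == bins - 1 and x == hi))
        return max(count / len(data), 1e-6)  # floor to avoid log(0)

    return sum(
        (frac(actual, i) - frac(expected, i))
        * math.log(frac(actual, i) / frac(expected, i))
        for i in range(bins)
    )
```

Each term is non-negative (the difference and the log ratio always share a sign), so PSI only grows as the two distributions diverge.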
**Model inventory.** Maintain a comprehensive inventory of all AI models in production, including their purpose, inputs, outputs, validation status, and risk tier.
**Change management.** Any changes to AI models -- retraining, fine-tuning, prompt engineering changes, provider switches -- must go through formal change management processes with appropriate documentation and approval.
Fair Lending and AI Bias
Financial AI systems must comply with fair lending laws (Equal Credit Opportunity Act, Fair Housing Act) and cannot discriminate based on protected characteristics. This is more nuanced than simply removing protected variables from the model:
- **Proxy discrimination.** Even if you don't feed race or gender into your model, other variables (zip code, occupation, education) can serve as proxies. Regulators evaluate disparate impact, not just disparate treatment.
- **Testing requirements.** Conduct regular adverse impact testing across protected classes. If your AI model produces significantly different outcomes for different demographic groups, you need to demonstrate business necessity and that no less discriminatory alternative exists.
- **Explainability for adverse actions.** When an AI contributes to an adverse decision (loan denial, rate increase), the CFPB requires that the reasons provided to the consumer accurately reflect the factors that actually influenced the decision. "The AI model denied you" is not an acceptable reason.
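A basic screening metric for the adverse impact testing described above is the selection-rate ratio across groups, often compared against the EEOC's four-fifths rule of thumb. The sketch below is a screening heuristic only, not a legal safe harbor, and real fair lending analysis goes well beyond it:

```python
def disparate_impact_ratio(outcomes, groups, favorable=1):
    """For each group, the ratio of its favorable-outcome rate to the
    most-favored group's rate. Ratios below 0.8 are commonly flagged
    for further review under the four-fifths rule of thumb."""
    rates = {}
    for g in set(groups):
        members = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(1 for o in members if o == favorable) / len(members)
    top = max(rates.values())
    return {g: (r / top if top else 0.0) for g, r in rates.items()}
```

A flagged ratio is the start of the analysis, not the end: the next questions are business necessity and whether a less discriminatory alternative exists.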
Algorithmic Trading and AI
AI systems used in trading face additional requirements:
- **SEC Rule 15c3-5** requires risk controls for market access, including pre-trade risk checks.
- **FINRA Rule 3110** requires supervision of all trading activities, including those conducted by AI systems.
- **The Dodd-Frank Act** imposes reporting requirements on derivatives trading, including AI-driven trades.
- Kill switches and human override capabilities are mandatory for any AI system that can execute trades.
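The last requirement can be made concrete with a gate that every AI-generated order must pass. The sketch below is a simplified illustration (the single per-order value limit is an assumed stand-in for a full pre-trade risk check suite): once the switch trips, whether by human action or an automated check, nothing trades until a human intervenes.

```python
import threading

class TradingKillSwitch:
    """Global halt for AI-driven order flow. A human operator or an
    automated risk check can trip it; no order passes once tripped."""

    def __init__(self, max_order_value):
        self._halted = threading.Event()   # thread-safe halt flag
        self.max_order_value = max_order_value
        self.reason = None

    def trip(self, reason):
        """Halt all AI trading. Callable by humans or automated checks."""
        self.reason = reason
        self._halted.set()

    def pre_trade_check(self, order_value):
        """Return (allowed, message) for a proposed order."""
        if self._halted.is_set():
            return False, f"halted: {self.reason}"
        if order_value > self.max_order_value:
            # Illustrative automated pre-trade risk check; tripping the
            # switch here fails closed for all subsequent orders too.
            self.trip(f"order value {order_value} exceeds per-order limit")
            return False, "order exceeds per-order limit; trading halted"
        return True, "ok"
```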
Legal Industry: Ethics Rules, Privilege, and Competence
The Regulatory Landscape
The legal industry is regulated primarily through state bar associations and the American Bar Association (ABA) Model Rules of Professional Conduct. AI compliance in legal contexts centers on three core obligations:
**Competence (Model Rule 1.1).** Attorneys must provide competent representation, which includes understanding the technology they use. The ABA's 2024 Formal Opinion on AI in Legal Practice clarified that attorneys must understand AI's capabilities and limitations, including its tendency to generate plausible but incorrect information ("hallucinations").
**Confidentiality (Model Rule 1.6).** Attorneys must protect client confidences. Sending client information to an AI model provider raises serious confidentiality concerns -- you are potentially sharing privileged information with a third party.
**Supervision (Model Rules 5.1 and 5.3).** Partners and supervising attorneys are responsible for ensuring that AI tools are used appropriately by associates, paralegals, and staff.
Protecting Attorney-Client Privilege
Attorney-client privilege is the most critical concern for AI in legal contexts:
**The third-party disclosure problem.** Attorney-client privilege can be waived if the communication is disclosed to a third party. When you send a privileged document to an AI model provider for analysis, you are arguably disclosing it to a third party. Legal scholars and ethics boards are split on whether AI providers qualify as agents of the attorney (which would preserve privilege) or independent third parties (which would waive it).
**Practical safeguards:**
- Use AI providers that contractually agree to confidentiality terms consistent with legal ethics obligations.
- Use providers that do not train on customer data and can demonstrate data isolation.
- Consider self-hosted models for the most sensitive work.
- Maintain detailed logs of what information was sent to AI systems, in case privilege issues arise.
- Some jurisdictions now recognize AI processing as analogous to other technology service providers (e-discovery, cloud storage) that don't waive privilege if appropriate protections are in place.
**Court disclosure requirements.** Following the widely publicized Mata v. Avianca incident in 2023, many federal and state courts now require attorneys to disclose the use of AI in court filings. As of early 2026, over 30 federal courts have standing orders or local rules addressing AI disclosure. Non-disclosure can result in sanctions.
AI Hallucination and Professional Responsibility
AI hallucination -- generating plausible but fabricated information -- is the most acute risk for legal AI:
- **Fabricated case citations.** AI models can generate realistic-looking case citations that refer to cases that don't exist. Multiple attorneys have been sanctioned for filing briefs containing AI-generated fake citations.
- **Incorrect legal analysis.** AI may state legal principles that sound correct but misstate the law, especially for nuanced or jurisdiction-specific issues.
- **Outdated information.** AI models have training data cutoffs and may provide advice based on superseded law.
**Mitigation strategies:**
- Never rely on AI-generated legal citations without independent verification.
- Implement validation workflows that require attorneys to verify every factual claim and legal citation.
- Use AI systems that provide source attribution, allowing attorneys to trace conclusions back to specific documents.
- Treat AI output as a first draft that requires the same level of review as work from a junior associate.
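The verification workflow described above can be partly automated on the intake side: extract every citation-like string from an AI draft and route it to an attorney for checking against a legal research service. The regex below is deliberately simplified (it recognizes only a few common U.S. reporter formats), so treat it as a first-pass filter, not a citation parser:

```python
import re

# Simplified pattern for a few U.S. reporter citation formats,
# e.g. "410 U.S. 113" or "550 F.3d 1023". Real citation formats are far
# more varied; this is an illustrative first-pass filter only.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|F\.(?:2d|3d|4th)?|S\. Ct\.)\s+\d{1,4}\b"
)

def extract_citations_for_review(draft_text):
    """Return a deduplicated list of citation-like strings in an AI draft,
    each of which must be independently verified before filing."""
    return sorted(set(CITATION_RE.findall(draft_text)))
```

The point of the filter is coverage, not precision: a false positive costs an attorney a moment of review, while a missed fabricated citation can cost a sanction.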
Billing and AI
The use of AI in legal work raises billing ethics questions:
- If AI completes a task in 2 minutes that would have taken a paralegal 2 hours, what is the appropriate charge?
- ABA Formal Ethics Opinion 93-379 requires that fees be reasonable. Charging a client for 2 hours of work when AI did it in 2 minutes raises reasonableness concerns.
- Emerging best practice is to charge based on the value of the work product rather than the time AI took to produce it, with disclosure to clients about AI use.
- Some firms now include AI use policies in their engagement letters, providing transparency about how and when AI tools are used.
Building a Cross-Industry AI Compliance Framework
Regardless of your specific industry, follow this framework to build AI compliance:
Step 1: Regulatory Mapping
Document every regulation that applies to your AI use cases. For each regulation, identify:
- Which AI use cases are affected.
- What specific requirements apply (data handling, explainability, human oversight, documentation).
- What enforcement mechanisms exist (fines, sanctions, license revocation).
- What the regulatory trajectory looks like (are requirements likely to tighten?).
Step 2: Risk Assessment
For each AI use case, assess:
- What is the worst-case outcome if the AI fails or produces incorrect output?
- What data does the AI process, and what regulations govern that data?
- What decisions does the AI influence, and are those decisions reviewable?
- What is the blast radius of a compliance failure?
Step 3: Control Implementation
Based on your risk assessment, implement controls proportional to the risk:
| Risk Level | Controls |
|------------|----------|
| Low | Logging, periodic review, user training |
| Medium | Human-in-the-loop, output validation, regular audits |
| High | Independent model validation, real-time monitoring, regulatory pre-approval |
| Critical | Formal MRM framework, continuous validation, board-level oversight |
Step 4: Documentation and Audit Trail
Maintain comprehensive documentation:
- Model cards describing each AI system's purpose, capabilities, limitations, and validation results.
- Data lineage records showing where training and inference data comes from and how it's handled.
- Decision logs for every consequential AI-assisted decision.
- Compliance test results from regular adversarial testing and bias assessments.
- Change management records for all AI system modifications.
Step 5: Continuous Monitoring and Improvement
Compliance is not a one-time achievement:
- Monitor AI systems continuously for drift, bias, and performance degradation.
- Stay current with regulatory changes -- the AI regulatory landscape is evolving rapidly.
- Conduct periodic compliance reviews (at minimum annually, more frequently for high-risk systems).
- Engage with regulators proactively -- many agencies welcome dialogue about AI compliance approaches.
For a broader perspective on evaluating AI platforms with compliance in mind, consult our [AI vendor evaluation checklist](/blog/ai-vendor-evaluation-checklist).
Girard AI and Regulated Industry Compliance
Girard AI is built for regulated industries. Our platform provides comprehensive audit logging, configurable data handling policies, human-in-the-loop workflow enforcement, and model explainability features designed to meet the compliance requirements of healthcare, financial services, and legal organizations.
We understand that compliance is a prerequisite, not a feature. Our enterprise team works directly with compliance officers and legal counsel to ensure that our platform meets your specific regulatory requirements.
[Contact our enterprise team](/contact-sales) to discuss your regulated industry compliance needs, or [sign up](/sign-up) to explore Girard AI's compliance capabilities in a sandbox environment.