In September 2025, the National Credit Union Administration (NCUA) published its Artificial Intelligence Compliance Plan, describing how the agency will govern its own use of AI. The plan is driven by the AI in Government Act of 2020 and OMB Memorandum M-25-21.
Even though the plan is written for NCUA’s internal AI adoption, it is highly relevant for credit union IT operations and NCUA cybersecurity requirements. It reveals the control mindset regulators are standardizing around AI: inventory everything, classify risk, apply minimum practices for high-impact use, and be able to shut non-compliant systems down quickly.
Source: https://ncua.gov/ai/ncua-artificial-intelligence-compliance-plan
What the NCUA says it is doing, and why it matters for credit union IT compliance
1) Treating AI as enterprise risk, not an IT experiment
NCUA frames AI adoption as methodical and mission-driven, focusing on use cases that increase efficiency while accounting for the agency’s capacity to sustain the technology.
What this signals for IT support for credit unions: If AI is in your environment, whether in a chatbot, fraud tooling, contact center, underwriting, collections, or a vendor platform, expect the conversation to land in governance and risk management, not just “features.” This becomes part of your broader NCUA IT compliance framework.
2) Building an inventory and formal governance
NCUA emphasizes centralized documentation through an AI Use Case Inventory. It also describes formal review of current and proposed AI applications for security, privacy, and technical considerations, with involvement from executive and risk leaders.
What this signals for credit union IT teams: If you cannot answer “Where are we using AI?”, you are already behind. AI shows up in places organizations do not think of as AI, including productivity copilots, CRM automation, voice analytics, and embedded vendor capabilities. A comprehensive security risk assessment should now include AI system mapping.
3) Applying stricter controls for high-impact AI
NCUA states it will identify presumed high-impact AI use cases using the definition in OMB M-25-21 and apply the memorandum’s minimum risk management practices. It also describes a formal waiver process for exceptions.
What this signals for credit unions: Any AI that could materially affect members should be treated as high impact. This includes credit decisions, pricing, fraud actions, identity verification, collections outcomes, employee monitoring, and anything that impacts access and fairness. These should be included in your cybersecurity assessment process.
4) Aligning to NIST and security and privacy baselines
The plan describes aligning risk practices with NIST SP 800-53 Rev. 5 security and privacy safeguards. It also describes access controls and approval processes to prevent non-compliant, high-impact AI from being deployed.
What this signals for credit unions: You do not need a brand-new AI compliance universe. You need to extend your existing NIST cybersecurity framework and your security, privacy, vendor management, and risk programs to cover AI-specific risks such as data leakage, model behavior, bias and fairness, explainability, and operational resiliency. Organizations already following NIST 800-53 controls have a head start.
5) Planning for fast termination of non-compliant AI
NCUA explicitly states that AI systems failing operational, ethical, or regulatory requirements will be terminated. It outlines steps such as restricting access, isolating or shutting down, securing data and assets, documenting rationale, and notifying stakeholders.
What this signals for credit unions: You should have a documented IT incident response plan for AI-enabled systems, including vendor exit and containment steps. If AI becomes problematic, you need to be able to stop harm quickly and prove what you did through proper incident response procedures.
The credit union lens: AI risk is bigger than model risk
NCUA also maintains an AI resource hub for credit unions and notes that AI adoption introduces distinct challenges beyond traditional third-party vendor management. It highlights concerns such as algorithmic decision making, fair lending compliance, member data privacy, operational resilience, and model risk.
This is a strong hint at the shape of future examinations. AI will be evaluated through multiple lenses at once, including IT, security, compliance, enterprise risk, third-party oversight, and member impact, which makes managed cybersecurity services and ongoing security monitoring increasingly critical.
Reference: https://ncua.gov/regulation-supervision/regulatory-compliance-resources/artificial-intelligence-ai
A practical NCUA-aligned AI compliance checklist for credit unions
1) Build your AI use case inventory (start simple)
Create one list that answers:
- Where is AI used today, including inside vendor products?
- What data does it touch, such as member PII, financial data, voice recordings, or employee data?
- What decisions does it influence, such as recommendations versus approvals, declines, or actions?
- Who owns it, including a business owner and an IT or security owner?
This inventory becomes a foundational element of your security assessment program and should be reviewed as part of regular vulnerability management cycles.
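The four inventory questions above map naturally onto a simple record structure. The sketch below is illustrative only: the field names, example entry, and ownership check are assumptions for this post, not an NCUA-mandated format.

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    """One row in the AI use case inventory; fields mirror the four questions above."""
    name: str
    location: str            # system or vendor product where the AI runs
    data_touched: list[str]  # e.g., member PII, financial data, voice recordings
    decisions: str           # recommendation vs. approval, decline, or action
    business_owner: str
    it_owner: str

inventory = [
    AIUseCase(
        name="Contact center voice analytics",
        location="Vendor platform (embedded feature)",
        data_touched=["voice recordings", "member PII"],
        decisions="recommendation only",
        business_owner="Member Services",
        it_owner="IT Security",
    ),
]

# Flag entries with no accountable owner, a common inventory gap.
unowned = [u.name for u in inventory if not (u.business_owner and u.it_owner)]
print(f"{len(inventory)} use cases inventoried, {len(unowned)} missing owners")
```

Even a spreadsheet works at small scale; the point is that every entry answers all four questions and names both owners.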
2) Classify high-impact use cases and apply stricter controls
Define high-impact criteria tied to member impact, for example:
- Credit and loan decisions
- Fraud holds and account restrictions
- Identity verification and onboarding
- Pricing, fees, or eligibility decisions
- Any automation that changes member outcomes
Then require stronger guardrails such as independent testing, appropriate human review, evidence of explainability and fairness, and enhanced continuous security monitoring and auditability.
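The criteria above amount to a simple rule set, which can be encoded so classification is consistent and auditable. This is a minimal sketch; the category names are assumptions for illustration, not OMB- or NCUA-defined terms.

```python
# Categories presumed high impact, mirroring the criteria listed above.
# These strings are illustrative assumptions, not regulatory terminology.
HIGH_IMPACT_CATEGORIES = {
    "credit_decision",
    "fraud_hold",
    "identity_verification",
    "pricing_or_eligibility",
}

def is_high_impact(categories: set[str], changes_member_outcome: bool) -> bool:
    """Presume high impact if the use case matches any listed category,
    or if its automation changes member outcomes at all."""
    return changes_member_outcome or bool(categories & HIGH_IMPACT_CATEGORIES)

# An FAQ chatbot that only answers questions is not presumed high impact...
print(is_high_impact({"faq_chatbot"}, changes_member_outcome=False))  # False
# ...but an automated fraud hold is.
print(is_high_impact({"fraud_hold"}, changes_member_outcome=False))   # True
```

Keeping the rules in one place makes it easy to show an examiner exactly why a given use case did or did not receive the stricter guardrails.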
3) Extend third-party risk management for AI vendors
If a vendor provides AI features, due diligence should go beyond SOC 2 and uptime. Include these questions in your cybersecurity services evaluation:
- What model is used, and how is data handled?
- Is member data used for training? If so, under what terms?
- How does the vendor manage bias, drift, and model updates?
- What controls exist for admin access, prompt injection, and data exfiltration?
- What logging and audit trails are available?
- How do they support NCUA cybersecurity requirements?
4) Align AI controls to your security and privacy program
Do not reinvent governance. Map AI to existing NIST IT compliance controls, including:
- Access control and approvals for AI deployment (change management)
- Data classification and acceptable use policies for AI tools
- DLP and logging for AI-enabled data flows
- Privacy review for data usage and retention
- IT incident response playbooks that include AI-specific scenarios
- Vulnerability monitoring for AI systems and data flows
5) Implement continuous monitoring
AI governance is not a set-and-forget exercise. Establish comprehensive security monitoring that includes:
- Outcome monitoring such as false positives or negatives, complaints, and overrides
- Drift detection for data or model behavior changes
- Cyber threat detection for prompt injection attempts, unusual access, and exfiltration patterns
- Periodic access reviews and re-approval for model or version changes
- Integration with your existing managed detection and response capabilities
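Drift detection, one of the monitoring signals above, is commonly implemented with the Population Stability Index (PSI), which compares a baseline score distribution to the current one. The sketch below is a minimal, self-contained PSI implementation under simplifying assumptions (equal-width bins over the baseline range); production monitoring would use your model platform's tooling.

```python
import math

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population Stability Index between a baseline and a current score
    distribution. Common rule of thumb: < 0.1 stable, > 0.25 significant shift."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def frac(data: list[float], i: int) -> float:
        lo_b, hi_b = lo + i * width, lo + (i + 1) * width
        n = sum(1 for x in data if lo_b <= x < hi_b or (i == bins - 1 and x == hi))
        return max(n / len(data), 1e-6)  # floor avoids log(0) for empty bins

    return sum(
        (frac(actual, i) - frac(expected, i))
        * math.log(frac(actual, i) / frac(expected, i))
        for i in range(bins)
    )

baseline = [i / 100 for i in range(100)]            # uniform model scores
drifted = [min(x + 0.3, 0.999) for x in baseline]   # scores shifted upward

print(f"PSI vs itself:  {psi(baseline, baseline):.4f}")  # 0.0000
print(f"PSI vs drifted: {psi(baseline, drifted):.4f}")   # well above 0.25
```

Running a check like this on a schedule, and alerting when PSI crosses your threshold, turns "drift detection" from a policy statement into an operational control.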
6) Document your terminate or rollback process
Your incident response plan should be able to answer:
- Who can shut it down?
- How fast can we contain it?
- How do we preserve evidence and logs?
- What member remediation steps exist if harm occurred?
- How do we communicate internally and externally?
What good looks like in 2026
If you want a simple target state, aim for:
- A single AI inventory that is owned, current, and reviewed
- Clear decision rights for approval, monitoring, and shutdown
- Documented risk classification, including high-impact use cases
- Vendor due diligence tailored to AI and NCUA IT compliance
- Security and privacy controls mapped to AI use within your NIST cybersecurity framework
- Ongoing security monitoring and a tested rollback plan
- Integration with your business continuity planning
That is the difference between “we are experimenting” and “we are governing.”
How TorchLight helps
TorchLight provides specialized IT support for credit unions navigating AI adoption within NCUA compliance requirements. Our cybersecurity services include:
- AI use case inventory and risk classification workshops
- AI-specific third-party risk and contract reviews
- Security risk assessment and control mapping, including monitoring and logging aligned to NIST 800-53
- Policy updates such as acceptable use, data handling, and change management
- IT incident response additions for AI threats and failures
- Virtual CISO (vCISO) guidance on AI governance integration
- Continuous security monitoring and vulnerability management for AI systems
- Managed cybersecurity support designed specifically for credit union IT needs
If you are already using AI in member-facing workflows, or it is embedded in your vendor platforms, now is the time to get your governance foundation right. The direction is clear: inventory, controls, transparency, and the ability to stop what does not meet the standard.
For credit unions seeking comprehensive IT consulting on AI compliance, or looking to strengthen their overall NCUA cybersecurity posture, TorchLight offers the specialized expertise and managed IT services credit unions need to meet evolving regulatory expectations.

