AI & AUTOMATION By The Shore Group Team

AI Governance for Community Banks: What the New Regulatory Guidance Actually Requires

Here's what it means for community banks using AI today

TL;DR

Community banks that use AI tools — whether built internally or purchased from a vendor — now operate under a revised model risk management framework issued jointly by the OCC, Federal Reserve, and FDIC in April 2026. The new guidance is explicitly more flexible and proportionate for community banks than the 2011 framework it replaces. But it still requires institutions to document what AI tools they use, understand the risks they carry, and demonstrate that someone with appropriate authority is responsible for managing those risks. This post explains what the guidance actually says, what it means for community banks specifically, and what practical AI governance looks like when you don't have a model risk team.

Most community banks are using AI in some form. The fraud detection system from the core provider uses it. The BSA/AML screening tool uses it. The credit scoring model that feeds the underwriting workflow uses it. Some banks are beginning to experiment with generative AI for internal research, documentation drafting, or customer inquiry handling.

The question of whether to govern AI is no longer open. The question is what governance looks like for an institution that doesn't have a dedicated model risk officer, a data science team, or a formal AI program. The April 2026 interagency guidance from the OCC, Federal Reserve, and FDIC provides the clearest answer regulators have given so far. And for community banks, the answer is more reasonable than many feared.

⚠️

REGULATORY UPDATE: On April 17, 2026, the OCC, Federal Reserve, and FDIC jointly issued revised interagency guidance on model risk management (Bulletin 2026-13). This replaces the 2011 guidance that has governed model risk management for 15 years. A separate AI-specific RFI covering generative AI and agentic AI is expected from the agencies in the near term. This post reflects the most current regulatory guidance available as of May 2026.

What the April 2026 Guidance Actually Says

The revised model risk management guidance (OCC Bulletin 2026-13) does several things at once. It replaces the 2011 framework that many bankers found overly prescriptive. It explicitly acknowledges that community banks should tailor their approach to their size and complexity. And it signals, clearly, that a separate RFI specifically covering AI, including generative and agentic AI models, is forthcoming.

The core framework in the revised guidance rests on three pillars:

  • Model development and use, including appropriate testing before deployment and documentation of what the model is intended to do and what its limitations are.

  • Model validation and monitoring, including assessment of whether the model performs as intended and ongoing monitoring for performance drift as conditions change.

  • Governance and controls, including defined roles and responsibilities, a maintained model inventory, and documentation sufficient to support oversight.

What changed from 2011: the guidance is now explicitly non-prescriptive. It does not require annual model validation. It does not impose specific validation methodologies. It states directly that model risk management should be commensurate with the bank's risk exposures, business activities, and complexity of model use. For community banks, the OCC separately reinforced this in September 2025 guidance (Bulletin 2025-26): there is no annual validation requirement, and examiners will not issue negative supervisory feedback based solely on the frequency or scope of validation activities the bank has reasonably chosen to perform.

💡

What did not change: the expectation that someone is responsible for the models the bank uses, that those models are documented, and that the bank can explain what a model does and how it is being supervised. Those expectations survive the rewrite.

What 'Model' Actually Means Under the New Guidance

The revised guidance narrows the definition of 'model' compared to 2011. A model under the new guidance is a complex quantitative method, system, or approach that applies statistical, economic, or financial theories to transform input data into quantitative estimates that inform decisions.

Importantly, generative AI and agentic AI are explicitly excluded from the current guidance's scope. The agencies noted this directly and stated they will address AI specifically in a forthcoming RFI. This matters for community banks because it means tools like AI-assisted draft generation, chatbots, and AI-powered search are not currently subject to formal model risk management requirements under this framework.

What is in scope for community banks today: credit scoring models, loan loss estimation models (including CECL implementations), BSA/AML transaction monitoring systems, fraud detection algorithms, and any quantitative tool that produces risk estimates or decision inputs. If the tool takes data, applies a quantitative method, and produces a number or recommendation that influences a banking decision, it is a model under this framework.

⚠️

Third-party and vendor models are also in scope. When a community bank uses a core provider's built-in credit scoring model or purchases a BSA/AML screening system, the bank is responsible for understanding and governing the model even though it didn't build it. The guidance includes specific expectations for vendor model governance.

What AI Governance Means in Practice for a Community Bank

The phrase 'AI governance' sounds like an enterprise undertaking. For a community bank, it is more accurately described as model risk management scaled to the institution's actual risk profile. Here is what the practical components look like.

The Three Questions Examiners Will Ask

Regardless of how the guidance evolves, community bank examiners are already asking versions of these three questions when AI comes up in an examination. Having documented answers to each is the baseline for adequate AI governance.

What models does your bank use?

This is the inventory question. An examiner should be able to ask this and receive a clear, current list within minutes rather than days. The list should include vendor-provided models, not just internally built ones. For most community banks, the answer includes a credit scoring model, a BSA/AML transaction monitoring system, a fraud detection tool, and likely a CECL model. Some banks have additional models in loan pricing, deposit rate setting, or customer segmentation. A bank that cannot answer this question quickly signals to the examiner that model oversight is informal rather than managed. That impression is harder to correct than the gap itself.
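The inventory does not need specialized software. As an illustration only, with hypothetical entries and a hypothetical review-cycle length (the guidance imposes no fixed validation frequency), a minimal inventory could be a structured record per model that captures the source, the accountable owner, and the date of the last documented review:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ModelRecord:
    """One entry in the bank's model inventory (illustrative fields)."""
    name: str
    source: str        # "vendor" or "internal"
    owner: str         # named role accountable for the model
    purpose: str
    last_review: date  # date of the last documented review

# Hypothetical inventory for a typical community bank.
inventory = [
    ModelRecord("Credit scoring", "vendor", "Chief Credit Officer",
                "Consumer loan underwriting", date(2026, 3, 1)),
    ModelRecord("BSA/AML transaction monitoring", "vendor", "BSA Officer",
                "Suspicious activity alerting", date(2026, 1, 15)),
    ModelRecord("CECL loss estimation", "internal", "CFO",
                "Allowance for credit losses", date(2025, 11, 30)),
]

def overdue_reviews(inventory, as_of, max_age_days=120):
    """Flag models whose last documented review exceeds the bank's own chosen cycle."""
    return [m.name for m in inventory
            if (as_of - m.last_review).days > max_age_days]

print(overdue_reviews(inventory, as_of=date(2026, 5, 1)))
# → ['CECL loss estimation']
```

The value is not the tooling; it is that the answer to "what models does your bank use?" exists in one current, dated place rather than scattered across vendor contracts.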

Who is responsible for each model?

Governance requires ownership. For each model on the inventory, a named individual or role should be identified as responsible for understanding what the model does, monitoring its performance, and escalating concerns when something seems off. In a community bank without a dedicated model risk function, this ownership typically falls to the department that relies most heavily on the model's output. The chief credit officer owns the credit scoring model. The BSA officer owns the transaction monitoring system. The CFO or controller owns the CECL model. These responsibilities don't require new hires. They require explicit acknowledgment of existing accountability.

How do you know the model is still working as intended?

This is the monitoring question. It does not require sophisticated analytics. For a credit model, it means periodically comparing credit scores at origination against subsequent performance to assess whether the model's predictive power has been maintained. For a BSA/AML system, it means reviewing whether alert rates and outcomes match expectations. For a fraud detection tool, it means tracking false positive and false negative rates.

💡

Documentation of these reviews, even if the review itself is straightforward, is what distinguishes an institution with a functioning governance process from one that simply trusts the vendor.
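For the credit model example above, the periodic review can be a back-of-the-envelope check that the model still rank-orders risk: loans grouped by origination score band should show default rates that decline as scores rise. The sketch below uses made-up numbers purely to illustrate the check; a broken ordering would be the signal to escalate to the model owner.

```python
# Illustrative figures only — not real performance data.
score_bands = ["<620", "620-679", "680-739", "740+"]
default_rates = {"<620": 0.081, "620-679": 0.043, "680-739": 0.019, "740+": 0.006}

def rank_ordering_holds(bands, rates):
    """True if realized default rates strictly decrease from the riskiest band up.

    A violation suggests the model's rank-ordering power may have drifted.
    """
    observed = [rates[b] for b in bands]
    return all(earlier > later for earlier, later in zip(observed, observed[1:]))

print(rank_ordering_holds(score_bands, default_rates))  # → True
```

Recording the date, the figures reviewed, and the conclusion of each such check is the documentation step that turns an informal habit into evidence of a functioning governance process.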

The Vendor AI Problem

The governance challenge that most community banks underestimate is third-party AI. Banks are not using AI they built. They are using AI that their core provider, their BSA vendor, their credit bureau, or their fintech partner built and embedded in a product they purchased. The new guidance is clear that vendor model governance is the bank's responsibility. Understanding what a vendor's model does, reviewing the vendor's validation approach, and monitoring whether the model is performing as expected in the bank's specific operating environment are all bank obligations regardless of who built the model.

This creates a specific practical challenge: vendors are often reluctant to share model documentation in detail, citing proprietary concerns. The guidance acknowledges this and indicates that banks should request what documentation is available, conduct what validation is possible given access constraints, and document the effort. Banks that are turned away when asking for model documentation have a defensible position if they document the request and the response. Banks that never asked have a harder conversation with an examiner.

As part of third-party risk management reviews, community banks should add explicit AI governance questions to vendor assessments: What models does this vendor use in the products we purchase? What documentation exists for those models? What validation has the vendor performed? How does the vendor notify us if the model changes significantly?
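Those four questions become defensible evidence only if each request and each response, including a refusal, is written down with a date. A hypothetical log, sketched here only to show the habit, could be as simple as:

```python
# Hypothetical vendor AI documentation log — the structure is illustrative;
# what matters is that every request and every response (including a refusal
# citing proprietary concerns) is recorded with a date.
vendor_log = {
    "What models does this vendor use in the products we purchase?":
        "Provided: high-level model description (2026-04-02)",
    "What documentation exists for those models?":
        "Declined: vendor cited proprietary concerns (2026-04-02)",
    "What validation has the vendor performed?":
        "Provided: annual validation summary (2026-04-10)",
    "How does the vendor notify us if the model changes significantly?":
        "Open: awaiting response",
}

# Outstanding follow-ups to raise at the next vendor review.
unanswered = [q for q, r in vendor_log.items() if r.startswith("Open")]
print(len(unanswered))  # → 1
```

A log like this is exactly the artifact that distinguishes a bank that asked and was refused from a bank that never asked.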

Generative AI: The Next Wave

The current guidance explicitly excludes generative AI and agentic AI from the model definition. The agencies acknowledged this gap directly and stated a separate RFI on AI is coming. For community banks, this creates a useful window. Generative AI tools are entering community banking faster than the governance frameworks that cover them. Staff are using AI writing assistants. Some banks are piloting AI chatbots for customer inquiry handling. Vendors are embedding generative AI into products that previously had none.

The practical question is not whether to wait for the regulatory RFI before establishing any governance for these tools. It is how to use this window to build the habits and documentation practices that will apply once formal expectations arrive. For generative AI specifically, the governance questions parallel the model risk questions but with different emphasis. What data does the tool access? Who is authorized to use it for what purposes? What outputs can and cannot be acted on without human review? How are staff trained on appropriate and inappropriate use? These questions don't require a formal program yet, but beginning to document answers now is substantially less difficult than retrofitting governance after an examination finding.

💡

HOW AI GOVERNANCE CONNECTS TO OPERATIONAL READINESS

Effective AI governance depends on the same operational infrastructure that supports good data management and regulatory reporting: documented processes, clear data sources, defined human review steps, and audit trails on consequential decisions. For community bank operations that are still running key workflows manually, AI governance adds another layer of documentation burden to processes that are already hard to trace. Shore's free CORE Assessment scores your institution's operational readiness across five categories including data readiness and regulatory compliance. It identifies where documentation gaps are concentrated, which is typically where AI governance gaps are too.


Frequently Asked Questions

Does the April 2026 guidance require community banks to have a formal AI program?

No. The revised guidance is explicitly principles-based and non-prescriptive. It does not require annual model validation, does not mandate specific governance structures, and states directly that model risk management should be proportionate to the bank's size, complexity, and risk exposure. A community bank with a limited model inventory, a clear owner for each model, and documented periodic reviews is in a reasonable position under the current framework. The expectation scales with the extent and complexity of model use, not with asset size alone.

What about AI tools the bank didn't build, like vendor products?

The bank is responsible for governing the models embedded in the products it purchases. This means understanding what models a vendor uses in products the bank relies on, requesting available documentation, reviewing vendor validation summaries, and monitoring performance in the bank's specific operating environment. Vendors may not fully cooperate with documentation requests, but the bank should request, document the response, and govern based on what is available. Banks that have never asked are in a weaker position than banks that asked, were told the information was proprietary, and documented that limitation.

The BSA/AML system we use came from a vendor. Are we responsible for its model governance?

Yes. Transaction monitoring systems are among the most common AI-adjacent tools in community banking, and they have been addressed in banking regulatory guidance since at least 2021. When the bank relies on the vendor's transaction monitoring model to generate BSA alerts, the bank is accountable for ensuring the model is producing appropriate results in the bank's operating environment. This means reviewing alert rates, false positive rates, and documented outcomes. It also means reviewing any model changes the vendor makes that could affect performance.

How should we document AI governance without adding significant overhead?

Start with what already exists. The model inventory is mostly implicit in existing vendor contracts and system access. Making it explicit requires one organized pass through existing documentation. Ownership assignments are mostly already in place through existing management structures. Making them explicit requires one governance conversation and a brief written record. Monitoring reviews are often already happening informally. Making them explicit requires standardizing what is reviewed and adding a brief documentation step at the end of each review. The overhead of building basic AI governance from existing practices is far lower than building it from scratch after an examination finding.

What should we expect from the forthcoming AI-specific RFI?

The agencies have indicated the RFI will specifically address generative AI and agentic AI, which are excluded from the current guidance. Community banks should expect questions about how they are using generative AI, what governance structures are in place, and what risks they have identified. Banks that have begun documenting their generative AI use and governance practices before the RFI will be better positioned to respond to it and to any subsequent guidance that follows. The RFI is also an opportunity for community banks to provide feedback on what proportionate governance looks like for institutions without dedicated AI teams.


Understand Where Your AI Governance Gaps Are Before the Examiner Does

Shore Group's CORE Assessment identifies documentation and data readiness gaps across your operations, which are typically the same areas where AI governance exposure is highest.

TAKE THE CORE ASSESSMENT