Why We Built Prism Layer: A Problem Hidden in Plain Sight

The problem that led to Prism Layer was not obvious until you had seen it from the inside. From the outside, financial AI looked like it was working. Credit decisions were being automated. Fraud detection systems were deployed. Machine learning was running in production at major institutions. The narrative was one of transformation already underway.

From the inside — from the perspective of the risk teams trying to operate these systems, the compliance officers trying to defend them to regulators, and the engineers trying to maintain them — the picture looked very different. The automation was real. The accountability was not.

What We Kept Seeing

Before founding Prism Layer, I spent years working at the intersection of risk technology and financial operations. The pattern I kept encountering was consistent across institutions of different sizes, different business models, and different levels of technical sophistication.

A team would deploy an AI-based credit or fraud decisioning system. It would work. Approval rates would improve. Fraud losses would decline. The system would become a core part of operations. And then the first time a regulator asked a detailed question about a specific decision, or the first time the model started behaving unexpectedly and no one could explain why, or the first time they needed to update the model and discovered that the documentation of what the production model was actually doing was incomplete — that was when the brittleness became visible.

The problem was not that the models were bad. In most cases the models were genuinely good. The problem was that the infrastructure surrounding the models had not been designed for accountability. It had been designed for deployment.

The Gap in the Market

When we looked at what was available to solve this problem, the options were unsatisfying. Incumbent risk-platform vendors offered comprehensive systems built for the regulatory environment of a decade ago, not for AI-first architectures. Point-solution AI vendors offered excellent models but minimal governance infrastructure. Building a custom solution meant assembling multiple components that were never designed to work together, which produced exactly the kind of fragmented audit trail that makes accountability difficult.

What did not exist was a purpose-built platform that treated the accountability layer as the primary product, with the AI reasoning engine designed around it rather than bolted on afterward. The reasoning engine and the audit infrastructure needed to be co-designed from the beginning, not integrated later.

Why Chicago

Prism Layer is based in Chicago for reasons that are more than personal preference. Chicago is the center of gravity for US financial services risk. The commodity trading firms that built some of the earliest quantitative risk systems are here. The bank and insurance company headquarters that represent the core market for what we are building are here or within easy reach. And the talent pool of risk scientists, financial engineers, and compliance professionals who understand the problem we are solving at a deep level is concentrated here in ways that are not replicated in other markets.

Building a company in the city where the problem lives keeps us close to the customers who can tell us whether we are solving it correctly.

What We Set Out to Build

The design principle at the core of Prism Layer is simple: every decision should be explainable, every model change should be documented, and every compliance requirement should be enforced by the system rather than by process. The platform should make accountability the path of least resistance, not the hard thing that requires extra effort.
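To make that principle concrete, here is a minimal sketch of what "enforced by the system rather than by process" can look like in code: a decision record that cannot be created without a pinned model version and at least one human-readable reason. The names here (`DecisionRecord`, `record_decision`, the example reason codes) are hypothetical illustrations, not Prism Layer's actual API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class DecisionRecord:
    """Immutable audit record emitted for every automated decision (illustrative)."""
    decision_id: str
    model_version: str                 # pins the exact production model that decided
    inputs: dict                       # the features the model actually saw
    outcome: str                       # e.g. "approve" / "decline"
    reasons: list = field(default_factory=list)   # human-readable decision factors
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def record_decision(decision_id, model_version, inputs, outcome, reasons):
    # Enforcement by the system: an unexplained decision cannot be recorded,
    # so explainability is the path of least resistance rather than an extra step.
    if not reasons:
        raise ValueError("every decision must carry at least one reason code")
    return DecisionRecord(decision_id, model_version, inputs, outcome, reasons)
```

The design point is that the check lives in the constructor path, not in a manual review process: a decision without an explanation fails at write time, which is what turns a compliance requirement into a system invariant.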

We are still building toward that vision. The platform is in production, the core capabilities are real, and the customers who are using it are getting results. But the problem is large enough that there is meaningful work ahead of us. That is not a discouraging thing. It is the reason the company exists.

If you are working on the same problem — trying to operate AI risk systems in regulated environments and finding that the accountability infrastructure is not keeping up — we would like to talk.
