Center for Civic Futures and partners commit $8.5M for AI solutions that improve safety net program delivery
We're pleased to announce Eleanor Davis as the new Director of the Public Benefit Innovation Fund (PBIF). She'll work with PBIF grantees, open call applicants, and governmental and non-governmental partners to support the development and deployment of AI-enabled technologies that improve access to essential public benefits like Medicaid, SNAP, and child care.
Davis comes to us from Code for America, where she served as Director of Government Innovation, helping government agencies adopt best practices for human-centered digital benefit delivery. In that role, she helped build and lead Code for America's Safety Net Innovation Lab — a landmark initiative backed by $100 million in investment — working with state governments to modernize how programs like Medicaid, the Supplemental Nutrition Assistance Program (SNAP), and the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC) reach the people who need them most.
Over her tenure, she developed deep expertise at the intersection of policy, technology, and government service delivery, helping states reduce administrative burdens and close enrollment gaps through iterative, data-informed approaches. Davis holds a bachelor's degree in sociology from the University of Chicago and a master's degree in public health from the University of California, Berkeley.
We asked for her thoughts on the state of AI implementation in states today and how she sees programs like PBIF leading the way.
---
You’ve spent years helping states modernize critical safety net systems. You’ve seen both the promise and the fragility of government technology.
AI is accelerating fast. What feels materially different about this moment? And what makes it more consequential than past waves of innovation?
The speed of innovation we're witnessing is certainly striking — both the rate at which new products and tools are taking shape, and the way AI is finding its way into the day-to-day lives of millions of people.
But what makes this moment truly consequential is a perfect storm of policy change and technological innovation. There is an explosion of new possibilities driven by emerging technology happening at the exact same time as some of the most consequential policy changes to major safety net programs we've ever seen. New federal requirements are driving dramatic changes to how programs like SNAP and Medicaid operate. The job of delivering these programs has become exponentially more complex, and people applying for benefits now have to navigate new requirements that can take extra time and effort to work through.
So we have government services facing enormous strain, significant delivery challenges on unforgiving timelines, and a massive acceleration in technology-driven innovation — all happening simultaneously. What we don't yet have is clarity and confidence around exactly if and how AI can be applied to these urgent problems to drive real impact.
That's the opportunity of this moment. Can we identify where emerging technology can actually address the most pressing problems? Can we generate real proof points and demonstrate what works in ways that help the entire ecosystem of government service delivery meet this moment?
Government is often told to innovate, but it’s rarely given room to experiment.
In an environment like this, what does it actually take to build real infrastructure around experimentation? How do you create the conditions to test, learn, and adjust without putting public systems at risk?
The way I think about the role of PBIF is that we exist to help government de-risk experimentation. Experimentation is a critical part of the innovation process — it's the only way you learn what works.
But government systems are not set up to enable and support experimentation. Government has to serve everyone, and that constraint shapes everything, including the risk calculus. Procurement rules, backend systems logic, and governance processes are all designed to reduce risk, and they also make it very challenging to build and test technology in an iterative, agile way.
What external entities can do is absorb that risk on government's behalf. We can help create the conditions where testing and learning can happen safely — through funding that allows states to build and try new things, through technical assistance and hands-on support, and by connecting teams to one another's learnings and feeding those lessons back into the work. We can create sandboxes for learning in small, fast ways. And once something shows promise in that contained environment, we can help scale it.
That's the model: de-risk first, then scale.
You’ll be leading a community of teams experimenting at the edge of public-sector technology.
What possibilities does this community hold — not just for better tools, but for reshaping how states communicate, collaborate, and learn from one another? And how do you envision it impacting the public sector at large?
One of the most exciting things about the Public Benefit Innovation Fund is that we're not only providing funding — we're creating a container for learning. PBIF is a living lab, and the purpose of a lab is to generate knowledge.
This moment calls out for that knowledge. Not abstract or theoretical — practical, replicable learnings that tell us what actually solves urgent problems for people right now. If we can generate those learnings in real government environments, it can help move the whole field faster. Cross-state replicability, faster implementation, a growing body of proof points the entire ecosystem can build on. That's a public good, even for teams who aren't directly part of the cohort.
What also excites me is who this community brings together. There's a community of builders at the forefront of AI, and there are policy experts who deeply understand the constraints of the regulatory environment. Historically, these people have operated in separate ecosystems. PBIF is bringing them together to maximize collective expertise and build something genuinely greater than the sum of its parts.
Underneath all of it is real urgency. Government teams are strapped for resources. Administrative burdens are real. The consequences of inaction are significant. Technology alone can't solve these problems, but it has the potential to be a meaningful lever for reducing friction. The question we're asking is: what consequences are we no longer willing to accept? I think the answer to that question is what will determine how bold we need to be.