Jessica Lax Joins Center for Civic Futures as Program Director of the State AI Readiness Project
We’re proud to announce Jessica Lax as the new Program Director of our State AI Readiness project, leading a community of practice spanning 40+ states and territories.
She’ll be working directly with Chief AI Officers, policy experts, academics, and more to both develop our community of practice and generate learnings that benefit states and localities across the country.
Lax most recently served as the New Jersey Innovation Authority's first Senior Advisor for Responsible AI, where she spearheaded the office's generative AI consulting portfolio, deploying emerging technology across agencies to improve government effectiveness and public services. The practice built AI tools that made certain tasks 85-99% faster. Earlier, she served as Director of Business Experience, working with multiple state agencies to consolidate fragmented government permitting systems into a one-stop shop, which made it 30% faster to start a business in New Jersey.
A public sector strategist, Lax draws on human-centered design and interdisciplinary collaboration across public, private, and nonprofit sectors — including prior work with the National Park Service, the New York City Economic Development Corporation, and the Kresge Foundation. She holds a master's degree in urban planning from Hunter College and a bachelor's degree in environmental studies from the University of Michigan.
Here, she shares her perspective on the critical challenges states face in adopting new technologies, the work of breaking down information silos, and the positive potential of shared learnings.
---
Having helped lead innovation efforts inside state government, you’ve seen how new technology becomes embedded in public systems.
As AI adoption accelerates, what kind of leadership does this moment call for from states? What do you think will distinguish states that truly shape their AI future from those that simply react to it?
The states that shape their own AI future will have bold, empathetic leaders at the helm. They'll need to invest in their people, fail quickly, and share ideas shamelessly. When one state figures something out, every state should benefit, and that's exactly why organizations like the Center for Civic Futures matter.
I say bold because the window to get ahead is narrowing, especially for government agencies that can be structurally or culturally wired to move slowly. Privacy concerns are mounting, workloads are compounding as AI bots increasingly act on behalf of people, and many of the previously unknown effects of AI adoption are becoming tangible.
But while AI can make us more efficient, our biggest asset as humans is our humanity — knowing when to use these tools and when not to, how to bring staff along, and how to engage with AI in a way that reflects our values and the communities we serve.
To any hesitant government leader: you don't need to know everything. Join a community of practice and leverage those connections. I'm genuinely excited about this role because I know there are bold and empathetic leaders already doing this work, and I can't wait to amplify it.
There’s no shortage of headlines about AI right now. New tools, new risks, new announcements almost daily.
But as AI becomes part of everyday operations in states, what aren’t we talking about enough? Or where should leaders be pushing the conversation further?
I'm somewhat obsessed with the question of who. Not who's adopting AI — but who got to shape it. The tools states are deploying right now carry the assumptions, priorities, and blind spots of the small number of companies that built them. The choices built into how these tools think are now embedded in systems that decide who gets benefits, who gets flagged, who gets a second look.
Think about social media: algorithmic bias didn't hurt everyone equally. It amplified existing inequalities — misinformation hit under-resourced communities hardest, content moderation failed most for non-English speakers, and platform design choices disproportionately affected young people with the least power to push back. Government AI is following the same pattern. The communities most affected by automated decisions in public services — lower-income, rural, communities of color — are also the ones with the least visibility into how those decisions get made.
Some states are starting to build this muscle. In Colorado, deployers of high-risk AI systems complete annual impact assessments that specifically analyze risks of algorithmic discrimination — with developers providing the documentation needed to make those assessments meaningful. Georgia's procurement guidelines go a step further operationally, with diverse evaluation committees, standardized scoring rubrics for AI-related risks, and vendor-provided bias testing results reviewed against real-world performance across demographics.
The public deserves to understand how these tools are deployed, what ethical frameworks guide them, and what rights and protections they have. How states shape industry through procurement and policy is a space I'm excited to dig into through this work.
In your new role, you’ll be leading a fast-growing community of AI leaders across more than 40 states and territories — many of whom are navigating similar challenges in very different contexts.
What do you see as the unrealized potential of this community and how does your perspective shift as you move from participant to steward?
A successful community of practice provides states with the shared tools and learnings needed to start tackling the longstanding obstacles in benefit service delivery. When states share what's working, learn from each other's failures, and stop reinventing the wheel in isolation, that's one of the most practical tools we have to close the gaps in resources and capacity. That potential is real and still largely unrealized.
To get there, the community needs to be more than a knowledge exchange. I see four areas where we can move the needle: testing policies around AI's impacts on the workforce; tackling tangible service delivery problems like SNAP error rates or permit simplification; doing the unglamorous infrastructure work around data and legacy systems; and sending clear signals to the AI industry about what government and citizens actually need.
None of that happens without trust, and trust requires a genuine safe space to test bold ideas, share learnings openly, and fail with limited consequence.
As a contributor, I could show up with my own agenda and let the group challenge me. As steward, I have to hold a clear point of view while genuinely reflecting the needs of members navigating very different realities. If I can get that balance right, this community becomes something greater than the sum of its parts — and the people navigating government services on the other side of this work will feel it.