Dean W. Ball

Summary

Dean Woodley Ball is currently a senior fellow at the Foundation for American Innovation (FAI).

Previously, Ball was a Research Fellow in the Artificial Intelligence & Progress Project at George Mason University’s Mercatus Center, a Policy Fellow at Fathom, and a Nonresident Senior Fellow at the Foundation for American Innovation. He is the author of Hyperdimensional.

His work focuses on emerging technologies and the future of governance. He has written on topics including artificial intelligence, the future of manufacturing, neural technology, bioengineering, technology policy, political theory, public finance, urban infrastructure, and prisoner re-entry.

Source: Website

OnAir Post: Dean W. Ball

News

The AI Patchwork Emerges: An update on state AI law in 2026 (so far)
Hyperdimensional, Dean W. Ball, January 15, 2026

State legislative sessions are kicking into gear, and that means a flurry of AI laws are already under consideration across America. In prior years, the headline number of introduced state AI laws has been large: famously, 2025 saw over 1,000 state bills related to AI in some way. But as I pointed out, the vast majority of those laws were harmless: creating committees to study some aspect of AI and make policy recommendations, imposing liability on individuals who distribute AI-generated child pornography, and other largely non-problematic bills. The number of genuinely substantive bills—the kind that impose novel regulations on AI development or diffusion—was relatively small.

In 2026, this is no longer the case: there are now numerous substantive state AI bills floating around covering liability, algorithmic pricing, transparency, companion chatbots, child safety, occupational licensing, and more. In previous years, it was possible for me to independently cover most, if not all, of the interesting state AI bills at the level of rigor I expect of myself, and that my readers expect of me. This is no longer the case. There are simply too many of them.

It’s not just the topics that vary. It’s also the approaches different bills take to each topic. There is not one “algorithmic pricing” or “AI transparency” framework; there are several of each.

On AI and Children: Five-and-a-half conjectures
Hyperdimensional, Dean W. Ball, January 22, 2026

Introduction

The first societal harms of language models did not involve bioattacks, chemical weapons development, autonomous cyberattacks, or any of the other exotic flavors of risk focused on by AI safety researchers. Instead, the first harms of generalist artificial intelligence were decidedly more familiar, though no less tragic: teenage suicide. Very few incidents provoke public outcry as readily as harm to children (rightly so), especially when the harm is perceived (rightly or wrongly) to be caused by large corporations chasing profit.

It is therefore no surprise that child safety is one of the most active areas of AI policymaking in the United States. Last year saw dozens of AI child safety laws introduced in states, and this year will likely see well over one hundred such laws. In broad strokes, this is sensible: like all information technologies, AI is a cognitive tool—and children’s minds are more vulnerable than the minds of adults. The early regulations of the internet were also largely passed with the safety of children in mind.

Despite the focus on this issue by policymakers (or perhaps because of it), there is a great deal of confusion as well. In recent months, I have seen friends and colleagues make overbroad statements like, “AI is harmful for children,” or “chatbots are causing a major decline in child mental health.” And of course, there are political actors who recognize this confusion—along with the emotional salience of the topic—and seek to exploit these facts for their own ends (some of those actors are merely self-interested; others understand themselves to be fighting a broader war against AI and associated technologies, and see the child safety issue as a useful entry point for their general point of view).

Among the Agents: How I use coding agents, and what I think they mean
Hyperdimensional, Dean W. Ball, January 8, 2026

Of course, I did not do these things alone. I did them in collaboration with coding agents like Gemini 3 Pro (and the Gemini Command-Line Interface system), OpenAI Codex using GPT-5.2, and, most especially, Claude Opus 4.5 in Claude Code.

These agents have been around for almost a year now, but in recent weeks and months they have become so capable that I believe they meet some definitions of “artificial general intelligence.” Yet the world is mostly unchanged. This is because AGI is not the end of the AI story, but something closer to the beginning. Earlier this year, I wrote:

The creation of “artificial general intelligence,” if it can even be coherently defined, is not the end of a race. If anything, it is the start of a race. As AI systems advance by the month, the hard work of building the future with them grows ever more pressing. There is no use in building advanced AI without using those systems to transform business, reinvent science, and forge new institutions of governance. This, rather than the mere construction of data centers or training of AI systems, is the true competition we face—and our work begins now.

The individuals and firms that discover more and better ways to work with this strange new technology will be the ones who thrive in this era. The countries where those people and businesses are most numerous will be the countries that “win” in AI. It is up to all of us, together, to figure out how to put machine intelligence to its highest and best uses. The world won’t change until human beings change it.

Dice in the Air: A look back at 2025, and a look ahead
Hyperdimensional, Dean W. Ball, December 19, 2025

Has my work been too laissez-faire or too technocratic? Have I failed to grasp some fundamental insight? Have I, in the mad rush to develop my thinking across so many areas of policy, forgotten some insight that I once had? I do not know. The dice are still in the air.

One year ago my workflow was not that different than it had been in 2015 or 2020. In the past year it has been transformed twice. Today, a typical morning looks like this: I sit down at my computer with a cup of coffee. I’ll often start by asking Gemini 3 Deep Think and GPT-5.2 Pro to take a stab at some of the toughest questions on my mind that morning, “thinking,” as they do, for 20 minutes or longer. While they do that, I’ll read the news (usually from email newsletters, though increasingly from OpenAI’s Pulse feature as well). I may see a few topics that require additional context and quickly get that context from a model like Gemini 3 Pro or Claude Sonnet 4.5. Other topics inspire deeper research questions, and in those cases I’ll often designate a Deep Research agent. If I believe a question can be addressed through easily accessible datasets, I’ll spin up a coding agent and have it download those datasets and perform statistical analysis that would have taken a human researcher at least a day but that it will perform in half an hour.

Around this time, a custom data pipeline “I” have built to ingest all state legislative and executive branch AI policy moves produces a custom report tailored precisely to my interests. Claude Code is in the background, making steady progress on more complex projects.
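The post does not describe how that pipeline is built, but its general shape is easy to sketch. The following is a minimal illustration only, not Ball's actual system: the feed URL, record fields, and keyword list are hypothetical stand-ins for whatever legislative data source such a pipeline would ingest, and the "report" is just a plain-text digest.

```python
# Minimal sketch of a state-legislation ingestion pipeline (hypothetical).
# The feed URL, field names, and keyword list are illustrative placeholders,
# not references to any real service or to Ball's actual pipeline.
import json
import urllib.request
from datetime import date

FEED_URL = "https://example.org/state-legislation/daily.json"  # hypothetical feed
AI_KEYWORDS = ("artificial intelligence", "automated decision", "algorithmic",
               "foundation model", "chatbot")


def fetch_items(url: str) -> list[dict]:
    """Download the day's legislative items as a list of JSON records."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)


def is_ai_related(item: dict) -> bool:
    """Keyword screen over the bill title and summary."""
    text = f"{item.get('title', '')} {item.get('summary', '')}".lower()
    return any(kw in text for kw in AI_KEYWORDS)


def build_report(items: list[dict]) -> str:
    """Render matching items as a plain-text digest, grouped by state."""
    lines = [f"State AI policy digest for {date.today().isoformat()}", ""]
    for item in sorted(filter(is_ai_related, items), key=lambda i: i.get("state", "")):
        lines.append(f"- [{item.get('state')}] {item.get('bill_id')}: {item.get('title')}")
    return "\n".join(lines)


if __name__ == "__main__":
    print(build_report(fetch_items(FEED_URL)))
```

A real version would add deduplication against previously seen bills, model-assisted summarization, and tailoring to the user's interests, but the ingest-filter-report skeleton is the same.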

Foundation for American Innovation (FAI)
Foundation for American Innovation, Zach Graves, August 11, 2025

The Foundation for American Innovation (FAI) today announces the addition of Dean Ball as Senior Fellow. He will focus on artificial intelligence policy, as well as developing novel governance models for emerging technologies.

Ball joins FAI after having served as Senior Policy Advisor for Artificial Intelligence and Emerging Technology in the White House Office of Science and Technology Policy (OSTP). He played a key role in drafting President Trump’s ambitious AI Action Plan, which drew widespread praise for its scope, rigor, and vision.

“We are thrilled to have Dean rejoin the team,” said Foundation for American Innovation Executive Director Zach Graves. “He’s a brilliant and singular talent, and we look forward to collaborating with him to advance FAI’s optimistic vision of the future, in which technology is aligned to serve human ends: promoting individual freedom, supporting strong institutions, advancing national security, and unleashing economic prosperity.”

Prior to his position with OSTP, Ball worked for the Hoover Institution, the Manhattan Institute, the Mercatus Center, and the Calvin Coolidge Presidential Foundation, among other organizations.

“President Trump’s AI Action Plan represents the most ambitious U.S. technology policy agenda in decades,” said Ball. “After the professional honor of a lifetime serving in the administration, I’m looking forward to continuing my research and writing charting the frontier of AI policy at FAI.”

He serves on the Board of Directors of the Alexander Hamilton Institute and was selected as an Aspen Ideas Fellow. He previously served as Secretary, Treasurer, and trustee of the Scala Foundation in Princeton, New Jersey and on the Advisory Council of the Krach Institute for Tech Diplomacy at Purdue University. He is author of the prominent Substack Hyperdimensional.

The Foundation for American Innovation is a think tank that develops technology, talent, and ideas to support a better, freer, and more abundant future. Learn more at thefai.org.

Where We Are Headed
Hyperdimensional, Dean W. Ball, March 27, 2025

The Coming of Agents
First things first: eject the concept of a chatbot from your mind. Eject image generators, deepfakes, and the like. Eject social media algorithms. Eject the algorithm your insurance company uses to assess claims for fraud potential. I am not talking, especially, about any of those things.

Instead, I’m talking about agents. Simply put and in at least the near term, agents will be LLMs configured in such a way that they can plan, reason, and execute intellectual labor. They will be able to use, modify, and build software tools, obtain information from the internet, and communicate with both humans (using email, messaging apps, and chatbot interfaces) and with other agents. These abstract tasks do not constitute everything a knowledge worker does, but they constitute a very large fraction of what the average knowledge worker spends their day doing.
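For a concrete picture of what "LLMs configured in such a way that they can plan, reason, and execute" means in practice, here is a minimal sketch of the basic agent loop. It is illustrative only: `call_llm` is a placeholder for whatever model API an agent is built on, and the two tools are toy stand-ins; real agents add far richer tool sets, memory, and planning.

```python
# Minimal sketch of the agent pattern described above: a model repeatedly
# chooses a tool, the tool runs, the result is fed back, and the loop ends
# when the model reports the task is done. All names here are hypothetical.
import json


def call_llm(messages: list[dict]) -> dict:
    """Placeholder for a model call. Assumed to return either a tool request,
    e.g. {"tool": "search", "input": "..."}, or {"done": True, "answer": "..."}."""
    raise NotImplementedError("wire this to an actual model API")


def search_web(query: str) -> str:
    """Toy stand-in for a web-search tool."""
    return f"(search results for: {query})"


def run_python(code: str) -> str:
    """Toy stand-in for a code-execution tool; a real agent would sandbox this."""
    return "(execution output)"


TOOLS = {"search": search_web, "python": run_python}


def run_agent(task: str, max_steps: int = 10) -> str:
    """Plan/act loop: the model chooses tools until it reports the task is done."""
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        decision = call_llm(messages)
        if decision.get("done"):
            return decision["answer"]
        messages.append({"role": "assistant", "content": json.dumps(decision)})
        observation = TOOLS[decision["tool"]](decision["input"])
        messages.append({"role": "tool", "content": json.dumps({"result": observation})})
    return "step budget exhausted"
```

The scaffolding is deliberately simple; what has changed in the past year is not this loop but the capability of the models driving it.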

Agents are starting to work. They’re going to get much better. There are many reasons this is true, but the biggest one is the reinforcement learning-based approach OpenAI pioneered with its o1 models, which every other player in the industry has either adopted or is building. The most informative paper to read about how this broad approach works is DeepSeek’s R1 technical report.

How Should AI Liability Work? (Part I): The “Race to the Top”
Hyperdimensional, Dean W. Ball, February 20, 2025

During the SB 1047 debate, I noticed that there was a great deal of confusion—my own included—about liability. Why is it precisely that software seems, for the most part, to evade America’s famously capacious notions of liability? Why does America have such an expansive liability system in the first place? What is “reasonable care,” after all? Is AI, being software, free from liability exposure today unless an intrusive legislator decides to change the status quo (preview: the answer to this one is “no”)? How does liability for AI work today, and how should it work? It turned out that to answer those questions I had to trace the history of American liability from the late 19th century to the present day.

Answering the questions above has been a journey. This week and next, I’d like to tell you what I’ve found so far. This week’s essay will tell the story of how we got to where we are, a story that has fascinating parallels to current discussions about the need for liability in AI. Next week’s essay will deal with how the American liability system, unchecked, could subsume AI, and what I believe should be done.

About

Source: Website

His work has appeared in National Affairs, The New Atlantis, Pirate Wires, Discourse Magazine, Understanding AI, AI Supremacy, The Dispatch, The Hill, Tech Policy Press, the Washington Post, the Orange County Register, the Coolidge Quarterly, National Review, and other outlets. He has appeared on CNN, C-SPAN, and many podcasts, and is the host of the AI Summer podcast with Timothy B. Lee. His paper “Neither Harbour nor Floor: Contemplating the Singularity with Michael Oakeshott” will be part of a forthcoming volume titled Liberalism Revisited, to be published by Palgrave. He is also the author of “Ideas of Another Order: Michael Oakeshott and Confucius in Conversation,” an essay in comparative political theory that was published in Collingwood and British Idealism Studies.

Additional Background

Before he joined Mercatus, Dean was Senior Program Manager for the State and Local Governance Initiative at Stanford University’s Hoover Institution, where he managed a research program intended to deliver rigorous and evidence-based public policy research to state and local governments across the country, with a special emphasis on economic development, workforce training, and tax policy.

Prior to that role, he served as Executive Director of the Calvin Coolidge Presidential Foundation, based in Plymouth, Vermont and Washington, D.C. In that capacity, he oversaw the Coolidge Scholarship, a full-ride, merit-based undergraduate program that is among the most competitive and prestigious scholarships in the United States, as well as a nationwide middle and high school debate program, the Coolidge Senators program for undergraduates, and a variety of historical, archival, and educational initiatives.

He served as the Deputy Director of State and Local Policy and Manager for Special Projects at the Manhattan Institute for Policy Research from 2014 to 2018, and as Director of the Adam Smith Society from 2018 to 2020. He oversaw the Institute’s Hayek Book Prize, one of the most financially generous book prizes in the world.

He has also worked as an independent consultant, allowing him to focus on projects near and dear to his heart. These have included on-the-ground efforts to reform policing in Argentina and Chile and to recreate, at small scale, the Florentine guild system for sacred liturgical art.

Dean serves on the Board of Directors of the Alexander Hamilton Institute and on the Advisory Council of the Krach Institute for Tech Diplomacy at Purdue University. He previously served as Secretary, Treasurer, and trustee of the Scala Foundation in Princeton, New Jersey. In 2024, he was selected as a Fellow in the Roots of Progress Institute’s Blog-Building Initiative.

He graduated magna cum laude from Hamilton College in 2014 with a B.A. in History, and currently lives in Washington, D.C. with his wife Abigail and their two cats, Io and Ganymede.

Videos

Navigating the AI Revolution with Dean Ball

February 6, 2025 (47:28)
By: Let People Prosper Show with Dr. Vance Ginn

In this conversation, Dean Ball and I explore the transformative potential of artificial intelligence (AI) and its implications for society, economy, and governance. Dean is a senior fellow at the Mercatus Center at George Mason University. He shares his motivations for engaging with AI, his journey into the field, and the misconceptions surrounding it.

We discuss the historical context of technological advancements, the impact of AI on labor markets, and the regulatory challenges that arise as states like Texas introduce new frameworks for AI governance. Dean emphasizes the need for a balanced approach to regulation that fosters innovation while addressing potential risks and the connection with energy abundance.

AGI Lab Transparency Requirements & Whistleblower Protections

November 12, 2024 (01:59:00)
By: Cognitive Revolution “How AI Changes Everything”

In this episode of The Cognitive Revolution, Nathan explores AI forecasting and AGI Lab oversight with Dean W. Ball and Daniel Kokotajlo. They discuss four proposed requirements for frontier AI developers, focusing on transparency and whistleblower protections. Daniel shares insights from his experience at OpenAI, while Dean offers his perspective as a frequent guest. Join us for a compelling conversation on concrete AI governance proposals and the importance of collaboration across political lines in shaping the future of AI development.
