From HAL to Helpful: Appreciating How AI Reshapes Power and Knowledge

Paper Published in Science Provides New Understanding of AI Models

Henry Farrell, SNF Agora Institute Professor of International Affairs at the Johns Hopkins School of Advanced International Studies (SAIS), along with co-authors Alison Gopnik (University of California, Berkeley), Cosma Shalizi (Carnegie Mellon University), and James Evans (University of Chicago), challenges the idea that large AI models are becoming independent, intelligent agents in a new paper published in Science. Instead, they argue that AI functions as a cultural and social tool, much like writing, printing, markets, and bureaucracies.

Farrell and his co-authors explain that large AI models do not “think” or “understand” the world as people do. Instead, they process vast amounts of human-created information and organize it to make it more accessible. Like earlier technologies that transformed communication and knowledge-sharing, AI influences politics, business, and decision-making.

Farrell sees this article as a crucial step in shifting how people talk about AI. “We need to stop imagining AI as super-powered individual intelligences and start seeing it for what it is: a system that reorganizes information and power,” he says. “When we compare large models to economic markets or government systems instead of spinning out speculative science-fiction scenarios, we can ask more useful questions. Who controls them? How do they shape our understanding of the world? How do they shift influence and decision-making?”

Many discussions about AI focus on future threats, such as the possibility that machines will surpass human intelligence. Farrell and his co-authors argue that these debates overlook the real issues AI already creates, including misinformation, bias, and the concentration of power among a small number of tech companies. They warn that history demonstrates how powerful technologies often increase inequality unless people take steps to manage them.

The authors compare AI to past systems that have shaped decision-making. Markets and bureaucracies have long helped societies organize information, but they also simplify complex realities. For example, market prices summarize vast economic data into a single number, making trade easier but omitting critical details. AI models function similarly, creating condensed versions of human knowledge that lack nuance and context.

Farrell and his co-authors also highlight how AI deepens existing inequalities. These systems determine which facts gain visibility and which voices receive attention, often reinforcing the power of those in control. If left unchecked, AI could consolidate influence among a small group of corporations, just as industrialization once concentrated wealth and decision-making among the few.

Hahrie Han, professor of political science and inaugural director of the Stavros Niarchos Foundation (SNF) Agora Institute at Johns Hopkins, emphasizes the importance of Farrell’s research in understanding modern democracy. “This article is exactly the kind of research we need to make sense of how new technology shapes public life,” she says. “At SNF Agora, we work to create spaces for people to engage in dialogue and take action together. Henry’s work helps us understand how AI influences the way people learn, share ideas, and make decisions. If we want to strengthen democracy, we must understand how technology is changing how we engage with knowledge and power.”

Farrell and his co-authors call for an interdisciplinary approach to AI research that integrates social science, history, and economics with computer science. They argue that the most important question is not whether AI will replace human intelligence but how it transforms knowledge, decision-making, and power. Their work challenges researchers, policymakers, and the public to rethink AI’s role in today’s world.