Is AI an asset to David or to Goliath? An interview with the public historian Jason Steinhauer
The future seems increasingly shaped by artificial intelligence. AI can be seen as a tool for democratizing knowledge, but who really benefits from these tools? Who gets to shape the narrative? Jason Steinhauer researches the impact of technology on society: on how we understand ourselves, our past, and the very fabric of truth. In this interview, he issues a warning: AI is shaped primarily by commercial imperatives. Without meaningful input from historians, scholars and ethicists, this powerful technology risks reinforcing the inequalities and distortions already embedded in the algorithms that drive much of what we read and watch. Our digital future may depend on whether we treat AI as an inevitability or as a human creation that we can question and redirect.

As a historian, how did you first get interested in the world of technology?
My background is in the humanities. I actually started out in the museum world: I used to curate museum exhibits. So I come from the world of public history. My work has always been very public-facing. I’ve been interested in how general audiences interact with history, learn about the past, and what the consequences of their understandings or misunderstandings of the past may be.
In particular, I've become very interested in the effect that technology has on people's understandings of the past and of history. I wrote a book about how social media shapes what people know, or what they think they know, about history. And more recently, I’ve been exploring how artificial intelligence can or may shape people's perceptions of the past and also their perceptions of what they may know about the world more broadly.
How did you realize that it was worth turning to artificial intelligence in your work?
I became interested in this topic while working at the Library of Congress, the largest library in the world with over 170 million items. It was striking that people trusted information on their phones more than the vast collections in the library. This made me curious about why some historical narratives become popular on social media while others don’t. Why do we give so much weight and trust to the information we encounter on our devices? And what are the consequences of doing that?
While working on my 2021 book, History, Disrupted: How Social Media and the World Wide Web Have Changed the Past, I interviewed many people from Silicon Valley and noticed that they were increasingly focused on artificial intelligence rather than social media. That made me realize that AI was becoming a major issue, and I’ve been studying it ever since – for about five years now.
What role do you see for the humanities in shaping the ethics and policies surrounding artificial intelligence?
I don’t see artificial intelligence as a particular piece of software or a tangible object. I see AI as a goal. More specifically, it's the real-world manifestation of the goal to use human-designed technological systems to simulate intelligence.
Often, that means simulating human intelligence – but not always. There are programs, for instance, that aim to simulate the intelligence of whales or dolphins. And it’s easy to imagine a future where we might try to model extraterrestrial or alien forms of intelligence. So it doesn’t have to be human intelligence, even though that’s been the main focus so far.
If AI isn’t a concrete "thing" but rather a goal we’re actively working toward, then we have to ask: Who is shaping that goal? What is driving it?
This goal has largely been shaped by commercial imperatives, scientific interests, and technological possibilities, rather than by philosophical or humanistic imperatives. Of course, you could make an argument that if we expand the idea of who counts as a philosopher or a humanist, or a historian, the picture gets a little more complex. You could argue, for example, that someone like Stanley Kubrick, with the film 2001: A Space Odyssey and his AI version of HAL, was a kind of philosopher and humanist. He obviously has had an impact on the way AI has been shaped and developed.
My sense is that the role of humanities scholars, historians, philosophers and ethicists has largely been reactive. Basically, advances are made towards this goal of manifesting real-world intelligence via machinery, and then humanists, philosophers and ethicists respond to those advances in some way. Usually those responses express some concern or hesitancy, or try to encourage technology companies or commercial entities to do things a little bit differently.
I don't get the sense that many humanists are in the room with entrepreneurs, investors, or technologists when decisions are being made about how specific applications are built or about the assumptions that go into them.
I think all of us in the humanities like to think that would be possible. But when the imperative is largely commercial – i.e., to increase efficiency and speed so that you can have fewer employees and a higher annual return – it doesn't leave a lot of space for humanistic inquiry and critical thinking.
That’s why I think the role of the technological critic, or at least the person trying to help others “think around corners” when it comes to AI, is very important. But it hasn't really slowed the technology down, which then calls into question how much efficacy we actually have. If there are ways that we can be more proactively involved, as opposed to just reactively involved, that would be great.
Back when AI first started appearing – even just in science fiction, or in the early glimpses of machine intelligence – there was a hope that maybe we were on the verge of solving something profound. This technology might provide a shortcut to finally unlocking the riddle of the human mind, the mystery of consciousness, or the nature of knowledge. But, as you pointed out, neither philosophers nor humanists got there first. Corporations did. Instead of helping us solve “the ultimate question of life, the universe, and everything”, as Douglas Adams put it, algorithms are built to boost customer engagement and optimize profits.
My inclination is to say that all knowledge has to be historically located. We know certain things because we exist in specific time periods and because certain things have been passed down to us. Certain influences and circumstances in our lives also contribute to the totality of what we think we know.
People in the 17th century knew different things than we know in the 21st century. And we know things in the 21st century that they could never have known. But I agree with you that the way artificial intelligence has developed and evolved, at least in the commercial space, is not like The Jetsons or 2001: A Space Odyssey. It’s more like Microsoft Word on steroids. There is, of course, a place for AI-powered technology to generate transcripts so that humans don’t have to do it. The innovations and elements of AI that are creeping into all of our lives on a regular basis aren’t necessarily poetic or exciting, and they may not get us closer to answers to deep existential questions, but they do have practical applications.
The larger questions AI poses are harder: how it reinforces inequities and biases, and the environmental consequences of building out AI infrastructure. Those are the types of questions that would benefit from historians, philosophers and humanists getting involved.
However, once you start digging into these questions, they become very difficult and murky for corporations and governments to grapple with. One of the challenges we all face is: How deeply do we want to ask questions about this technology? How critical will we allow ourselves to be? Will we allow these technologies to perpetuate existing inequalities and inefficiencies, or will we try to engineer them to solve those persistent problems? That remains an open question.
How does the development of AI affect the professional responsibility of historians in today's information environment?
The responsibility of professional historians remains the same: to accurately and honestly understand what happened in the past, why it happened, and how it affects us today. With AI, this responsibility becomes even more important, since AI can generate and spread misinformation and propaganda at unprecedented speed.
Is there still a market for professional historians? Will there be funding to enable that type of work? Or is the general public very happy to live in a world where the propaganda and disinformation that they see simply aligns with what they want to believe, regardless of whether it's accurate or not?
It’s crucial to uphold accuracy and truthfulness. Unfortunately, in recent years, funding for history, journalism and the humanities has declined significantly. Without strong investment in media literacy and education, propaganda can spread quickly and become accepted as truth. Recommendation algorithms worsen this by reinforcing what grabs our attention, creating feedback loops that can push people toward extremist or conspiratorial views, which have real-world consequences.
This is a huge responsibility, but historians and humanists cannot address this alone. It requires broad coalitions, public and philanthropic funding, and a public that values accurate, informed knowledge.
Can artificial intelligence help establish ethical norms or counter disinformation, and who actually holds power over AI tools today?
The answer is unknown at this point. There are certainly scholars who are trying to use artificial intelligence in that respect. I've been thinking a lot about whether AI, as it's currently constructed, is more of an asset to David or more of an asset to Goliath. And my instinct at the moment is to say that it's more of an asset to Goliath.
AI has the power to reinforce rather than undermine inequities – in part because, while individual applications of AI may be inexpensive, the actual infrastructure of AI is hugely expensive. So only a handful of countries are actually able to sustain it: the US, China, and a couple of others.
The data servers, the cooling, the resources, the chips, the stack, the engineering talent – all of that is hugely expensive. The concentration of power in the creation of AI has really settled at the top. And those major players and major actors are dictating how things are functioning all the way down, all the way through.
I worry about how that dynamic will play out when it comes to things like disinformation. Russia, for example, spends anywhere between one and two billion dollars per year on disinformation and propaganda. Take that imposing figure and the infrastructure they’ve built over twenty years, add artificial intelligence to that, and it seems even more daunting to counteract. If they’re Goliath, what chance does that give David?
Terrorist organizations around the world right now have “Chief AI Officers”. That thought alone should be very sobering for all of us. And many of these terrorist organizations are very well financed – by rogue state actors, through cryptocurrency, or in any number of other ways.
Individual historians and scholars are experimenting with AI and its applications, trying to use them in interesting ways, to further the values and goals of those professions. I just worry that those efforts will pale in comparison to the nation-states, organizations and corporations that already have much more power, bandwidth and resources – and who will only grow stronger and more robust with these AI applications at their disposal.
The agency and responsibility of artificial intelligence are much discussed. Traditionally, when an accident happens, say, when someone is killed on a factory floor by a machine, we don’t blame the machine itself. Responsibility usually falls on the people who built, maintained or supervised it. The machine is seen as a tool, not as an agent. But what happens when AI goes beyond mechanical function and begins to influence human decision-making? For example, if an AI system somehow convinces someone to harm themselves. Can we meaningfully talk about AI being responsible for that outcome? And if so, what does that actually mean when AI can’t be punished or rewarded like a human being?
These are hard questions. Part of me is a bit cautious about anthropomorphizing AI. I think concepts like agency belong to human beings (or you can extend that to living, sentient beings). AI is not that. It’s not a living, sentient being.
Going back to my definition of artificial intelligence, I see it as a goal. The goal is using technological parts – chips, computers, CPUs, data farms – to simulate human intelligence. It seems to me that the responsibility lies with the human beings who are striving for that goal. If you’re striving for the goal of using AI to make cars drive by themselves in a way that resembles how humans drive, then the responsibility lies with the company or the technologists who are pursuing that goal, not with the car itself.
I don’t think we can let human beings off the hook as AI applications continue to grow in sophistication and ability. In fact, it’s probably the opposite: we probably need to hold humans more responsible. That way we stand a better chance of getting a more ethical AI future than the one we’re currently heading toward.
In everyday life, it can be tempting to forget that AI like ChatGPT is just a machine. Even when we know this, we often find ourselves saying “please” and “thank you”, as if we’re having a real conversation. The way AI uses language and constructs rational sentences makes it seem human. Psychologically, it becomes difficult to treat it as just a piece of technology.
That's part of its allure. This is also part of the AI literacy and media literacy challenges that we face. There’s a need to remind people about what's actually going on below the surface, and to teach people how these applications work to the best of our abilities. We need to give people a little bit of “X-ray vision” to allow them to see through and behind what's being presented to them at face value. This is a big contribution that humanists, historians and philosophers make.
Do you see a risk that AI systems might overshadow or displace other ways of understanding the world? Like an “oracle” or all-knowing entity that limits alternative perspectives? It seems like we don’t even google things anymore, we just ask AI directly.
OpenAI would love it if everybody just used ChatGPT and stopped using Google. I think capturing that market share is what they have their sights set on. It’s worth recognizing just how expansive AI has become. We tend to associate it primarily with large language models and tools like ChatGPT, but AI is being used everywhere – in healthcare, air travel, education, government, finance. We often think of AI too narrowly, especially in contexts where we're focused on humanistic knowledge.
That said, I don’t think there will be one single source of truth – some supreme intelligence at the center of the universe broadcasting infallible knowledge for us to blindly accept. The questions surrounding AI become more compelling as the context becomes broader and more detailed. Take air traffic control: what are the consequences of airline companies using AI to plan routes? Could it improve passenger safety, increase fuel efficiency, reduce CO₂ emissions? In that case, using AI as a kind of centralized source of truth might actually have real, measurable benefits.
But in other fields, like history, that becomes more problematic. History is an interpretive discipline. It's not just about knowing facts; it's about how we interpret them. And so far, large language models haven’t proven particularly capable when it comes to nuanced, interpretive historical analysis.
So the jury is still out. AI may significantly benefit certain professions or communities, and those benefits may trickle down to society more broadly. If, for instance, AI helps find new treatments for cancer or Alzheimer’s, then we might all benefit. In such areas, we might come to revere AI as a kind of oracle. But in other areas, AI may remain unreliable, not rigorous or critical enough to be taken seriously, no matter how much the developers want us to.
Can artificial intelligence generate truly new knowledge, or is it just a way of reorganizing and building on what humanity already knows? Could it ever have something like a “spark of genius” – the kind of insight that leads to groundbreaking discoveries, like penicillin? Or is AI simply a helpful tool that supports scientists, doctors and professionals, without actually becoming an inventor in its own right?
Over the past twenty years, we’ve created this enormous corpus of data, and much of it is impenetrable to most people. So we need mechanisms for sorting through that information and making sense of it. AI could be a valuable tool in that regard.
Another thing AI might help us do is provoke new questions that we have never thought of asking. In answering them, we might arrive at unexpected and meaningful discoveries. If AI applications can do that, then they could bring a real net benefit to humanity.
I’ve compared AI to the invention of the optical lens. The optical lens gave us glasses, which help in our everyday lives. But beyond that, lenses led to microscopes, which revealed an entire layer of existence we hadn’t known about – microbes, pathogens, diseases – and raised entirely new scientific questions, including those about vaccination. The same is true for telescopes: they let us look up and see the universe in ways we couldn’t before. That opened up questions we had never thought to ask about space, planets, stars, and life beyond Earth.
I wonder if AI might have a similar impact. In combination with the human mind – with our creativity, brilliance and ingenuity – AI might help us ask new kinds of questions, or see patterns that were invisible to us before. What could come from that? That’s the exciting part.
Public concerns surrounding artificial intelligence relate to things like copyright, authenticity, and whether AI generates truly new knowledge or just reshapes existing ideas. But are there deeper or less visible concerns that people might not be fully aware of yet?
That’s what I’m planning to explore in my next book. I share your intuition: a lot of the current conversation – even the discussions that include humanists, historians and other scholars – has, at times, remained on the surface. Some of it has started to go deeper, beginning to uncover broader structural issues and systemic inequities that need attention. But I believe there’s another layer, one that we haven’t yet reached, in terms of understanding what this technology is doing to us, and what it might do to us in the future.
Copy-edited by Larissa Babij.