Human and Machine Intelligence in Networks of Early Modern Print: Q&A with John Ladd

Humanities for AI

8 December 2025


John Ladd presents the first Modeling Culture talk in September. (Photo: Carrie Ruddick).

The CDH’s Humanities for AI initiative, launched in fall 2024, has presented a range of events, projects, and conversations, including this year’s Modeling Culture program, exploring how humanistic values and approaches are crucial to the development, use, and interpretation of AI.

Continuing our Q&A series where we share perspectives on the impact of AI on humanities scholarship, we welcomed John Ladd (Assistant Professor, Department of Computing and Information Studies, Washington & Jefferson College) to respond to some questions after his talk in September. In “Human and Machine Intelligence in Networks of Early Modern Print,” he investigated how artificial intelligence and other computational approaches can help us to understand the distant past.

Your work bridges early modern literature and computational methods. How does your research and teaching inform your understanding of “Humanities for AI”?

In my research, I frequently apply computational methods and digital tools to early modern book history and literature. I teach in an interdisciplinary computing program where I show students how to apply humanities methods and objects of study to data science and the history of technology. It’s this back-and-forth exchange, of using technology in the humanities and using the humanities to understand technology, that the digital humanities has long stood for and that can help us frame the humanities’ relationship to AI. Humanities scholars continue to demonstrate the value of interrogating AI ethically, critically, and in historical context, and I believe that we’re starting to see the ways large language models might be used, with sensible guardrails, as research tools as well as research subjects.


Impressio Librorum / Book Printing, 16th-century engraving by Theodoor Galle after a drawing by Johannes Stradanus, c. 1550

In your talk, you noted that human-machine interaction has been happening since letterpress printing itself, using an engraving of an early modern print house to illustrate “the merger of different kinds of expertise” and drawing attention to the human labor behind seemingly “magical” new technologies. How does thinking about this historical precedent shape your approach to contemporary AI tools in humanities research?

I tend to focus on continuities between contemporary AI/large language models and a long history of technological change. The idea that AI is a revolutionary break from all previous technologies is a narrative that serves particular interests. I prefer narratives that put AI and LLMs into social and cultural contexts, like the recent frameworks of “AI as cultural and social technologies” and especially “AI as Normal Technology.” For humanities researchers and everyday users, it’s advantageous to think of AI as part of a long history of text technologies, going back to the printing press and before. This is why I emphasize LLMs’ ties both to decades of work in the digital humanities and to a broader view of the history of technology. This view helps us to see the many layers of human labor that have gone into AI, and the ways AI is still reliant on human expertise, just as other similar text technologies have been.

You’ve worked extensively on building research tools for humanists, from Network Navigator to the EarlyPrint Bibliographia. Now that LLMs have entered the picture, how has the landscape of digital humanities “tool building” changed? What ways have you found to engage with LLMs in your development work, with or beyond the chatbot?

An LLM itself is a new kind of tool, but it’s also a way to make tool-building more accessible and inviting. AI coding assistants can make it less intimidating to build a custom tool or website for a personal research project. There’s still a lot of value, and necessity, in learning how code works and how to make code work for you, but I am hopeful that LLMs will lower the barrier to entry and make the process more approachable. The novelist and experimental coder Robin Sloan has used the phrase “an app can be a home-cooked meal” to describe the empowering process of taking toolmaking into your own hands and building something just for yourself or for a small group. These bespoke tools, rather than large-scale apps, are often exactly the kind researchers really need, and LLMs may open the door for more of them to be built.


John Ladd at the CDH, September 2025. (Photo: Carrie Ruddick)

Your work on local LLMs (with Melanie Walsh et al.) emphasizes privacy and sustainability for humanities AI research. Can you talk about why running models locally matters for humanistic inquiry? What are the technical and ethical considerations that led you to focus on this approach?

It’s essential for humanities researchers to find ethical ways of working with LLMs that respond to the many valid critiques of this technology. Working with LLMs locally reduces their environmental impact: instead of processing your query at a massive data center, your own regular computer hardware can run the task, saving water and energy. The prompt and data also never leave your computer, which makes the whole interaction more private and keeps your material out of corporate hands. While not every humanities research task can be run this way, many of the datasets and research questions that humanists use are at a scale that a local LLM can process. Local models give you more control over the entire process, and they make the task more replicable, so the work can be verified and reviewed. Both technically and ethically, I think local LLMs are a great path forward for many folks who want to work with this technology, and I’m working to make sure more people know this is an option!
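For readers who want to try this, here is a minimal sketch (not from the talk) of running a small open-weights model entirely on local hardware with the Hugging Face transformers library. The model name and prompt are placeholders; the same pattern works with other local runners such as Ollama or llama.cpp.

```python
# Minimal sketch: a local text-generation pipeline with Hugging Face transformers.
# The model name and prompt below are placeholders; any small instruction-tuned
# open-weights model you have downloaded will work the same way, and nothing in
# this script leaves your machine.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-0.5B-Instruct",  # placeholder: swap in any small local model
    device_map="auto",                   # GPU if available, otherwise CPU
)

prompt = "In one sentence, summarize this imprint: 'Printed for Humphrey Moseley, 1645.'"
result = generator(prompt, max_new_tokens=80, do_sample=False)
print(result[0]["generated_text"])
```

Because the weights, the prompt, and the output all stay on one machine, a script like this can be versioned and rerun alongside the rest of a research project, which is part of what makes local work easier to verify and review.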

In the case study you presented, examining whether early modern printers produced books on the same subjects over their careers, you combined text classification, network analysis, and data visualization. What did machine learning reveal that traditional bibliographic methods might have missed—and vice versa? How does a “mesoscopic” approach help you navigate moments where computational and analog approaches yield different insights?

Many early modern book historians have conducted studies of particular printers or groups of printers and publishers, examining their output to determine continuities in subject or genre. In my talk, I used the example of the publisher Humphrey Moseley’s reputation as one of the preeminent publishers of literary and poetic works. This kind of close analysis is ideal for traditional bibliographic methods, but bibliography is also interested in the large-scale question of whether publishers like Moseley (and the printers who worked with him) are outliers or part of a larger pattern. The machine learning methods I used are very good at finding patterns across tens of thousands of texts, which would be difficult or impossible to do by hand, and that is how I was able to establish that printers do tend to be consistent in their output over time. As Chris Warren and Martin Mueller have each argued, what computational methods can do is let us connect the general pattern to the particular case, in this instance showing that the observed patterns for specific stationers carry through to larger trends in Restoration printing.
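As a purely illustrative sketch, and not the code behind the study, the kind of subject-classification step described here can be approximated with TF-IDF features and a linear classifier in scikit-learn. The texts and subjects lists below are hypothetical placeholders for a labeled corpus of early modern books.

```python
# Illustrative sketch only (not the author's pipeline): classify book texts by
# subject with TF-IDF features and logistic regression, then check performance
# on a held-out split. `texts` and `subjects` are hypothetical placeholder data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

texts = (["A sermon upon the mercies and judgments of God."] * 20
         + ["A pastoral poem of nymphs, shepherds, and the spring."] * 20)
subjects = ["religion"] * 20 + ["poetry"] * 20

X_train, X_test, y_train, y_test = train_test_split(
    texts, subjects, test_size=0.25, stratify=subjects, random_state=42
)

model = make_pipeline(
    TfidfVectorizer(sublinear_tf=True, min_df=1),
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```

In a study like the one described, the predicted subjects could then be aggregated per printer and compared over a career, with network analysis and visualization picking up from there.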

What is your greatest concern and biggest hope for the future of AI in humanities scholarship?

The biggest concern for me is the corporate logic that underlies the ways AI is being sold to and adopted by the general public, a logic that drives many of AI’s most troubling qualities: environmental problems, intellectual property problems, and labor problems. This is also the ideology that sets AI up in opposition to, or as a replacement for, the arts and humanities. We should resist easy narratives that conclude that AI should write for you, make art for you, or do your job for you. Many humanities scholars have already begun the important work of pushing back on these narratives and making it clear that AI doesn’t stand apart from other technologies and shouldn’t be exempted from ethical and legal critique.

But one hope I have is that, by properly contextualizing AI within the history of text technologies, we can shed more light on the amazing work being done with natural language processing and text analysis in the humanities. AI has made more people aware of text analysis and machine learning, to a degree I never would have thought possible a few years ago. Digital humanities scholars who’ve been doing this work for years have a chance to share their expertise with a wider audience and help craft new narratives around large language models that might move us past the current era of corporate chatbots. My colleagues in the Modeling Culture seminar are producing some incredible scholarship that merges LLMs and the humanities, and my hope is that more people will seek out and learn from this work.

About John Ladd

John Ladd is an assistant professor in the Department of Computing and Information Studies at Washington & Jefferson College. He teaches and conducts research on the use of data across a wide variety of domains, especially in cultural and humanities contexts, as well as on the histories of information and technology. Building on an English literature background in which he studied the intersection of computational methods and early modern print culture, his work includes large-scale digital humanities projects such as Six Degrees of Francis Bacon and EarlyPrint. He has published essays and web projects on humanities data science and cultural analytics, computational bibliography, the history of data, and network analysis.

Related posts


Modeling Culture: New Humanities Practices in the Age of AI

A year-long seminar for faculty and grads with a public lecture series, culminating in a comprehensive and accessible curriculum for advanced humanities researchers.


Humanities for AI

Foregrounding the centrality of the humanities in the development, use, and interpretation of the field broadly known as “AI”


More Humanities for AI Q&As


AI and Ways of Seeing: Q&A with Lauren Tilton

12 November 2024

Lauren Tilton, Carrie Ruddick, Natalia Ermolaev
