First-Year Students Explore AI Through the Lens of Speculative Fiction—Featuring Visits from Sci-Fi’s Literary Superstars

13 May 2025

Digital collage featuring illustrations from Frankenstein, No Woman Born, and The Lifecycle of Software Objects.

What does it mean to be human in the age of artificial intelligence? As emerging technologies reshape daily life, DH Project Manager Mary Naydan *23 (English) turned to literature for answers. Her first-year seminar, Speculative Fiction: From Pygmalion to ChatGPT (FRS 142), examined how imaginative storytelling in science fiction, fantasy, and horror has long anticipated today’s AI debates.

The course began with classics like Mary Shelley’s Frankenstein (1818), prompting students to consider the ethical responsibilities of creators and ask, “Who should be held liable when a creation causes harm?” Despite its age, Shelley’s novel remains strikingly relevant in discussions of today’s rapidly evolving and largely unregulated AI landscape.

In the same week, students studied the origins of the “Frankenstein Complex,” a term coined by I, Robot author Isaac Asimov to describe fears of AI turning against its creators. They examined how such anxieties shape not only fiction but also real-world discourse, including a 2023 open letter from tech leaders calling for a pause in AI development.

By tracing concerns about artificial intelligence from Ovid’s account of the Pygmalion myth (8 CE) to contemporary works by authors Ted Chiang and Nnedi Okorafor, the course equipped students with the tools to think critically about AI. It empowered them to engage with the technology thoughtfully and be aware of both its possibilities and its ethical implications.

“Today’s students are inundated with AI tools, whether they realize it or not… and it is getting harder and harder to opt out,” says Naydan. Her goal is to help students make informed choices about the role of AI in their lives, as students and beyond.

This spring’s agenda was immersive and certainly one to remember.

Fostering AI literacy through critical engagement

Students built AI literacy through a series of hands-on labs designed to demystify how these systems work and encourage critical reflection. In the first lab, they used Google’s Teachable Machine to train a simple computer vision model to recognize yoga poses, observing firsthand how supervised machine learning relies on data patterns rather than human-like understanding. For example, a model trained only on right-legged tree poses struggled to recognize the same pose with the left leg, revealing the limits of generalization and challenging the tendency to anthropomorphize AI. Subsequent labs explored AI bias and emotional mimicry.
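The generalization failure students observed can be sketched with a toy supervised classifier. This is a stand-in for Teachable Machine's model, not the actual lab code, and the "keypoint" feature values are invented for illustration: a nearest-centroid classifier trained only on right-legged tree poses (positive foot offset) misreads the mirrored left-legged pose, because the numbers, not the meaning, drive the prediction.

```python
# Toy illustration of supervised learning's reliance on data patterns.
# Each "pose" is a made-up feature pair: (lifted-foot x-offset, lifted-foot height).

def centroid(samples):
    xs, ys = zip(*samples)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def classify(sample, centroids):
    # Pick the label whose class centroid is closest (Euclidean distance).
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    return min(centroids, key=lambda label: dist(sample, centroids[label]))

# Training data: tree pose demonstrated ONLY with the right leg lifted
# (positive x-offset), plus a neutral standing pose near the origin.
training = {
    "tree_pose": [(0.30, 0.20), (0.35, 0.22), (0.28, 0.18)],
    "standing":  [(0.00, 0.00), (0.02, 0.01), (-0.01, 0.00)],
}
centroids = {label: centroid(samples) for label, samples in training.items()}

right_leg_tree = (0.32, 0.21)   # matches the training distribution
left_leg_tree  = (-0.32, 0.21)  # same pose mirrored: negative x-offset

print(classify(right_leg_tree, centroids))  # recognized as tree_pose
print(classify(left_leg_tree, centroids))   # misclassified as standing
```

The mirrored pose is identical to a human observer, but it sits closer to the "standing" centroid in feature space, so the model gets it wrong, the same limit of generalization the yoga-pose lab surfaced.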

Students training a model to recognize yoga poses using Teachable Machine. (Photos: Mary Naydan)

Students also honed their prompt engineering skills by tasking ChatGPT with generating a hypothetical tenth story for I, Robot, then critically evaluating the outcome. In another session, they explored Sudowrite—an AI tool designed for creative writers—to examine the potential advantages and limitations of AI-assisted storytelling.

Later in the semester, during a unit on climate, students visited Princeton’s High Performance Computing Research Center (HPCRC), which houses the campus’s AI research infrastructure. The trip offered a concrete sense of AI’s environmental footprint and of how the scale of Princeton’s graphics processing units (GPUs) compares to that of tech giants like Meta, Tesla, and Google. As a LEED-certified facility, HPCRC demonstrates one model of sustainable infrastructure, and it left an impression, not least because students were fascinated by its eco-friendly lawn care: sheep mow the grass!

“Visiting the HPCRC was a fascinating experience that deepened my understanding of high-performance computing and its role in cutting-edge research,” says Chinmayi Ramasubramanian ’28. “I have often used the Adroit computing cluster in class, and coming in, I was excited to see the actual hardware that powered my computations.”

Students on a tour of Princeton’s High Performance Computing Research Center. (Photos: Carrie Ruddick)

In preparation, students read Kate Crawford’s Atlas of AI chapter “Earth,” which challenges the illusion of a “green” tech industry. They learned that AI systems rely on massive computational power and energy, often hidden behind the metaphor of “the cloud.” The visit, paired with discussion, helped students connect abstract environmental critiques to the tangible materiality of AI infrastructure and to consider how AI’s growth drives both energy consumption and resource extraction globally.

“Seeing the physical hardware up close and learning about how everything runs made the complexity of these systems feel so much more real,” states Quinn Challenger ’28.

Exploring rooms and machines at Princeton’s High Performance Computing Research Center. (Photos: Carrie Ruddick)

Discussing AI with award-winning fiction writers

A highlight of the course was the opportunity for students to engage directly with two of today’s most influential speculative fiction authors, Ted Chiang and Nnedi Okorafor, both of whom visited the class before giving public lectures as part of CDH’s Humanities for AI series. Naydan emphasized the value of these visits: “It is one thing to talk abstractly about the relationship between AI and creative writing; it is another to hear from creatives directly—what they think about AI, how they use it (or choose not to), and why.”

Ted Chiang signs a student's book during the class. (Photo: Carrie Ruddick)

To prepare for the visits, students read and discussed each author’s work. Chiang’s work (The Lifecycle of Software Objects and nonfiction essays in the New Yorker) prompted conversations about the ethical implications of developing AI within capitalist systems and about ChatGPT as an impediment to developing “cognitive fitness” in college. Okorafor’s work (Death of the Author and “Abracadabra”) inspired discussion of AI’s positive potential to support people with disabilities and improve healthcare.

“There was an electric atmosphere to the conversations,” Naydan recalls. “I was so impressed by the students’ willingness to think deeply, take risks, and have fun with ideas.”

Students pose with Ted Chiang in class. (Photo: Carrie Ruddick)

The visits left meaningful impressions on the students.

“Being a student-athlete, [Okorafor’s] story about her experience as an elite athlete who turned a career-ending injury into a path toward writing resonated with me,” says Nathan Banos ’28. “It reminded me that it’s not about what happens to you, but how you respond. Their visits to our classroom were the perfect way to immerse ourselves in the world of both speculative fiction and AI, creating a unique learning environment and lasting memory for me.”

Stephanie Ko ’28 enjoyed the thoughtful balance in the course’s exploration of AI’s technological foundations and its human-centeredness, to which the visits added depth. “I was honestly awed to see how Ted Chiang simply sees and fulfills the need to be a well-informed member of the perpetual discussion regarding the future of AI,” she says, “which is a trait that I think many of us will aspire to develop and carry with us through our education at Princeton.” Of Okorafor, she shares, “She was incredibly transparent that her hopes for AI stem from her personal struggles and frustrations, and I think this perspective was a refreshing reminder that the development of AI and the role we allow it to have is a fundamentally human problem.”

Students pose for a photo with Nnedi Okorafor (at center). (Photo: Carrie Ruddick)

What comes next?

“If my students take just one thing away from my class, I hope it is the idea that AI is not objective, perfect, or neutral,” says Naydan. “It is fallible, because it’s only as good as its training data; it encodes and perpetuates human biases; and it is complicated in how it can help and hurt humanity, often simultaneously.”

She also noted that by the end of the course, students had found no single answer to how literature responds to technological change in our society. Instead, they encountered a range of creative approaches: Okorafor imagines the technologies she hopes society will build; Kai-Fu Lee and Chen Qiufan use “scientific realism” to depict core concepts of machine learning; Terence Taylor explores the exploitation of labor in AI-driven workplaces through horror; and Philip K. Dick uses anthropomorphized AI to probe deeper moral and existential questions about what it means to be human.

Ultimately, the course underscored that literature has long been a tool for making sense of the world. As AI continues to reshape society, storytelling not only helps us understand emerging realities—it also offers a means to imagine and influence what comes next.

CDH offers many unique courses for undergrads. View past and future courses here.

Be sure to check out Dr. Naydan’s next available course, Project Management 101 (back by popular demand), at the next Wintersession on January 20, 2026. Browse last year’s slide deck here. Don’t hesitate to sign up when registration opens in December; there was a waitlist last year!

Related posts

Ted Chiang explores “incompatibilities” between generative AI and art

On March 18, the multi-Hugo-award-winning science fiction author and 2023 TIME100 Most Influential Person in AI lectured at Princeton University, laying out what he described as...
