How Kenyon College Is Rewriting the Rules for AI in the Humanities

GAI Insights Team

Why Kenyon’s AI Lab Matters for the Humanities

At a moment when headlines question the value of liberal arts degrees, Kenyon College is running a bold counter-experiment: putting AI research directly in the hands of humanities and social-science students. Through the Kenyon AI Lab and a decade-long, human-centered AI curriculum, students aren’t just “AI literate”—they’re using modern tools and methods to ask big questions about politics, culture, ethics, religion, health, and work, then turning those questions into real research, collaborations, and portfolios employers can actually see.

Key takeaways:

  • Kenyon has built a human-centered AI curriculum over ten years, rooted in classic liberal-arts questions about the good life, power, and human flourishing.

  • The Kenyon AI Lab gives humanities and social-science students something they rarely get: real, collaborative lab experience and global research partnerships.

  • Students use AI to test theories at scale—analyzing tweets, contracts, sermons, court decisions, and more—after only a few core technical courses.

  • The program blends conceptual depth with practical skills, preparing students to design, build, and critique AI systems across domains like education, health, religion, and politics.

  • Employers don’t just see grades; they see portfolios of original, AI-driven projects that cross disciplines and show initiative, forecasting, and project execution.

  • Kenyon’s model suggests that the strongest AI leaders of the future may emerge from liberal-arts programs that pair many-model thinking with hands-on technical work.

AI Innovation At A Leading Liberal Arts College

An Interview with Professor Kate Elkins

In this Age of AI, the headlines are filled with stories about employers hiring fewer college graduates, especially those from liberal arts programs. It was striking, then, to discover an ambitious initiative at Kenyon College led by Professor Kate Elkins. Below is an interview conducted by email about this amazing program.

What exactly is the Kenyon AI Lab, and why would a liberal arts college have one?

The AI Lab is where Kenyon students work on collaborative research projects led by faculty, often in partnership with researchers at institutions around the world. This differs from our coursework, where students pursue original project-based research, but both are essential parts of the program.


We created the lab because humanities and social science students rarely have opportunities to work on real collaborative projects in college. In STEM fields, undergraduates routinely join labs or secure co-op work experiences. For many traditional liberal arts majors, there are fewer opportunities for this kind of work.

Currently our lab is testing theory of mind in AI systems, exploring capabilities surrounding AI hyper-persuasion, and developing ways to build AI into archival preservation and analysis. As in academia, much industry research remains siloed. Our lab is designed to facilitate cross-disciplinary projects.

How unique is this program?

This spring marks the tenth anniversary of building what we believe is the world’s first human-centered AI curriculum. What makes us distinctive is not only how long we’ve been doing this but how we build on Kenyon’s oldest interdisciplinary traditions of asking fundamental questions about human flourishing, the good life and ideal communities. We’ve added coursework that develops strong technological expertise so students can address today’s urgent debates around AI, but our intellectual foundation shapes everything that follows.

One important shift is that the humanities and social sciences have traditionally been theory-driven, and in the past we didn’t always have ways to test those frameworks empirically at scale. With the help of AI, students can now analyze 316,000 tweets to test theories about political discourse, examine whether Instagram’s evolving terms of service confirm ideas about surveillance capitalism, or investigate whether Supreme Court decisions track with public sentiment, all without years of coding training. By leveraging AI, modern tooling and intuitive Python libraries, students are producing original research after just a few core courses.
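To make this concrete, here is a minimal sketch of the kind of analysis the interview describes: scoring a corpus of tweets and comparing sentiment across groups. The tiny dataset, the party labels, and the hand-built sentiment lexicon are all illustrative assumptions, not Kenyon's actual data or methods; real projects would use far larger corpora and a proper sentiment model.

```python
import pandas as pd

# Toy stand-in for a large tweet corpus (real projects analyze hundreds of
# thousands of tweets); the labels and texts here are invented for illustration.
tweets = pd.DataFrame({
    "party": ["A", "A", "B", "B"],
    "text": [
        "great policy win today",
        "proud of this terrible compromise",
        "awful decision, a total failure",
        "hopeful about the new bill",
    ],
})

# Deliberately crude lexicon: count positive minus negative word hits.
POSITIVE = {"great", "win", "proud", "hopeful"}
NEGATIVE = {"terrible", "awful", "failure"}

def crude_sentiment(text: str) -> int:
    """Score text as (positive hits - negative hits) after basic cleanup."""
    words = [w.strip(",.!?") for w in text.lower().split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

tweets["sentiment"] = tweets["text"].map(crude_sentiment)

# Average sentiment per group -- the shape of a claim a student could then
# test statistically against a theory of political discourse.
by_party = tweets.groupby("party")["sentiment"].mean()
print(by_party.to_dict())
```

The point is the workflow, not the lexicon: once text is in a DataFrame with a per-row score, comparing groups, eras, or sources is a one-line aggregation, which is what makes theory-testing at scale accessible after a few courses.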

We also train students on both conceptual frameworks and real-world practice. How do we build systems that serve human needs, whether for education advocacy, athletic coaching or financial analysis? It takes both creativity and technical know-how. Our student research has been downloaded more than 90,000 times by institutions around the world.

Our collaboration model is also unusual, spanning industry, academia, government and nonprofits. My colleague Jon Chun and I serve as principal investigators for the U.S. Artificial Intelligence Consortium. I recently spoke at a UNESCO webinar and have worked with international nonprofits such as Public AI. We have also partnered with a Meta research collaborative, received funding from the IBM and Notre Dame Tech Ethics Lab and shared our approach at OpenAI’s Higher Ed Forum. I try to model for students what it means to have a seat at the table.

Should liberal arts colleges like Kenyon teach AI?

Absolutely, though many institutions remain focused on what we call “AI literacy.” That is a shifting target, and it often fails to capture what is most exciting about AI. Our students are asking consequential questions. Can chatbots help parents navigate IEP meetings? What narrative structures appear in self-help literature across eras? How can blockchain digital IDs support unhoused individuals? Students are using AI to investigate issues they care deeply about, including education advocacy, religious studies, healthcare, athletics and political discourse.

As AI becomes more influential, the important questions are shifting from how to implement something technically to how to implement it in ways that maximize human flourishing. Consider challenges surrounding the automation of knowledge work. These questions can only be answered by understanding economic and psychological effects along with public policy and regulatory dynamics. A liberal arts tradition that emphasizes connections across fields offers the right foundation to develop thoughtful solutions to AI’s impacts.

These are quintessential liberal arts questions. Our curriculum moves from human-centered computing through machine learning and AI to frontier topics, culminating in original research capstones where students ask: What are the questions that matter, and what methods can I use to answer them?


How have students responded?

Students love it, which often surprises colleagues who assume AI and coding courses must be boring or purely vocational. In fact, our courses are often the only places where students feel encouraged to pursue their own questions in meaningful ways, whether they are analyzing thousands of YouTube sermons to understand Christianity in the Ohio heartland or using AI to predict opioid overdoses.

In the lab, students learn how to contribute to projects larger than themselves. In their individual project-based courses, they get to ask the questions and answer them. Designing these projects can be daunting because there are no guardrails or instruction manuals. But once they dive in, they thrive.

Our biggest challenge is scale. We want to maintain small class sizes and hands-on project mentoring, which means turning away students. When we briefly removed enrollment caps several years ago, the course became one of the largest on campus. We have since returned to more sustainable class sizes.

Many students and parents worry about employability in the Age of AI. What advice do you have for liberal arts students?

If there is one thing I hope students take with them, it is a many-model approach, which is central to the liberal arts. Some disciplines teach a narrow set of models, but if all you have is a hammer, everything looks like a nail. Our students are exposed early and often to multiple ways of thinking and a wide array of models, with the understanding that all models are imperfect but some are useful. Selecting the right model and applying it to a new domain is crucial, whether it’s analyzing character networks in Little Women or the emotional arc of successful Shark Tank pitches. This approach helps students see the world in all its complexity and from multiple angles.

We often hear that employers want liberal arts majors. What employers actually want are the strongest qualities that liberal arts training can develop: creativity, ingenuity, problem-solving and the ability to design and implement high-impact real-world projects. They also value forecasting, which is often easier to measure than “critical thinking.” In practice, that means learning enough code to communicate with engineers as a project manager, or understanding AI well enough to articulate the challenges in regulating a general-purpose technology.

Employers are not going to ask to read a class paper or exam. They will, however, be eager to see a portfolio. A student might say, “I used AI to map best-selling end-of-life narratives onto the Kübler-Ross stages of grief to become a better physician,” or “I created an AI system for swim-technique evaluation because I’m passionate about performance and coaching.” Our students learn to work across traditional boundaries while using AI carefully and thoughtfully. This is far more valuable than basic AI literacy.

Our most successful projects come from genuine curiosity, whether the topic is analyzing TikTok as a new oral tradition or testing AI detectors. What can facial-emotion analysis reveal about decades of Playboy covers? What do human-AI conversations look like in the wild? This combination of conceptual sophistication and quantitative capability is exactly what employers need and what students need to make an impact in the world they are entering.

FAQs: AI and the Humanities at Kenyon College

  1. What is the Kenyon AI Lab, in simple terms?

    The Kenyon AI Lab is a collaborative research space where students work with faculty (and external partners) on real AI projects, often across disciplines. Unlike a single class, the lab runs longer-term studies—on topics like theory of mind in AI, hyper-persuasion, or archival analysis—so students learn how to contribute to something bigger than an individual assignment.
  2. Why does a liberal arts college need an AI lab at all?

    Because AI is reshaping every field the liberal arts care about: politics, religion, health, education, culture, and work. The lab lets students bring humanistic questions—about meaning, power, ethics, narrative—into direct contact with modern AI tools. It’s a way to ensure that the people shaping and critiquing AI are trained to think across disciplines, not just to optimize code.
  3. How is Kenyon’s approach different from basic “AI literacy” courses?

    Many institutions stop at AI literacy—how to prompt a chatbot or use off-the-shelf tools. Kenyon goes further: students learn conceptual frameworks and hands-on methods, then apply them to original research. Instead of “using ChatGPT for homework,” they’re building projects that analyze culture, policy, and behavior at scale, often with Python, modern libraries, and real datasets.
  4. What kinds of projects are students actually doing?

    Projects range widely: mapping political discourse on social media, analyzing Supreme Court opinions, using AI to support IEP advocacy for parents, predicting opioid overdoses, examining TikTok as a new oral tradition, or building AI tools for athletic coaching and performance. The common thread is that students start from genuine curiosity and use AI as a way to answer questions that matter to them.
  5. How does this help with jobs and careers in the Age of AI?

    Students graduate with portfolios, not just transcripts—concrete examples of projects where they framed a problem, chose methods, worked with AI tools, wrangled data, and delivered results. Employers see evidence of creativity, forecasting, cross-functional collaboration, and the ability to communicate with both technical and non-technical stakeholders, all of which map directly onto high-value roles.
  6. Can other liberal arts colleges replicate this model?

    Yes, but it requires more than adding a single “Intro to AI” class. Kenyon’s model depends on three things:
    1. a human-centered conceptual spine
    2. enough technical training for students to do real work
    3. structures—labs, capstones, partnerships—that let students build and ship projects.
    Any liberal arts college willing to invest in those pillars can adapt this approach to its own mission and strengths.
 

Onward,
Paul

