At a moment when headlines question the value of liberal arts degrees, Kenyon College is running a bold counter-experiment: putting AI research directly in the hands of humanities and social-science students. Through the Kenyon AI Lab and a decade-long, human-centered AI curriculum, students aren’t just “AI literate”—they’re using modern tools and methods to ask big questions about politics, culture, ethics, religion, health, and work, then turning those questions into real research, collaborations, and portfolios employers can actually see.
Key takeaways:
Kenyon has built a human-centered AI curriculum over ten years, rooted in classic liberal-arts questions about the good life, power, and human flourishing.
The Kenyon AI Lab gives humanities and social-science students something they rarely get: real, collaborative lab experience and global research partnerships.
Students use AI to test theories at scale—analyzing tweets, contracts, sermons, court decisions, and more—after only a few core technical courses.
The program blends conceptual depth with practical skills, preparing students to design, build, and critique AI systems across domains like education, health, religion, and politics.
Employers don’t just see grades; they see portfolios of original, AI-driven projects that cross disciplines and show initiative, forecasting, and project execution.
Kenyon’s model suggests that the strongest AI leaders of the future may emerge from liberal-arts programs that pair many-model thinking with hands-on technical work.
An Interview with Professor Kate Elkins
In this Age of AI, the headlines are filled with stories about employers hiring fewer college graduates, especially those from liberal arts programs. It was striking, then, to discover an ambitious initiative at Kenyon College led by Professor Kate Elkins. Below is an interview conducted by email about this amazing program.
The AI Lab is where Kenyon students work on collaborative research projects led by faculty, often in partnership with researchers at institutions around the world. This differs from our coursework, where students pursue original project-based research, but both are essential parts of the program.

We created the lab because humanities and social science students rarely have opportunities to work on real collaborative projects in college. In STEM fields, undergraduates routinely join labs or secure co-op work experiences. For many traditional liberal arts majors, there are fewer opportunities for this kind of work.
Currently our lab is testing theory of mind in AI systems, exploring capabilities surrounding AI hyper-persuasion, and developing ways to build AI into archival preservation and analysis. Like academia, much industry research remains siloed. Our lab is designed to facilitate cross-disciplinary projects.
This spring marks the tenth anniversary of building what we believe is the world’s first human-centered AI curriculum. What makes us distinctive is not only how long we’ve been doing this but how we build on Kenyon’s oldest interdisciplinary traditions of asking fundamental questions about human flourishing, the good life and ideal communities. We’ve added coursework that develops strong technological expertise so students can address today’s urgent debates around AI, but our intellectual foundation shapes everything that follows.
One important shift is that the humanities and social sciences have traditionally been theory-driven, and in the past we didn’t always have ways to test those frameworks empirically at scale. Now, with the help of AI, students can analyze 316,000 tweets to test theories about political discourse, examine whether Instagram’s evolving terms of service confirm ideas about surveillance capitalism, or investigate whether Supreme Court decisions track with public sentiment. They can now do this without years of coding training. By leveraging AI, modern tooling and intuitive Python libraries, students are producing original research after just a few core courses.
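The kind of tweet-scale analysis described above often begins with only a few lines of Python. The sketch below is purely illustrative, not one of the Kenyon projects: the tweets, keyword lexicon, and scoring scheme are invented stand-ins for a real dataset and a real sentiment model, but the shape of the work (tokenize, tag, aggregate) is what lets students test a framing theory across hundreds of thousands of posts.

```python
from collections import Counter
import re

# Hypothetical sample standing in for a large tweet dataset
tweets = [
    "The new policy is a disaster for working families",
    "Great news: the policy passed with bipartisan support!",
    "Another disaster of a debate. No substance, just anger.",
]

# Illustrative lexicon for a simple framing analysis (a real project
# would use a validated sentiment model or lexicon instead)
NEGATIVE = {"disaster", "anger", "crisis"}
POSITIVE = {"great", "support", "hope"}

def tokenize(text: str) -> list[str]:
    """Lowercase and split a tweet into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def framing_counts(corpus: list[str]) -> Counter:
    """Tally positively and negatively framed words across the corpus."""
    counts = Counter()
    for tweet in corpus:
        for token in tokenize(tweet):
            if token in NEGATIVE:
                counts["negative"] += 1
            elif token in POSITIVE:
                counts["positive"] += 1
    return counts

print(framing_counts(tweets))
```

The same loop runs unchanged whether the list holds three tweets or 316,000, which is why a few core courses are enough to get students producing results at scale.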
We also train students on both conceptual frameworks and real-world practice. How do we build systems that serve human needs, whether for education advocacy, athletic coaching or financial analysis? It takes both creativity and technical know-how. Our student research has been downloaded more than 90,000 times by institutions around the world.
Our collaboration model is also unusual, spanning industry, academia, government and nonprofits. My colleague Jon Chun and I serve as principal investigators for the U.S. Artificial Intelligence Consortium. I recently spoke at a UNESCO webinar and have worked with international nonprofits such as Public AI. We have also partnered with a Meta research collaborative, received funding from the IBM and Notre Dame Tech Ethics Lab and shared our approach at OpenAI’s Higher Ed Forum. I try to model for students what it means to have a seat at the table.
Absolutely, though many institutions remain focused on what we call “AI literacy.” That is a shifting target, and it often fails to capture what is most exciting about AI. Our students are asking consequential questions. Can chatbots help parents navigate IEP meetings? What narrative structures appear in self-help literature across eras? How can blockchain digital IDs support unhoused individuals? Students are using AI to investigate issues they care deeply about, including education advocacy, religious studies, healthcare, athletics and political discourse.
As AI becomes more influential, the important questions are shifting from how to implement something technically to how to implement it in ways that maximize human flourishing. Consider challenges surrounding the automation of knowledge work. These questions can only be answered by understanding economic and psychological effects along with public policy and regulatory dynamics. A liberal arts tradition that emphasizes connections across fields offers the right foundation to develop thoughtful solutions to AI’s impacts.
These are quintessential liberal arts questions. Our curriculum moves from human-centered computing through machine learning and AI to frontier topics, culminating in original research capstones where students ask: What are the questions that matter, and what methods can I use to answer them?

Students love it, which often surprises colleagues who assume AI and coding courses must be boring or purely vocational. In fact, our courses are often the only places where students feel encouraged to pursue their own questions in meaningful ways, whether they are analyzing thousands of YouTube sermons to understand Christianity in the Ohio heartland or using AI to predict opioid overdoses.
In the lab, students learn how to contribute to projects larger than themselves. In their individual project-based courses, they get to ask the questions and answer them. Designing these projects can be daunting because there are no guardrails or instruction manuals. But once they dive in, they thrive.
Our biggest challenge is scale. We want to maintain small class sizes and hands-on project mentoring, which means turning away students. When we briefly removed enrollment caps several years ago, the course became one of the largest on campus. We have since returned to more sustainable class sizes.
If there is one thing I hope students take with them, it is a many-model approach, which is central to the liberal arts. Some disciplines teach a narrow set of models, but if all you have is a hammer, everything looks like a nail. Our students are exposed early and often to multiple ways of thinking and a wide array of models, with the understanding that all models are imperfect but some are useful. Selecting the right model and applying it to a new domain is crucial, whether it’s analyzing character networks in Little Women or the emotional arc of successful Shark Tank pitches. This approach helps students see the world in all its complexity and from multiple angles.
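The many-model approach is concrete in practice: the same simple co-occurrence model can be carried from a novel to a pitch transcript. The sketch below is a hypothetical illustration of a character-network pass; the sentences are invented stand-ins for the full text of Little Women, and a real analysis would handle nicknames and pronoun resolution.

```python
from collections import Counter
from itertools import combinations
import re

# Character list for the network (illustrative subset)
CHARACTERS = {"Jo", "Meg", "Beth", "Amy", "Laurie"}

# Invented snippets standing in for the full novel
sentences = [
    "Jo and Meg hurried down to breakfast.",
    "Beth played softly while Amy sketched by the window.",
    "Laurie waved to Jo from the garden.",
]

def character_edges(sents: list[str]) -> Counter:
    """Count how often each pair of characters shares a sentence,
    which yields the weighted edges of a character network."""
    edges = Counter()
    for sent in sents:
        present = sorted(n for n in CHARACTERS if re.search(rf"\b{n}\b", sent))
        for pair in combinations(present, 2):
            edges[pair] += 1
    return edges

print(character_edges(sentences))
```

Swap the character set for judges, brands, or Shark Tank investors and the identical model answers a different disciplinary question, which is the point of many-model thinking.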
We often hear that employers want liberal arts majors. What employers actually want are the strongest qualities that liberal arts training can develop: creativity, ingenuity, problem-solving and the ability to design and implement high-impact real-world projects. They also value forecasting, which is often easier to measure than “critical thinking.” It means learning enough code to communicate with engineers as a project manager or understanding AI well enough to articulate the challenges in regulating a general-purpose technology.
Employers are not going to ask to read a class paper or exam. They will, however, be eager to see a portfolio. A student might say, “I used AI to map best-selling end-of-life narratives onto the Kübler-Ross stages of grief to become a better physician,” or “I created an AI system for swim-technique evaluation because I’m passionate about performance and coaching.” Our students learn to work across traditional boundaries while using AI carefully and thoughtfully. This is far more valuable than basic AI literacy.
Our most successful projects come from genuine curiosity, whether the topic is analyzing TikTok as a new oral tradition or testing AI detectors. What can facial-emotion analysis reveal about decades of Playboy covers? What do human-AI conversations look like in the wild? This combination of conceptual sophistication and quantitative capability is exactly what employers need and what students need to make an impact in the world they are entering.
Onward,
Paul