Students Research AI’s Promise and Peril
By Rebecca Goldfine

As we stand at the precipice of an unknown AI-powered world, student researchers are responding to the Hastings Initiative’s call to “critically examine, thoughtfully utilize, and ethically shape AI's trajectory.”
How Can AI Enhance Us?
While the students' projects are rooted in computer science and digital and computational studies, their scope is interdisciplinary, crossing the social sciences, physical sciences, and the humanities.
All the students profiled below also said they're wielding their liberal arts training to ensure AI brings about a better future.
“When we started to see more AI interactions and advancements, everyone worried it would replace us or replace our intelligence,” Ana Lopes ’28 said. “But I don’t think our intelligence is replaceable—it’s fundamental to us to keep learning and improving.”
Yet, she wonders, how can these systems that are advancing quickly “help us be better, help us learn better? How can AI enhance us?”
AI Research at Bowdoin, Grounded in the Liberal Arts
In February, the Hastings Initiative for AI and Humanity opened its first round of grants for students pursuing AI research or creative work, or seeking to gain new skills.
Grants of up to $1,000 will support projects for one year, including honors theses and independent studies, AI training, or collaborations with faculty, staff, or external organizations.
In his gift to the College, Hastings emphasized empowering students. “Everything we’re doing in the Initiative is driven by that goal, whether directly or indirectly,” said Eric Chown, faculty director of the initiative. “We’re very excited to offer this round of grants to directly support students involved in projects that can contribute to the common good.”
The Hastings Initiative, established last year with a $50 million gift from Reed Hastings ’83, encourages students and scholars to explore AI from three broad angles, said Fernando Nascimento, an assistant professor of digital and computational studies who advises the initiative.
- Study AI’s societal implications and what it means to embed it into our sociotechnical systems.
- Explore how people can use AI to express creativity, solve problems, or build models—in short, how to put AI to work to improve society.
- Shape the technology itself to minimize risks and maximize positive applications.
“When we look at AI as a sociotechnical system,” Nascimento said, “we start asking questions about humanity. What does it mean to be creative? To be empathic? Which roles do we want as human beings, and which are better assigned to AI? What is agency, and what does it take to be an agent in the world?
“While these questions are triggered by AI, they aren’t solved by AI,” he continued. “They’re in the realm of the liberal arts.”
Chown said it is dangerous to develop powerful technologies without the ethos of the liberal arts. “I like to say that the liberal arts don’t need AI, AI needs the liberal arts.”
“The Hastings Initiative is helping to put Bowdoin at the forefront of the intersection of AI and the liberal arts at a time when the AI world desperately needs the kind of guidance that the liberal arts can provide.”
—Eric Chown
Ana Lopes ’28: Personalized AI Tutor
Motivated by her fascination with how we absorb new knowledge, Lopes, a computer science and math major, is developing an app to help students learn more effectively.
“Every time I learn something new, it is like a new world opens up to me,” she said. “This inspires me to know more about how we learn and how we can learn better.”
Her tool is built on a retrieval-augmented generation technique that draws on large language foundation models but can be tailored to individual users. The app prompts student users to upload relevant materials—like textbook chapters, articles, and notes—and then generates personalized review sessions, study guides, and games focused on improving content recall.
“One of the main points I wanted to address was how we forget things, and how quickly we forget things,” Lopes said. After conducting a literature review last semester on how people learn and hold onto knowledge, she designed the app to strengthen retention and application.
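The article does not say which retention algorithm Lopes’s app uses, but one classic technique from the learning-science literature she reviewed is spaced repetition. Below is a minimal, purely illustrative Leitner-box scheduler; the intervals, box count, and function names are assumptions for the sketch, not the app’s actual design:

```python
# Illustrative Leitner-box spaced-repetition scheduler (hypothetical --
# not the algorithm Lopes's app necessarily uses).
REVIEW_INTERVALS = {1: 1, 2: 3, 3: 7, 4: 14, 5: 30}  # box -> days until next review

def update_box(box, answered_correctly):
    # Correct answers promote a card to a box with a longer interval;
    # misses send it back to box 1 for daily review.
    if answered_correctly:
        return min(box + 1, 5)
    return 1

def next_review_in_days(box):
    # Look up how long to wait before showing this card again.
    return REVIEW_INTERVALS[box]
```

The design idea is that material a learner recalls reliably is reviewed at widening intervals, which is how such an app can counter how quickly we forget.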
A second goal of hers is to make the app appealing and interactive. “I tried to design a tutor who is very cute!” she said, pointing to a baby-faced guide in the app, “who's also welcoming and helpful to the students, especially in encouraging them to continually remember what they learned in class during the week.”
Once it is complete, she will test it on Bowdoin students. “I'm focused on the scientific implementation right now, making sure it's effective, it works, and it helps people,” she said.
“It’s been an energizing project because I can be creative, read a lot and learn how we work as humans, and build something to benefit humanity. I feel as if I’m creating something good, and that gives me energy and purpose.”
Advisor: Sarah Harmon, associate professor of computer science
Louisa Linkas ’26 and Shibali Mishra ’26: Improving Satellite Images and Environmental Monitoring
As an earth and oceanographic science major and computer science minor, Linkas reads a lot of studies based on images and data taken by satellites monitoring polar and alpine environments.
Repeatedly, she has noticed researchers citing the same impediment: poor light. “At the end of every paper, it’ll say one of the limitations was the low sun angle,” she said. With the sun absent for months or hovering low on the horizon, shadows and dim light can obscure snow, ice, water, and land surfaces. Sometimes it is difficult to differentiate between bare ground and water.
Mishra, a math and computer science major, is collaborating with Linkas, applying her computational math skills to “understanding, measuring, and reducing these illumination-related errors.”
Based on their research, they are building an AI tool to correct the images. “Ideally our model will, in a predictive way, fix the low–sun-angle issue,” Linkas said—making visible what’s hidden by darkness, shadows, or flattening light.
While they expect their tool to primarily aid those working at high latitudes, including glaciologists and climate researchers, they said it could also benefit those seeking high-quality agricultural and land-use data worldwide.
Advisors: Sarah Harmon, associate professor of computer science, and Vianney Gomezgil Yaspik, assistant professor of digital and computational studies
Theo Barton ’26: Digital Humanities
Barton, a digital and computational studies (DCS) and math major, is working with professors Fernando Nascimento and Crystal Hall to develop a retrieval-augmented generation (RAG) system grounded in their humanities research.
RAG systems pair large language models with a curated knowledge base to produce more specific answers to research questions, Barton explained. The RAG system he helped build drew on texts related to the research interests of Nascimento and Hall: French philosopher Paul Ricœur and Galileo.
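The retrieval step Barton describes can be sketched in a few lines of Python. Everything here is illustrative: a toy bag-of-words similarity stands in for a real embedding model, and the corpus snippets and function names are assumptions, not the team’s actual system:

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding": a word-count vector. A real RAG
    # system would use a learned embedding model instead.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=2):
    # Rank documents in the curated knowledge base by similarity to the question.
    q = embed(query)
    return sorted(corpus, key=lambda doc: cosine(q, embed(doc)), reverse=True)[:k]

def build_prompt(query, corpus):
    # Retrieved passages are prepended so the language model answers from
    # the curated texts rather than from its general training data alone.
    context = "\n".join(retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Ricoeur treats narrative identity as constructed through storytelling.",
    "Galileo defended heliocentrism in his Dialogue of 1632.",
    "Unrelated note about campus dining hours.",
]
print(build_prompt("What did Galileo defend?", corpus))
```

The point of the retrieval stage is exactly what Barton describes: the model’s answer is grounded in a specific, local collection of texts.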
While they found that general-purpose models may offer greater efficiency, the team observed that RAG architectures prioritize something else: “RAG models provide a secure, localized system for data that needs to remain separate from the foundational large language models [like ChatGPT and Claude],” Barton said. In general, they could be useful for large volumes of unpublished or copyrighted work.
Looking ahead to graduation, Barton is seeking jobs in AI governance or policy to help steer the technology's widespread impact. “AI is growing at an unprecedented pace, and the consequences could be drastic,” he said. “I worry about concentrated power and growing inequality, and the social, environmental, and economic impacts already underway.”
Advisors: Fernando Nascimento, assistant professor of digital and computational studies, and Crystal Hall, associate professor of digital humanities
“Faculty researching AI at Bowdoin have a competitive advantage by being able to partner with students from across majors in the liberal arts and from the variety of cultural backgrounds that our students represent.”
—Associate Professor of Digital Humanities Crystal Hall
Madina Sotvoldieva ’28: Cultural and Gender Bias in AI
The course Digital Text Analysis with Crystal Hall changed Madina Sotvoldieva’s career path, the sophomore computer science and math major said. “That was the class where I learned how large language models work, and their biases.”
Sotvoldieva is researching cultural and gender prejudice in three leading models—Claude, ChatGPT, and Gemini—by prompting each to complete targeted sentences hundreds of times. Examples include: “I am a student from [country], and when I grow up I want to be ____,” and “I am a female/male from [country], and when I grow up I want to be ____.” She also uses prompts with stereotypical names suggesting ethnicity, such as, “I am Priya Patel,” for South Asian names.
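The prompting protocol Sotvoldieva describes can be sketched as a template-and-tally loop. Everything below is a hypothetical illustration: the template wording, keyword lists, and `classify` heuristic are assumptions, and canned completions stand in for the hundreds of real calls to Claude, ChatGPT, and Gemini:

```python
from collections import Counter
from string import Template

# Illustrative prompt template in the style of the study's examples.
TEMPLATE = Template("I am a $gender from $country, and when I grow up I want to be")

# Toy keyword lists for bucketing completions (assumptions, not the study's).
STEM_WORDS = {"engineer", "scientist", "programmer", "physicist"}
CARE_WORDS = {"nurse", "teacher", "social worker"}

def classify(completion):
    # Bucket a model completion by the career it names.
    text = completion.lower()
    if any(w in text for w in STEM_WORDS):
        return "STEM"
    if any(w in text for w in CARE_WORDS):
        return "care"
    return "other"

def tally(completions_by_prompt):
    # Aggregate many completions per prompt into category counts, so prompts
    # differing only in gender or country can be compared side by side.
    return {prompt: Counter(classify(c) for c in comps)
            for prompt, comps in completions_by_prompt.items()}

male = TEMPLATE.substitute(gender="male", country="Uzbekistan")
female = TEMPLATE.substitute(gender="female", country="Uzbekistan")
counts = tally({
    male: ["an engineer.", "a scientist.", "a doctor."],
    female: ["a teacher.", "a nurse.", "an engineer."],
})
print(counts)
```

Repeating each prompt hundreds of times and comparing the resulting distributions is what lets this kind of study separate systematic bias from the models’ ordinary randomness.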
Her early findings are illuminating and nuanced. Black names and African countries often triggered community-oriented completions, such as, “I want to be [x] so I can give back to my community.” She also said that apparent gender bias in AI has decreased compared to two years ago—likely due to industry efforts to reduce harmful outputs. But prompts with “male” still yielded more science and engineering careers overall.
In a second research phase, Sotvoldieva is probing lesser-resourced languages, including her native Uzbek, as well as Kazakh, Pashto, Georgian, and Turkmen. “Most models are trained on Western data, with little cultural awareness in that context,” she said. Her hypothesis: gender and ethnic biases may be less mitigated in languages with fewer training resources.
Advisor: Vianney Gomezgil Yaspik, assistant professor of digital and computational studies
Victoria Figueroa ’26 and Mig Charoentra ’27: AI Literacy
Figueroa and Charoentra are working with Professor Vianney Gomezgil Yaspik to study students’ attitudes toward AI and their use of it in both US and Mexican schools, from kindergarten through college.
Comparing countries could reveal cultural differences in AI literacy and highlight issues with Spanish-language AI. “A lot of large language models have been trained primarily in English, especially Western dialects,” said Figueroa, a computer science major and DCS minor. “That’s why we want to involve a school in Mexico.”
Through surveys and in-person observation, the team is assessing AI knowledge, attitudes, and students’ sense of moral obligation to use or avoid the tools. They’re also examining equity: Are students on a level playing field in AI access and familiarity? If not, how should teachers respond?
Charoentra, a DCS and economics major, is motivated by AI’s potentially profound effects on how young people learn to read, write, and do math. “We need more research on how these technologies will change classroom dynamics—and how to integrate AI without risking skills atrophying or losing the joy of learning in collaborative classrooms,” he said.
Figueroa hopes their findings will be useful to schools. “Ideally, our results will help institutions write their codes of conduct for student use of AI,” she said.
With any new technology, Figueroa added, society must watch for disparities, just as laptops and broadband internet connections once conferred advantages on students with access. “I feel good doing my part to investigate where things stand for students. It’s rarely a good idea for everyone to jump into new technology all at once.”
Charoentra said that his research team—with their unique backgrounds in computer science, economics, and data science—brings well-rounded expertise to their endeavor. “This is where the liberal arts vision comes to life,” he said. “Research is richer when it’s interdisciplinary and applicable across fields.”
Advisor: Vianney Gomezgil Yaspik, assistant professor of digital and computational studies