When Nathan Balderas ’27 and Megan Thomas ’27 met through a Silicon Valley Social Impact Fellowship, Thomas wasn’t using AI that often. “AI hasn’t been adopted as quickly in the health sciences industry,” says Thomas, a student on the pre-med track. “Health care deals with real people, so the risks are high.”
At the same time, she recognized that generative tools were becoming impossible to ignore.
“We’re not in an if-AI world,” she says. “We’re in a when-AI world.”
Since then, Thomas and Balderas have become active voices in conversations surrounding AI use at Lehigh, presenting together at the 2026 Inclusive Excellence in Teaching Workshop and later speaking candidly with members of the university’s board of trustees about how students are using AI in academic settings.
Though they come from very different disciplines (Thomas in molecular biology, Balderas in industrial and systems engineering and finance), their perspectives often intersect around one central idea: AI works best as a support system, not a substitute for learning.
As AI evolves, so do campus conversations surrounding academic integrity and responsible use. In late April, the university released updated AI guiding principles, incorporating critiques and suggestions from students, faculty and staff. One core revision states that "AI may assist in human work; the effort and ownership that define genuine learning must remain with the person." This sentiment ties directly into intellectual ownership, a concept Thomas and Balderas take pride in as students.