
Back to the Future:
Generative AI in Higher Education and Beyond

At Thomas’s Ham and Eggery Diner, time seems to stop.


Since 1946, the family-owned restaurant has maintained its homey, intimate charm. Familiar waiters glide across scuffed-up tile floors, delivering omelets and pancakes in oversized cast-iron skillets. The checker-tiled walls are covered in family portraits, youthful art projects and stickers from tourist destinations across the country. Rave reviews from now-defunct local newspapers surround eager customers waiting in the diner’s foyer, flipping through the never-changing menu and fiddling with the gumball machine to pass the time. Situated on a busy turnpike, across the street from a strip mall and next to a smoke shop turned AT&T store, Thomas’s is an oasis of love and comfort amidst Long Island’s highly corporatized landscape. The large neon sign on the diner’s awning invites hordes of families and couples and friends to gather in this place where history lives.


I first started going to Thomas’s in elementary school with my dad, who sought out the diner’s unique escape from a life lived mostly in chain-restaurant-laden suburbs. Over years of short conversations between seatings, we watched the host Alec graduate and get engaged and have children. A decade of huevos rancheros with a side of rye toast later, Thomas’s has become what my grandma deems “our place.” Every time I come home from school, my grandma (to my grandpa’s dismay) drops my grandpa off to get in line for a table as she takes the six-minute drive to pick me up. A radio station playing “Hits of the ‘60s” blasts in her car, a testament to the storied past she is committed to savoring.


After my grandma reminds me “how adorable he is,” Alec seats us at “our table,” nestled in the corner beneath a vintage poster advertising Yosemite National Park. The laminated red booth we’re seated in is aggressively uncomfortable and yet the safest place in the world. I sit across from my grandparents, aged 79 and 84, as they lightly interview me about my life in college and intertwine their own anecdotes about travel and love and what it means to grow up. Inane conversations about “how it’s going” inevitably end with light tears shed on both sides, a reflection of my family’s chronic sentimentality.


As my grandpa tries to flag down “the waiter with the accent” (not my wording, theirs), my grandma asks me about school. I tell her I’m working on my biggest journalism project yet, which I love but am also pretty intimidated by. I explain that I’m writing about this thing she definitely hasn’t heard of called ChatGPT. 

“Chad? G B D?” she asks. “I want to understand.” 


She pulls out her many-models-old iPhone and slowly attempts to type “Chad G B D” into Safari. I try to take her phone to help her speed up what would otherwise turn into an hour-long ordeal. But she pulls back, insisting on trying to find it on her own.


“It’s hard to explain,” I say. “And it might be hard to use on your phone. But it’s basically like a Google search that writes a response back to you, instead of just giving you search results. It can write essays, tell jokes, make art, and solve complex math problems. It can ace standardized tests like the APs. It came out in late 2022, but Michigan became the first university to make their own version of the program in the fall. I don’t really know how it works, but I know it’s a big deal.”


“Never heard of it, sweetie pie,” my grandpa smiles, fidgeting with the Lipton tea bag he had been keeping in his back pocket. 


My grandparents live in a 65+ gated community down the road on (literally, you can’t make this up) Corporate Drive. While my grandma spends her days playing tennis and Mahjong, my grandpa attends what is called “men’s current affairs club.” Though I’d be terrified to know what gets talked about in those meetings, I imagine they haven’t discussed the merits of large language models—a category of language-generating machine learning programs that includes OpenAI’s ChatGPT.


It struck me that, despite our emotional closeness, my grandparents and I live in two vastly different worlds. While I would soon return to watching my classmates use ChatGPT mid-lecture, my grandparents would remain on Corporate Drive, completely unaware of the technological transformation happening right under their generational nose. They have endured the birth of the Internet and the frenzied popularity of the iPhone and yet completely missed this critical nexus in the next looming era of communication. The flood of think pieces and research articles seems to operate at a speed beyond their cinnamon-scented, quilt-filled home, where my grandparents remain in their quaint lane of what they already know to be true.


As a self-committed “humanities person,” I am hard-pressed to find a way to explain ChatGPT that wouldn’t make my Computer Science major roommate pitifully roll her eyes. I know it’s unique in its ability to produce original text based on all of the information available on the Internet up to a certain date. I know it’s a neural network, meaning it synthesizes information through processes that mirror mechanisms of the human brain. I know it can be really good at debugging code and often pretty terrible at creative writing tasks like screenplays. And I know for sure that the University of Michigan is using these tools a lot. A Michigan Daily poll found that almost 60% of students were using ChatGPT and 93% agreed the University should not ban the technology.


In September of 2023, the University pioneered its own UM-GPT and supplemental tools like the image generation program DALL-E. These Michiganized versions use the same back-end models as OpenAI’s, while the front end is customized for the academic and technological needs of students and faculty. UM-GPT, for example, has access to all instructors’ Canvas pages, allowing it to create customized study guides based on course content only accessible to internal users. The tools also ensure conversations remain completely private and eliminate any cost barriers associated with the “premium” versions of external generative AI platforms.

Being the frontrunner of University-specific GPT programs aligns with Michigan’s insistent “leaders and best” mantra—a frenzied race to set examples before other schools have the chance to catch up. An Inside Higher Ed article opens with the celebration of Michigan’s information and technology department, noting how other schools like Harvard and UC San Diego have followed computational suit. 


Don Lambert, the Director of Emerging Technology and Artificial Intelligence Services at the University, spent the summer of 2023 developing these tools with a small but enthusiastic team of programmers. In his 28 years at the University, that summer was the only time Lambert and his employees were completely dedicated to just one project.

“University leaders all realized early on, in December of 2022, how disruptive these new generative AI tools were going to be and how they were going to have an impact not just on education but across the board in many disciplines,” he told me. “They wanted to position the University to have access to these tools, be able to do research, to understand what it means to teach in an era when students have tools that can pass the bar and the MCAT. Because that is going to change how we deliver instruction at the University.”


As I sat with Don on Zoom—yet another communication cornerstone of my collegiate years—his impassioned recounting pointed to the University’s open-minded approach to generative AI tools like UM-GPT. While policies for individual professors vary from prohibitionist to celebratory, the sheer existence of UM-GPT is an invitation to welcome this novel technology into learning environments across campus. It’s also rooted in the assumption that this programming will be continuously relevant in a variety of post-graduate professions.

“We want to make sure that we can properly instruct students about the use of these tools because they’re going to be expected to and want to use them when they get out into the industry world,” Lambert said.

With 15,000 unique visitors to the UM-GPT site each month, Lambert and his team have clearly spoken to an innate, exploratory desire within the campus community. And many instructors (mine included) have incorporated this trend into their educational approaches.


In my Environmental Journalism class, creativity is a central tenet of our final project: a longform news feature article chronicling a climate or public health issue of our choosing. In my many prior English classes at Michigan, AI tools were never even mentioned. So when my professor, Emilia Askari, required us to use UM-GPT for our projects, it seemed to go against the solitary culture of writing I was used to hearing about in long seminars about literature and authorship. Our first AI-ed assignment had us ask UM-GPT for potential interview sources and secondary research articles. For the second, we fed our first drafts into UM-GPT for edits and advice. For the final task, we prompted the program to write its own version of our drafts based on the original assignment description. We were then asked to reflect on how the tool was helpful and where it fell short.


My results were a mix of concerningly comical errors and somewhat useful advice. Many of the sources it suggested were seemingly made up, like the “Laura Hall from the RAND Corporation” who was nowhere to be found online. Others seemed both blatantly obvious and likely unattainable for a student journalist, like proposing I interview Governor Gretchen Whitmer. When asked to write its own news article on my topic, the program produced sub-headlines like “Why You Should Care Now” and “The Statistics That Hit Home,” taken uncreatively and directly from the prompt I fed it. However, I was pretty impressed with the writing advice it provided about flow, structure and clarity, such as recommending spots where I could improve my transitions and expand scene descriptions. I was surprised by the program’s ability to make creative judgments like advising me to avoid subjective statements and make my introduction more concise.


Askari, a freelance journalist with a PhD in educational technology, lives in two parallel worlds: an industry threatened by the rise of generative AI and a field designed to welcome it. She is also the kind of professor who consistently answers Slack messages within mere minutes day or night, weekday or weekend. Her fervent passion for teaching is reflected by the numerous conferences she attends about the use of generative AI in both journalism and education. In the shadow of shuttering newsrooms and painstaking layoffs across the news industry, Askari continues to fight for the future of journalism through training the next generation of reporters and news consumers. 

News organizations have slowly begun to incorporate AI into their everyday work with varying degrees of success. BuzzFeed, which shut down its news division last year, released over 40 travel guides for different countries that repeated the same phrases (“hidden gem,” “now I know what you’re thinking…”) over and over again. The science and technology news website Futurism took notice, uncovering BuzzFeed’s quiet use of AI to write bland, meaningless “content.” Like UM-GPT advising me to interview a fake person, BuzzFeed’s failed attempt to leverage AI represents the technology’s current inability to adequately replace real human efforts.


Rather than dismiss the tool as a threat or nuisance to her work, Askari sees the merits and shortcomings of generative AI as essential knowledge for students entering a post-ChatGPT workforce. Instead of relying on tools like UM-GPT to circumvent long assignments, Askari believes generative AI can push her students to think critically about human work and creativity.


“I think it’s incumbent on us as human beings to show how we are going to ask the better questions, how we are going to use this tool in ways that amplifies what special human takes we bring to the tasks we’re doing,” she said in our interview over Zoom. 


Askari’s expertise has led her to steer into the skid, embracing the potential and momentum of this revolutionary technology while being wary of its limitations. Across disciplines at the University, many instructors have also followed this path of cautious optimism. 


Thomas Walker, an instructor for sections of English 125, asks his students to reflect on what it means to learn a language and learn to write in the context of ChatGPT’s affordances. He was surprised when many students lamented the restrictions of the technology, citing the same dullness that led to Buzzfeed’s public humiliation. Nonetheless, Walker believes the tool will continue to improve and adapt, highlighting his aim to carefully integrate generative AI into his predominantly writing-oriented class.


“We don’t want people to feel like they can short circuit learning and practice processes that happen in classes by substituting just hitting a button,” he said. “So it’s about finding ways to articulate an ethical and practical way to accomplish our goals at the University that incorporates such tools. And identifying when you’re using them to get at these goals and when you’re using them to short circuit them.”


Professor Stephanie Moody, who teaches a one-credit course called “Writing with ChatGPT” and numerous other writing classes, shares Askari and Walker’s style of critical welcoming. They permit their students to utilize ChatGPT (or, interchangeably, UM-GPT) at any stage of the writing process under two guiding tenets: students have to be explicit about how it was used and produce final work that exceeds the capabilities of generative AI (e.g., including a lived personal anecdote or information from a lecture slide). They hold students responsible for what they turn in, advising them to ensure they understand what ChatGPT or UM-GPT produced before handing work off to be graded.

Moody’s “Writing with ChatGPT” course asks students to reconsider what originality and creativity look like in the presence of easy automation. Readings about why the tool generates fake references and how the tool is used across linguistic groups expand students’ technological literacy. In Moody’s classes, generative AI works in tandem with—rather than against—their students.


“There are many ways that we can be creative. There are many sources from which creativity comes and I think, ideally, ChatGPT could be one source where we get our creativity. Looking at ChatGPT and saying I can be more creative than this or I can do better is really useful for students. So we can use it as a creative tool to help our writing but we can also sort of use our writing comparatively with what ChatGPT is putting out to see what the algorithm did and then what we can do,” they said. 


Amidst these encouraging voices are those who lie on the opposite end of the pedagogical spectrum. The syllabus of Psych 466—“The Origins of Moral Behavior,” taught by Professor Felix Warneken—reads:

“No ChatGPT or other AI help. According to ChatGPT, this is what I should tell you: ‘In this university seminar, the use of ChatGPT is not permitted as it does not provide the depth of understanding and critical thinking required for academic discourse and learning.’ Who can argue with that?”


Warneken’s quippy approach to banning the tool stands as testament to the darker underbelly of technological transformation. For all the ways generative AI could be helpful or at least harmless to students, there are legitimate frustrations and fears from educators like Warneken. While there is no University-wide policy towards generative AI tools in the classroom, Warneken is surely not alone in his approach; the Stamps School of Art and Design, for example, prohibits the use of AI for their schoolwide portfolio project. A Michigan Daily article found that professors in everything from Anthropology to Arabic have enacted policies similar to those of Psych 466.


If you had asked me how I felt about generative AI a few months ago, I would have immediately sided with Warneken. When ChatGPT first came out, I was studying abroad in Amsterdam—thousands of miles from the panicked Michigan administrators trying to develop their course of action. I watched as my friends softened the culture shock of their difficult Dutch courses through ChatGPT’s ability to halfheartedly complete their assignments in seconds. Sitting in the small booths of our dorm’s lobby, asking the program to tell us a joke or write us a song became a memorable, culturally specific pastime.


While messing around with ChatGPT proved a solid way to spend an otherwise empty Tuesday evening, I never quite caught on to leveraging its academic capabilities. A creature of habit, I continued to prefer the lengthy outlining and research routine I had developed throughout my collegiate career. I knew intrinsically that using ChatGPT could allow me to be increasingly efficient and give me the space to prioritize more complicated tasks. But my stubborn adherence to my own practices allowed me to mostly ignore the tool’s educational applications—until I enrolled in Askari’s class.


Hearing Askari preach the relevance of these tools bred a reconsideration of my avoidance. Her commitment to our understanding of generative AI allowed the class to transcend the stereotypically academic into the overtly forward-thinking professional.


“Many of the experts are saying that this emergence of really accessible generative AI is going to be a change on the level of the creation of the Internet. I don’t think we can completely avoid it,” she said. “And I think it highlights the ridiculousness of grades and our grade oriented culture. And highlights the importance of really developing internal motivation in all of us to keep learning and doing our best for reasons beyond whatever evaluation.”

The transformative nature of AI’s presence speaks to the complexities of coping with immense technological change. While Chat/UM-GPT as they currently stand cannot catch up to the majority of human processes, the inherently unknown fabric of the future will inevitably unwind. And in the fast-paced, overwhelming ecosystem of resume scanners and cutthroat internships, students are vulnerable to the throes of a job market operating within this rapid uncertainty. 


Professor Walker identified the unique turbulence of our time and the constant uprooting of once dependable vocational skills and paths. 


“Hundreds and hundreds of years ago, you could reasonably expect that your parents could teach you a craft and it would be applicable and useful to you and your children,” he said. “And then starting a number of decades ago, you learned a craft but it would mostly be obsolete by the time your children learned it. But it would still be there for the whole time you were working. And now I think we’re entering this new period where you learn information in college and it may be useless by the time you’ve been in your career for a year or ten years. And that’s a part of this that is new.”


This double-edged sword of innovative progress and latent fear has sparked diverse dialogue across a campus facing a technological identity crisis. Even Professor Moody, with their committedly open approach, harbors the push and pull of generative AI’s increasingly pervasive role in their classroom.


“I have this kind of pendulum swing response [to UM-GPT] that goes back and forth. On the one hand, I think we haven’t quite had something like this. This is a huge technological shift and it’s very hard to predict what that will mean going forward, because AI is a special beast. Then there’s another part of me that remembers the panic that instructors felt over the internet. And this sense that ‘Oh, students are all going to cheat and they’re not going to learn anything and now that they have Google, what do they need us for?’ So there’s this very parallel sense that we’ve been here before and that technology often upends and changes education,” they said. 


Faculty and students alike face this intricate straddle between our utopian and dystopian instincts. International Studies Senior Martha Lewand has written for The Michigan Daily for three years. In line with my own writerly premonitions, Lewand is both aware and avoidant of generative AI’s role in her educational environment. 


“I’m trying to tell myself ‘Okay, Martha you need to use this because this is the new tool. This is the future of work. This is the future of academia.’ And it would probably be more to my benefit to learn to utilize it in an efficient manner. But I’ve still been hesitant, especially coming from a journalism background. All these places are going under, like Sports Illustrated as the most recent example, because people want to replace jobs with AI. So I’m afraid it’ll take away important jobs that humans need to do. I lean more towards scared than excited,” she told me over the phone.

 

After many nights spent together at The Michigan Daily newsroom, Martha and I share the inclination to protect the creative agency we’ve cultivated from the looming threat of generative AI. But for incoming Oakland University medical student Jenna Silverman, tools like ChatGPT are, at most, a mere add-on to what she sees as an invariably stable profession.


“ChatGPT and AI are going to be present for decades to come in medicine,” Silverman said. “Healthcare professionals might be able to work more efficiently, but nothing is going to replace human touch and there will always be the need for a physical presence in a room with a patient. It’s nice knowing [generative AI] will never replace the field I’m going into.”


Silverman, a former student of Psychology Professor Warneken, shares his ambivalence towards the prophetic buzz surrounding generative AI on campus and elsewhere. After being Silverman’s roommate for the last year, I have witnessed how semesters filled with long, tedious study sessions cemented her confidence in her medical future. 

“I haven’t used [UM-GPT] at all. The information for classes like Anatomy and Microbiology are pretty static and textbook-based. I mean, in order to become a doctor you need to know everything and you don’t want something feeding that to you. You need to put in the work yourself or else you’re never going to learn how to treat patients,” she told me. 


After our interview was over, Silverman almost immediately grabbed her backpack and went back to a Saturday night tapping through lecture slides. Sitting on the prop-like plastic couches in our living room, I was struck by how these varied perspectives on generative AI reflect the vantage points from which we approach the future. Silverman’s assuredness of generative AI’s passive impact matches her lifelong certainty of an eventual medical degree. Lewand’s interwoven fear and excitement seem apt for an uneasy student journalist with a penchant for exploring the world. Even Professor Moody actively modeled their attitude towards Chat/UM-GPT off the experiences they had decades ago with the Internet, mirroring the cyclical clockwork of innovation and its discontents. 


Generative AI tools are another installment of the industrialized world’s predisposition for technological revolution. Michigan’s class of 1993 encountered the birth of the World Wide Web. Just a month after the class of 2007 graduated, they were introduced to the iPhone. And now, the class of 2024 will enter a workforce anxiously anticipating the forecast for tools we are only just beginning to understand. While my grandparents are probably sitting at Thomas’s still trying to figure out how Siri works, I am now back at school, scrolling through the few journalism jobs available on LinkedIn and wondering if it’s possible to keep this profession alive long enough for me to be a part of it. 


But as hard as we may try, we cannot answer to a future we are mostly unable to predict. We can guess and check and ruminate and wait and we still, despite our best efforts, are almost never given the impossible privilege of knowing what comes next. It’s the underlying emotional nausea that Martha, Jenna and I—along with every fellow almost-graduate—can attest to. We want so desperately to make the right choices, and yet we sit in the middle of an uprising waiting with patiently anxious breath. 


At the core of the human condition is the habitual running towards progress, despite our inability to predict the impact of our actions. In every life stage we are involuntarily hurled into novelty as the world around us churns on, making small transformations from the cotton gin to Technicolor film to an AI tool that can generate “a painting of llamas flying through the sky while wearing prom dresses and also there are cookies and rainbows” in three seconds flat. We cannot know how these tools will shape us until we’re already deep into the fun house, distorted mirrors warping our sights and self-perceptions. 


But maybe the beauty of this futurism is that we don’t have to know the answer. We are most beholden to the futures we envision, the ones we create amidst an incurably uncertain present. What we lack in crystal balls we make up for in conviction; we can be the professor teaching about generative AI’s affordances or the student who sees no purpose in engaging with it. It is what we do with this blurry uncertainty that ultimately dictates the outcome of every technological metamorphosis.


Sitting across from my grandparents—a former home economics teacher and a plumbing supply salesman—on that cozy Friday, the imminent shifting of our educational and occupational systems became that much more obvious. While my grandma’s curriculum of sewing and cooking remains essential to everyday life, a Salon article reported that only 4.5% of American K-12 schools still offer home economics classes. And my grandpa’s major, “Distributive Education,” is now likely known only to the librarians at his school’s historical archive. But despite the ever-widening gap between my grandparents and the progress surrounding them, they keep up an inspiring sense of wonder. Though they will likely never engage with generative AI in their lifetimes, they still wanted to know what this “Chad G B D thing” is and why it matters to someone they love.


Weeks earlier, as my grandparents chattered along on our ride home, I was uplifted by their bent towards wholesome curiosity. It amazed me that, in spite of decade after decade of immense change and aging wear, their learning never ceased. Much of their retirement has been spent traveling the world, emailing us blurry JPEGs of African safaris and European architecture from their archaic Hotmail accounts. Insistent on maintaining their spry youthfulness, my grandparents have seamlessly and authentically moved with the tides of every tomorrow thrown at them.


Our conversation about UM-GPT was brief and relatively inconsequential, our focus quickly shifting back to the latest gossip about their friends’ granddaughters or advice about looking for a job (“Have you tried searching online?”). Though it left me with all the same unknowns about where this technology might take us, it was a reminder that the least we can do for ourselves and the world is to simply ask questions. We have to be willing to put our egos and fears aside and open ourselves up to the terrifying but permanent hum of the ever-impending future. We have to be the lovely grandparents sitting across from their naively introspective granddaughter in an 80-year-old diner, whipping out our phones to discover what the hell she’s talking about.
