When I started college in 2021, artificial intelligence felt like something distant. We knew it existed, but it did not seem relevant to our daily lives. We thought of it as something used by big tech companies or scientists, not something that would change how we studied, worked, or communicated.
That changed in my sophomore year.
On November 28, 2022, I gave a presentation in my English class about the environmental impact of AI. I had researched how large language models consumed massive amounts of energy and how running AI systems at scale could contribute to carbon emissions. Building on that research, I asked whether the benefits of AI were worth the costs.
Of the thousands of topics I could have chosen on environmental sustainability, this one was a hard sell: most people in the room were unconvinced, largely because so few reputable case studies and articles were available at the time. It felt like I was talking about something that belonged to the future, not the present.
Then, two days later, ChatGPT came out.
By early 2023, ChatGPT and open-source AI assistants had become part of daily student life. Some students learned to use these tools to elevate their work, while others leaned on them completely. Most people were somewhere in between. The tools helped with brainstorming. They could explain tough concepts more clearly than some textbooks, though they still had their flaws.
The speed of the shift surprised me. Within months, AI tools had gone from a curiosity to something almost everyone used. But the academic system was not ready. It was not until my senior year (August 2024) that most syllabi began to mention AI policies. Even then, the academic integrity rules were unclear. Some professors banned AI. Others encouraged it. Many said nothing at all. There was no shared understanding of what was allowed or what counted as learning.
Researching that sustainability essay for English class sparked a lasting interest in artificial intelligence. The 2022 presentation stuck with me, and when I saw how fast AI was spreading, I wanted to understand more. I started following AI news closely, reading about updates to open-source models, new model releases, and how companies were beginning to use these tools in hiring and product design.
It became clear that AI was not just changing school. It was changing work too. Companies were adding AI tools to their workflows. Jobs in writing, design, support, and coding were starting to shift. Some tasks were becoming faster and easier. Others were being automated completely.
Watching AI evolve firsthand after graduation and seeing how companies plan to incorporate it into their future strategies, I came away with one key takeaway: AI must be people-first. While it holds the potential to streamline processes and improve how we work, its rapid development has outpaced our ability to fully assess its reliability.
As I start my career, I know I will continue to use AI tools. However, it is important to keep thinking about the choices we make with them. These tools are powerful, but they are not perfect. They reflect our values, our assumptions, and our blind spots. They can help us work better, but they can also make it easier to stop thinking.
College in the age of AI taught me to stay curious, stay careful, and keep asking questions. That is a mindset I plan to carry with me, no matter where technology goes next.
Sara Kate Jacobs – Research Intern





