
AI Takeover: Reshaping the Michigan Experience
 
Fall 2025

Facing the Truth: AI Implications for the Environment  

By Gabriela Gonzalez

 

The invention of artificial intelligence has been celebrated as one of the great leaps of our time, with the potential to reshape industries and accelerate innovation at unprecedented levels. We continue to marvel at AI's ability to generate new ideas and adapt to new information, yet beneath this glowing progress lies a quieter, more unsettling truth. With every chatbot conversation, every algorithmic advance, and every model trained in the cloud, consequences lurk beneath the surface. As we move toward a future built on technological advancement, we must pause and ask ourselves: are these tools of progress ultimately damaging our own water resources?


We will continue to see a rise in data center development and energy consumption, particularly as institutions like the University of Michigan integrate AI tools across their platforms. For example, students and faculty are now provided with their own AI assistants to support their learning, such as UM-GPT and UMich Gen AI. As the University of Michigan's Generative Artificial Intelligence website notes, “U-M is proud to be the first university in the world to provide a custom suite of generative AI tools to its community.” Nevertheless, this progress comes with significant environmental costs. Just one large-scale AI model, such as ChatGPT or UM-GPT, can consume hundreds of thousands of liters of freshwater during training alone, primarily due to the electricity required to power GPUs and maintain server reliability. According to the Environmental and Energy Study Institute (EESI), the more electricity a data center uses, the more cooling is required, and “cooling systems often use vast quantities of water to keep equipment from overheating.” In regions already experiencing water stress, this level of freshwater depletion raises serious concerns about long-term sustainability and resource allocation. According to EESI, a typical data center can use between 1 and 5 million gallons of water per day — roughly the same amount as a town of 10,000 to 50,000 people — highlighting the scale of the challenge as AI infrastructure continues to expand.


The rollout and promotion of AI platforms at the University of Michigan exemplify a much broader trend. One such project, a new multi-billion-dollar Michigan data center facility for OpenAI’s Stargate project, was recently announced by Governor Gretchen Whitmer. Meanwhile, existing data centers across the country, including in Michigan, raise growing concerns about AI’s rapid expansion and, naturally, the environmental impact that accompanies it. As more organizations rush to build large-scale facilities to meet the increasing demand for AI, we must start addressing emerging issues related to sustainability, energy fairness, and long-term infrastructure planning.


Justin Shott, the director of U-M’s Energy Equity Project since February 2021, has been at the forefront of conversations about the environmental and social impacts of artificial intelligence. He emphasizes that “the biggest issue is the speed at which artificial intelligence is moving across generations,” referring to the rapid development and expansion of data centers nationwide. While this pace signals technological progress, it also generates significant challenges. Shott argues that without transparent communication and a deliberate slowdown in data-center construction, society will not have the time or capacity to fully address the risks these digital infrastructures pose. He urges stakeholders to look beyond what appears efficient on paper and consider the broader costs: both to people and to the planet.

 

As technological advancement continues, the University of Michigan aims to position itself at the forefront of this progress. However, this creates tension between prestige and accountability. As Shott states, “U-M is generally agnostic about the ethics of research and the communities it impacts. They will follow official laws and regulations but turn a deaf ear to community protests.” 

 

This captures the university's stance: compliance over conscience. For administrators, billion-dollar projects like data centers are not just infrastructure; they are prestige projects, a way of staking a claim to the future of AI. “U-M will use its muscle to move data centers forward because the cachet far outweighs the harm to communities in the eyes of administrators,” Shott added.

 

While the University works to cultivate an image of innovation, the surrounding community bears the consequences. Yet, despite all the excitement, it is crucial to strike a balance between curiosity and caution so that the University of Michigan's embrace of AI represents not only progress but also genuine consideration of its environmental impact. AI is not slowing down, and it will continue to grow; our awareness of its effects needs to catch up. It is time to look forward, not just in terms of technological advancement, but in how we can come together to protect the one place that sustains us all: the Earth.


AI in the Classroom
By Jackson Coates

Artificial intelligence has taken the academic world by storm as the writing capabilities of ChatGPT and other AI software have complicated university policies across the globe. The big question remains: How are the professors at the University of Michigan responding to AI? I read through the syllabi of many of my own classes, as well as syllabi of classes I am not in, and compared each professor's policy on AI. While this is not a comprehensive examination of AI policy across campus, I believe it reveals an intriguing nuance to the problems of AI. The University itself does not impose any policy regarding AI on professors, instead leaving it up to each professor and department to make their own decisions regarding this new technology.


In POLISCI 140: Intro to Comparative Politics with Dr. Brian Min and ENGLISH 229: Professional Writing with Professor Ethan Voss, the use of AI is not discouraged, but instead the benefits and utility are noted. However, the professors also point out that AI often provides incorrect information for more complex tasks asked of students in these subjects, such as comparing themes and developing accurate, coherent papers. These classes permit, and even encourage, students to use AI as a resource to help in the writing and research processes. This use of AI is contingent on students being critical of the information they receive, and above all, transparent in their use of AI. Both syllabi mention that AI usage must be noted, or else it is considered plagiarism and will be treated as such according to the rules on academic misconduct. 

 

Though they cover different areas of concentration, PHIL 101: Introduction to Philosophy with Dr. Tad Schmaltz and PHYSICS 140: General Physics 1 with Dr. Yuri Popov take a similar approach in their AI policies. Both professors heavily doubt the accuracy and effectiveness of AI in their fields. The syllabi claim that AI is unable to properly work through philosophical thought or complex physics problems, respectively. In light of this, both courses heavily dissuade students from using AI, as it will likely impede their learning and give them incorrect information. 

 

Interestingly, my ECON 102 class, Principles of Macroeconomics with Dr. Ed Cho, makes no mention of AI usage in the syllabus. This course does not have many writing or physical assignments to turn in, which may be one reason Dr. Cho chose not to address it. Perhaps my professor simply sees AI as just another resource for the classroom instead of a tool that compromises the learning experience of students. 

Many classes that primarily engage with writing, such as the Political Science and English courses, acknowledge the widespread use of AI and encourage students to use it responsibly. These courses warn of the dangers of AI but also advertise the benefits, giving students examples of how to properly utilize AI to bolster their writing. The idea of plagiarism is emphasized throughout the syllabi. Although the work does not involve copying from another person, the information still does not originate from the student’s own ideas — a distinction that holds significant importance for these professors.

 

Meanwhile, disciplines in which AI fails to produce accurate and reliable answers, such as high-level math and philosophy, are staunchly opposed to AI. These professors tend to view AI as misleading and harmful to students’ learning. Philosophy is an especially unique case because it seems fundamentally impossible for an AI platform to engage in philosophical thought, even though it could technically provide ideas from documents that other philosophers have written. These courses are much more critical of the capabilities of AI, and as a consequence, they impose stricter regulations on the use of AI. 

 

Finally, some subjects have yet to broach the topic of AI, such as my economics course. It will be interesting to see how classes that are, at the moment, not directly impacted by the use of AI will adapt and form their own policies. There is not one unified stance on AI in the classroom at U of M, which presents its own set of pros and cons. On the one hand, it allows students to utilize AI in a healthy environment, just as with any tool like a calculator or laptop. On the other, it allows professors who do not trust the effectiveness of AI, or who are simply concerned with academic integrity, to place their own regulations on the use of AI in the classroom. Abuse of this developing technology can harm students' learning as they come to rely on a technology that is not always correct. This inherent danger to learning is exactly why professors are hesitant about AI usage, but the potential benefits of such a groundbreaking technology are equally apparent. As both students and professors navigate the changing academic landscape, it is vital that we, as an academic community, learn to adapt to new technologies without compromising our ability to think critically and rationally on our own.


Learning in the Age of AI: U-M's Go Blue AI Vision
By Ruoning Fang

In February 2025, the University of Michigan launched the Go Blue AI mobile app, an artificial intelligence platform developed entirely within the university’s infrastructure. The platform provides numerous U-M-specific features, ranging from bus schedules and student organization information to campus building details, and it aims to simplify daily tasks for the campus community.


With the goal of removing barriers to AI access that often exist in commercial platforms, such as ChatGPT or Gemini, which require paid subscriptions, Go Blue AI seeks to make AI more equitable for the U-M community. It is available at no cost to all students, faculty, and staff, built with strong privacy assurances. U-M’s Vice President for Information Technology and Chief Information Officer, Ravi Pendse, explained in an interview that the platform allows “the campus community to use popular language models entirely within U-M’s secure environment. Nothing shared with the app leaves the university system, and no data is used to train outside models.” Pendse emphasized that the intention behind this design is to give the U-M community a safe and practical way to begin using AI, experiment with new ideas, and learn about the technology on their own terms. 


In the digital era, AI has become an influential and unavoidable part of our campus community. Though only one small wave on the surface of the digital ocean, Go Blue AI carries at its core a larger pedagogical aim: to foster AI literacy. “This is still the early stage of our AI journey,” Pendse said. “Our goal is to foster AI literacy across campus so that everyone can build the skills they will need in the years ahead.” 

 

The rapid rise of AI has also raised concerns about growing inequality. Not everyone has equal access to commercial AI tools, and the gap in who can use these technologies can deepen existing differences in digital literacy and access to information. By providing a free and secure platform, U-M hopes to reduce these disparities and give every student the opportunity to learn, experiment, and engage with AI on their own terms. The platform allows students to explore and decide how they might incorporate AI into their own lives. Instead of feeling intimidated, turning away without trying, or judging it based only on limited experiences, students can experiment with the technology in a safe environment. Pendse explained that Go Blue AI is meant to be a bridge that helps students connect with new technologies to discover how these tools may support their learning and daily routines.

 

The early student response reflects the intention Pendse described. Students who know about Go Blue AI have begun using it and are gradually integrating it into their daily lives. One anonymous junior majoring in business said she finds Go Blue AI genuinely helpful. As an exchange student, she especially appreciates how the platform explains Michigan’s protocols and traditions, particularly when she uploads event posters. For her, it has become a reliable source of campus information and has taken on some of the responsibilities that orientation leaders or student guides traditionally carried when introducing new students to campus life. She feels that Go Blue AI helps bridge the gap between new and returning students, making it easier for her to understand the campus. 

 

While emphasizing the important role AI now plays on campus, Pendse also reminds us that AI is not a shortcut for thinking, but a tool that helps us think better. He explained that although AI can help users think more effectively, they remain the ones making the final decisions. Pendse also noted that “many people are excited to use AI and want to do so in ways that are private, responsible, and genuinely useful to their work.” 

 

The team hopes that students, faculty, and staff will “see Go Blue AI as a learning partner that helps them think more clearly, explore ideas, and simplify everyday tasks.” They believe the platform should guide users toward the goals they set for themselves rather than becoming an excuse to avoid effort or creative thinking. They envision AI as a tool that provides helpful guidance and information, not a one-step caretaker that makes decisions on a user’s behalf. 

 

It is also worth noting that the platform was not created in isolation. The ITS Emerging Technologies team worked closely with students, faculty, and staff to shape the platform and gathered feedback through surveys, focus groups, and direct conversations. It is a product that reflects the heart and effort of many people, and Pendse also expressed his appreciation for the ITS Emerging Technologies team and their work in bringing the app to life.

 

Go Blue AI can be seen as the starting point of how students will develop their own ways of using AI. It serves as an experiment for the U-M community, allowing people to explore the technology while standing on the shore of the digital ocean and watching the waves of new trends approach. Pendse believes that the meaning of Go Blue AI will ultimately come from the community itself and wants to see how students, faculty, and staff choose to shape it in their own unique ways.


Reflecting on Our Experiences with AI

Writers Jackson Coates, Ruoning Fang, and Gabriela Gonzalez reflect on their experiences with Artificial Intelligence (AI). 

 

Introduction: AI is a new and rapidly developing technology that has taken the world by storm. This article stands not as a bastion for or against artificial intelligence, but instead looks to inform people of how it is being utilized and what the future holds for this unprecedented technology. We believe that having more discussion about the positive and negative effects of AI will be beneficial in navigating an uncertain future. By sharing our personal experiences with AI, we hope to start a discussion about the use of AI within our community on campus and beyond.


 

Jackson Coates:

During high school, my teachers said AI use would constitute cheating, so I stayed away from it. As I entered college and discussion around AI became more prevalent, I was forced to revisit my stance. An especially formative experience occurred in my freshman-year robotics course, when my professor encouraged us to use the University's AI tools to write a piece of code. This was my first formal experience with AI, and the encouragement from my professor confused me, as I had previously thought AI to be taboo within the classroom. I soon noticed many professors are not only talking about AI but approving of its use, in moderation of course. I now use AI to help me while I am studying, such as by explaining why I got a certain question wrong on a past economics exam. I no longer have to pore over weeks of notes to find one multiple-choice answer. In this instance, AI made my studying more efficient without compromising my education.

 

Gabriela Gonzalez:

Nobody introduced me to AI; I just happened to stumble upon it when I overheard one of my classmates talking about a tool that could generate information in seconds. I was curious, so I looked it up online. The tool was called ChatGPT, and it changed the way I viewed my work and my assignments. As a freshman adjusting to a college-level course load, I found it miraculous to come across a platform that could help me manage the assignments slowly and heavily building up on my Google Calendar. However, using it always left a pit in my stomach, as it felt like I was cheating the system in some way. I began thinking more about the ethical aspects of AI usage. How could I use AI without feeling guilty when turning in my assignments? 


Rather than sweeping it under the rug, my teachers started acknowledging that AI existed and had a place in the classroom. I slowly started building the skills to use this new tool efficiently. It has helped me develop outlines and organize my thoughts when an essay gets overwhelming. It generated practice problems and solutions that helped me earn an A in my Statistics course. While AI can sometimes be used unethically, it is up to us to learn how to control it rather than abuse it. 

 

 

 

Ruoning Fang:

I still remember the shock and surprise I felt when one of my friends from Argentina showed me how she used AI to help with her emails. She showed me how to write an email that felt warm and personal through AI, using it to check her tone in English as this was not her first language. At the time, I had just started college on another continent. It used to take me so long to write just one email. I struggled to understand tone; I could not tell whether someone was being formal, distant, friendly, or encouraging. I had never lived in an English-speaking country before, and had no real exposure to how tone worked in daily messages or emails. The day I learned how AI could help with this, I created an account.


Later, I started using AI to help me sort through emails and figure out which ones were important or personalized. When I felt confused in class about cultural references or slang, I turned to AI for help. Sometimes, when texting peers, I even used AI to check whether certain slang, idioms, or emojis made sense in the situation to ensure my message was not strange or unreadable. I felt like I was becoming a copy of the data that AI collected, analyzed, and presented to me, and I no longer felt original. It was upsetting to think that AI was being used to translate my personality. 


One day, when I told that friend about these feelings, she suddenly burst into laughter, and a fire of agreement lit up in her eyes. She felt the same but had always believed it was okay. We are not losing our personalities, she told me. We are finding them again through the process of translation, with the help of AI. That moment stayed with me, as I realized that the act of translation is not about replacing who we are: it is about uncovering the version of ourselves that can be seen and understood in a new language, a new culture, or a new setting. AI does not send messages for us. It does not reply to emails or read texts for us; we are the ones doing all of that. We are the ones asking the questions, thinking through the options, and deciding what to say. We are not giving up our voices. Instead, we are learning how to shape them in a way that others can hear. AI can help us express who we are more clearly across languages and cultures.


© 2024 Consider Magazine
