AI? Let’s Talk About It - HC Edition
Writer: Alysha Dahya
Editor: Isabel Snare
An Anti-AI Persuasive Essay
Note: Although ChatGPT is referenced most often in this article, the information presented applies to most other GenAI models; ChatGPT is our focus because it is the most relevant model for our Havergal audience.
“Edit this paragraph about the Industrial Revolution using social, political, economic, and cultural historical lenses.”
“Write me an email asking my philosophy teacher for an extension for my essay.”
“How to make banana bread in three steps with no eggs.”
These prompts for the popular AI chatbot and large language model (LLM), ChatGPT, are likely similar to ones you have used before. As a Havergal student, your busy life and endless academic demands have probably led you to use AI at some point, whether to write an email to Ms. Timusk apologizing for absences, to draft a Canvas discussion post at 1 a.m. on a Tuesday, or to study for a test with Flint. If this is you, you're not alone; AI is infiltrating people's lives around the globe at a frightening pace. Not only does AI power LLMs like ChatGPT, DeepSeek, Gemini, and Claude, but it is quickly trickling into everything we use: Instagram, Grammarly, Notability, Spotify, and Gmail. Over the next ten years, AI is expected to revolutionize every sector, from healthcare to art, but what many fail to consider are the repercussions of these technologies.
Background
First and foremost, what is AI? AI, or Artificial Intelligence, is the name given to technologies and programs that process information and data in order to mimic human thinking and output. AI has been around since the mid-twentieth century, but has only recently made breakthroughs thanks to advances in computer processing and data accessibility. GenAI, or Generative AI, is AI that generates an output, such as text, images, music, or code, by learning patterns from massive datasets.
In 2022, the generative AI chatbot ChatGPT was released by OpenAI. In just two months, the platform skyrocketed to 100 million users. Today, ChatGPT is one of the five most visited websites globally and boasts over 800 million weekly users. Since 2022, many other AI chatbots have been released, including Google's Gemini, DeepSeek, and Anthropic's Claude. ChatGPT and other AI chatbots are made up of two main parts: the LLM and the interface, also known as the engine and the vehicle. An LLM is an AI model trained on data to recognize and respond to human text, and it can perform many tasks like writing, taking notes, and translating. The interface is simply what you interact with: it connects the user to the complex LLM underneath.
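To make the engine-and-vehicle analogy concrete, here is a minimal sketch, in Python, of how an interface might pass your message to an LLM and hand back the reply. It assumes OpenAI's official Python client and an API key stored in the OPENAI_API_KEY environment variable; the model name and prompt are illustrative, not details drawn from this article.

    # The "vehicle" (interface) passing text to the "engine" (LLM).
    # Assumes the official OpenAI Python client (pip install openai) and an
    # API key in the OPENAI_API_KEY environment variable.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask_chatbot(user_message: str) -> str:
        """The interface: collect the user's text and hand it to the LLM."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # the LLM, i.e. the "engine"; name is illustrative
            messages=[{"role": "user", "content": user_message}],
        )
        return response.choices[0].message.content  # the generated reply

    print(ask_chatbot("How do I make banana bread in three steps with no eggs?"))

Everything you ever see in the app is this thin wrapper; the generation itself happens entirely inside the model.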
The founding company of ChatGPT, OpenAI, is an American AI research organization consisting of a non-profit foundation and a for-profit corporation based in San Francisco, California. Its mission is to create safe and easily accessible Artificial General Intelligence (AGI), which it defines as “highly autonomous systems that outperform humans at most economically valuable work”. Since the development of ChatGPT, Sora, and DALL-E, OpenAI has faced many questions pertaining to the ethical, social, and environmental challenges of AI. So let’s talk about it…
Environmental Concerns
A major downside to the development and use of AI is its impact on the environment and our freshwater supply. Just one email written by ChatGPT can use up to 500 ml of water; in other words, an entire bottle! AI uses so much water because the data centers that house its supercomputers require enormous amounts of it for cooling: roughly 80% of that water evaporates, and much of the rest becomes polluted with chemicals and unusable for drinking. By some estimates, these data centers will soon consume six times more water than Denmark, a country of six million people. As students at Havergal, we have been educated about why this is a problem. Only about 2.5% of the water on the planet is fresh, so anything that significantly reduces that amount, as AI does, threatens the longevity of our water sources. With the projected growth of AI and the construction of new data centers, AI is projected to consume over 720 billion gallons of water by 2028, enough to supply 18.5 million households. With water scarcity already threatening over 4 billion people (two-thirds of the global population), this feels a bit… selfish?
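To put those numbers in perspective, here is a quick back-of-envelope check using the article's own figures (and reading the 720 billion gallons as an annual total, an assumption on my part):

\[
\frac{720 \times 10^{9}\ \text{gallons}}{18.5 \times 10^{6}\ \text{households}} \approx 38{,}900\ \text{gallons per household per year} \approx 107\ \text{gallons per household per day}
\]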
Similarly, the energy consumption of AI has skyrocketed alongside its water consumption. As AI models improve, and they are improving constantly, their energy use climbs. Just one ChatGPT request can use up to ten times the electricity of a Google search. Furthermore, the energy used to power these programs largely comes from fossil fuels and other non-renewable sources. Noman Bashir, a fellow at the MIT Climate and Sustainability Consortium (MCSC) and a postdoc in the Computer Science and Artificial Intelligence Laboratory (CSAIL), says of this effect: “The demand for new data centers cannot be met in a sustainable way. The pace at which companies are building new data centers means the bulk of the electricity to power them must come from fossil fuel-based power plants.” In 2012, there were just 500,000 data centers; today there are over 8 million, a rapid increase that AI development is now accelerating.
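For scale, a single Google search is commonly estimated at roughly 0.3 watt-hours of electricity (a widely cited figure, though not one from this article), so the "ten times" claim works out to about 3 Wh per ChatGPT request. At, say, a billion requests per day, an illustrative round number:

\[
10^{9}\ \text{requests} \times 3\ \text{Wh} = 3\ \text{GWh per day}
\]

That is roughly the output of a 125 MW power plant running around the clock.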
Intellectual Concerns
Another major concern about GenAI is the loss of intellectualism. AI is not always bad; it can streamline operations and cut costs in many sectors like healthcare, business, and research. However, when we become overdependent on AI, which is available to anyone with Internet access, many problems arise. Firstly, AI makes a LOT of mistakes, so many that certain errors have earned their own names. Oftentimes, you will hear people talking about AI's ‘hallucinations’: when AI makes up false information to fill the gaps in its training and presents it to users as fact. These ‘hallucinations’, even when only partially wrong, pose a huge threat when AI is used in professional or research-based settings.
Additionally, AI can keep us from doing our own thinking in academic situations, hindering our learning and the development of critical thinking skills. For adults and professionals trained in certain fields, AI offers real benefits to research and writing; for students who are meant to learn to write, research, and think critically, however, AI can act as a roadblock to critical thought and an easy shortcut around learning. The struggle of thinking, practicing, making mistakes, and then correcting them is what builds the skills we need in the workforce, the very skills we pursue an education to develop in the first place.
This doesn't mean that AI should never be used; it's a useful tool, but only when used correctly and in moderation. For example, using AI to find a simple answer to a complex question can be helpful, but asking AI to do all of the research for your essay is harmful. When students are accused of using AI for their papers, there is no true way to tell whether AI was used, as AI is designed to mimic human writing styles. Students should always be given the benefit of the doubt: most papers written fully by AI lack human emotion, intellect, and style, and those are the qualities on which assignments should ultimately be graded.
When it comes to the workforce, the emergence of AI makes soft skills like public speaking, people skills, creativity, and authenticity much more important, since most technical skills can be mimicked. University majors in the humanities also become very important, because they are grounded in both fact and human thought and debate. Daniela Amodei, the president and co-founder of Anthropic (the company behind Claude), shared her thoughts on the newfound importance of humanities majors as an English literature major herself. In an interview with ABC News, she said she has “zero regrets” about not pursuing a more technical degree, noting that LLMs are already very strong in many technical fields and that the real value lies in understanding fields like history, ethics, and psychology. “In a world where AI is very smart and capable of doing so many things, the things that make us human will become much more important,” Amodei states, adding, “I think the ability to have critical thinking skills and learn to interact with people will be more important in the future, rather than less.” Anthropic's own hiring process reflects these views: the company looks for people with a balance of technical skills and communication and people skills, and it values natural kindness and a desire to help others. In 2025, college enrollment in the United States rose by two percent while enrollment in computer science dropped by over six percent, a sign that fewer undergraduates want to pursue computer science degrees. This leads us into the ethical concerns surrounding AI.
Ethical Concerns
AI models train on massive datasets, sometimes trillions of data points, and improve over time. But they are still prone to bias, often reflecting the flaws in their training data. For example, image-generation AI trained mostly on white faces struggles to accurately depict people of colour. Facial recognition technology can misidentify darker-skinned women up to 35% of the time, compared with less than 1% for lighter-skinned men. Bias is not limited to images: hiring algorithms trained on past data have favoured men or certain ethnic groups, like Amazon's recruiting tool that downgraded resumes containing the word “women's”. This can reinforce existing inequalities.
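Disparities like these are typically uncovered by comparing a model's error rate across demographic groups rather than looking at overall accuracy alone, which is what the Gender Shades study did. Here is a minimal sketch of that audit idea in Python; the records are made-up placeholders, not the study's data:

    # Sketch of a fairness audit: compare a model's error rate across groups.
    # The (group, true_label, predicted_label) records are made-up placeholders.
    from collections import defaultdict

    records = [
        ("lighter-skinned men", "male", "male"),
        ("lighter-skinned men", "male", "male"),
        ("darker-skinned women", "female", "male"),  # one misclassification
        ("darker-skinned women", "female", "female"),
    ]

    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, truth, predicted in records:
        totals[group] += 1
        if truth != predicted:
            errors[group] += 1

    for group, total in totals.items():
        print(f"{group}: {errors[group] / total:.0%} error rate")

If the per-group error rates diverge sharply, the system is biased even when its overall accuracy looks impressive.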
Bias also comes from who owns and builds AI. Chinese models like DeepSeek show a positive slant toward the Chinese government and censor criticism of it. Many AI systems avoid criticizing their creators, subtly shaping opinions to favour certain views. This lack of neutrality can cloud how we understand complex issues.
AI also raises plagiarism concerns: one analysis found that around 60% of AI-generated text contains copied or closely paraphrased content without credit. This threatens creators' rights and originality.
On top of that, AI fuels deepfakes and disinformation. Deepfake videos and audio can mislead, spread falsehoods, influence elections, and even incite violence.
AI's role in military tech is equally alarming. Autonomous weapons risk deadly mistakes without human control. AI misuse can also include creating non-consensual sexual images, which violates privacy and causes real harm. These risks call for urgent global governance of AI's power, governance that enables more responsible, and thus more ethical, use.
Social Concerns
Other major concerns surrounding GenAI are social ones, namely the foreseen impact of these technologies on mental health, particularly in relation to suicide. AI chatbots and conversational agents are increasingly used for emotional support, but they are far from perfect. There have been troubling reports of AI failing to recognize or respond appropriately to users expressing suicidal thoughts, sometimes offering harmful or dismissive replies. These failures raise urgent questions about the safety and ethical responsibility of deploying AI in areas as sensitive as mental health.
Artists and creators are also feeling the pressure. With AI able to generate art, music, and writing instantly, many fear a devaluation of human creativity and craftsmanship. The ease of producing AI-generated content threatens to overshadow original work, making it harder for artists to sustain livelihoods and receive proper recognition.
Privacy concerns loom large as well. AI models often require massive amounts of personal data to function effectively, and this data can be mishandled, leaked, or exploited. The rise of AI surveillance tools and data scraping practices puts individual privacy at risk, sometimes without users’ knowledge or consent.
Finally, the fear of job loss is widespread. AI automation is poised to disrupt many industries, from customer service to manufacturing. While AI can create new opportunities, the transition may leave many workers displaced or underemployed, especially those in repetitive or low-skill roles. The social fabric must adapt to ensure that technological progress benefits everyone, not just a select few.
Alternatives and Solutions
While the concerns around AI are real and urgent, it's important to remember that AI itself is not inherently bad. It's a powerful tool that can drive innovation, improve lives, and solve complex problems when used responsibly. The key issue lies in AI's accessibility and regulation. Currently, AI is too open and alarmingly unregulated, allowing misuse and unchecked growth that harm both the environment and society.
One promising alternative is supporting AI projects and platforms that prioritize sustainability and ethics. For example, I have limited my own AI use as much as possible, and when I do need it, I use Ecosia, which plants trees with its ad revenue and is exploring ways to integrate AI responsibly while maintaining its environmental goals. Encouraging the development of AI systems that minimize energy and water use, respect privacy, and actively combat bias can help steer this technology toward a more equitable future.
Regulation also plays a crucial role. Governments, companies, and researchers need to collaborate on clear ethical guidelines, transparency standards, and accountability measures. This includes protecting data privacy, preventing discriminatory biases, and ensuring AI cannot be weaponized or used to spread misinformation unchecked. Public education about AI’s strengths and limitations can empower users to engage critically rather than to passively accept AI outputs.
Conclusion
AI is not the enemy—it’s a tool that reflects the values and choices of its creators and users. It can streamline healthcare, boost creativity, and open new frontiers in research. But the rapid rise of AI also brings significant ethical, social, and environmental challenges that we cannot ignore.
The problem isn’t AI itself, but how easily accessible it is without enough safeguards. If we want AI to serve humanity rather than harm it, we need thoughtful regulation, responsible innovation, and collective awareness. By balancing the benefits of AI with careful oversight and ethical considerations, we can harness its potential while protecting our planet, our communities, and our intellect.
Let’s talk about AI—not just as a technology, but as a reflection of who we are and the future we want to build.
Works Cited
ChatGPT. (n.d.). Wikipedia. https://en.wikipedia.org/wiki/ChatGPT
OpenAI. (n.d.). Wikipedia. https://en.wikipedia.org/wiki/OpenAI
Delbanco, A. (2023). The inside story of how ChatGPT was built from the people who made it. MIT Technology Review. https://www.technologyreview.com/2023/01/24/1067863/the-inside-story-of-how-chatgpt-was-built/
LLM vs ChatGPT. (n.d.). TechTarget. https://www.techtarget.com/searchenterpriseai/definition/large-language-model-LLM
Vincent, J. (2023). AI has an environmental problem. Here’s what the world can do about that. The Verge. https://www.theverge.com/23712033/ai-environmental-impact-energy-water-usage
Explained: Generative AI’s environmental impact. (2023). MIT News. https://news.mit.edu/2023/explained-generative-ai-environmental-impact-0110
Data centers and water consumption. (n.d.). Environmental and Energy Study Institute. https://www.eesi.org/articles/view/data-centers-and-water-consumption
Why AI's water problem might actually be an opportunity. (2023). World Economic Forum. https://www.weforum.org/agenda/2023/01/ai-water-sustainability-opportunity/
AI is accelerating the loss of our scarcest natural resource: Water. (2023). GreenBiz. https://www.greenbiz.com/article/ai-accelerating-loss-our-scarcest-natural-resource-water
Artificial intelligence: Big Tech’s big threat to our water and climate. (2023). The Guardian. https://www.theguardian.com/environment/2023/jan/15/artificial-intelligence-big-tech-threat-water-climate
Artificial intelligence: Examples of ethical dilemmas. (n.d.). UNESCO. https://en.unesco.org/artificial-intelligence/ethics
Co-founder of a $380 billion AI company studied literature in college — and she has ‘zero regrets’. (2023). ABC News. https://abcnews.go.com/Business/co-founder-380-billion-ai-company-studied-literature/story?id=95124474
Anthropic cofounder says studying the humanities will be ‘more important than ever’ in the age of AI. (2023). Fortune. https://fortune.com/2023/09/20/anthropic-cofounder-humanities-more-important-ai/
Delving into the dangers of DeepSeek. (n.d.). TechCrunch. https://techcrunch.com/2023/01/10/delving-into-the-dangers-of-deepseek/
Is ChatGPT plagiarism? Data shows surprising results. (2023). Copyleaks. https://copyleaks.com/ai-content-detector/chatgpt-plagiarism
Military applications of artificial intelligence. (n.d.). Wikipedia. https://en.wikipedia.org/wiki/Military_applications_of_artificial_intelligence
Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of Machine Learning Research, 81, 1–15. https://proceedings.mlr.press/v81/buolamwini18a.html
Dastin, J. (2018, October 10). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G
Mozur, P. (2018, July 8). Inside China’s dystopian dreams: A.I., shame and lots of cameras. The New York Times. https://www.nytimes.com/2018/07/08/business/china-surveillance-technology.html
Zhang, Y., et al. (2023). The prevalence of plagiarism in AI-generated texts. Journal of Artificial Intelligence Research.
Chesney, R., & Citron, D. K. (2019). Deep fakes: A looming challenge for privacy, democracy, and national security. California Law Review, 107(6), 1753–1819. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3213954
Vincent, J. (2020, January 23). AI-generated deepfake porn is ruining lives. What can be done? The Verge. https://www.theverge.com/2020/1/23/21075692/deepfake-porn-ai-technology-fake-videos-privacy