Artificial Intelligence, Real Consequences

Why AI is Helping the Rich and Hurting the Rest of Us

Nafeesah Nawar

“Carbon footprint” is a big word for a third grader. When he comes home from school with an assignment to write a 200-word essay on tree planting, he quickly types the prompt into ChatGPT. In just a few seconds, he receives a neatly packaged essay, polished and just under 250 words, filled with terms like carbon footprint, loss of natural habitat, decreased biodiversity, and other lofty consequences of deforestation. He sits quietly at his desk, eyes on the screen, and doesn’t think twice. He copies every word into his notebook and submits it the next day. His teacher, pressed for time, doesn’t question the sophistication of the language and awards him a perfect 10. But somewhere along the way, we lost the few moments that could have sparked the child’s own thoughts: about what trees do, how the Amazon serves as the planet’s lungs, and how Earth’s breath is tied to the breath in his own chest. Instead, he submitted an understanding that wasn’t his. And I believe the most artificial part of artificial intelligence is how easily it dulls the real intelligence we carry within us.

AI, as it stands today, is not environmentally sustainable. That is one of our biggest concerns. Training large language models like ChatGPT involves hundreds of thousands of high-performance GPUs running in parallel, drawing megawatts of power around the clock. Cooling the data centres consumes staggering amounts of water, as recently reported about Google’s Iowa facility, and chip manufacturing itself involves mining rare-earth minerals under exploitative labour conditions. When we say that AI is not sustainable on its current trajectory, we are being too diplomatic. The truth is, it is disproportionately centralised and built in a way that mirrors and magnifies the extractive logic of industrial capitalism.

But while we focus on the negative externalities that can be measured, like emissions or environmental harm, we often overlook that the full social cost of artificial intelligence goes far beyond carbon output. It begins with the cognitive loss we witnessed in the third grader. We must come to terms with the depth and seriousness of this issue. The prefrontal cortex of young minds is still developing, shaping their ability to make decisions, think critically, and navigate the complex realities of the world, and yet these children are missing out on genuine cognitive engagement. And why? Because it’s far too easy to type a prompt into ChatGPT. Many argue that the process is no different from copying a result off Google, but the comparison falls short. Google requires you to search, compare, and extract, activities that spark curiosity. ChatGPT, on the other hand, does the thinking for you. It delivers a clean, coherent answer in the voice you asked for, especially when prompted to “write it like I’m in third grade” or “make it sound more human.” The labour of learning is quietly bypassed.

Many people uphold AI as a symbol of efficiency. But efficiency becomes a myth when defined only by speed, not by depth. AI is not built to think critically the way the human brain does. When asked whether certain political figures qualify as authoritarian or dictatorial, it does not offer a straightforward answer; it remains neutral. This is because AI, at its core, is a language model designed not to reason but to synthesise. It is not intelligent in the way human beings are. It cannot feel, it cannot will, and it cannot weigh value unless mathematically instructed to fulfil an objective. It cannot make assumptions or reason through uncertainty on its own. All it does is synthesise information.

If AI relies on mainstream consensus and scholarly sources, is it truly intelligent or just a glorified information recycler? And is it ever justified for AI to replace human creativity when it cannot match the complexity or imaginative depth of the human mind? This is precisely where ethics, policy, and power dynamics collide. The justification for AI replacing jobs is often framed through the lens of economic utility: it’s cheaper, faster, and scalable. But this framing overlooks the nuanced human skills—ethical judgment, cultural awareness, and socio-emotional intelligence—that AI cannot replicate. While it may perform tasks with speed and surface-level efficiency, it cannot create meaning the way a human does.

So when corporations replace human workers with AI, they may gain short-term profits—but often at the expense of long-term social cohesion, creative diversity, and human dignity. Already, AI has displaced copywriters, translators, call centre agents, junior coders, and even visual artists. When tech firms adopt AI systems and quietly replace copywriters, the public narrative is one of efficiency and innovation. The top tech elites—those with capital, data, and server farms—profit disproportionately. Those at the bottom—the freelancers whose rates are undercut, the content moderators who suffer trauma filtering AI outputs—become collateral damage. 

AI is best suited for the very tasks that offer the first rung on the economic ladder: entry-level content creation, administrative support, technical drafting, retail service chats, and other low-skill jobs that are, in fact, vital to marginalised economies. These roles are often the only access points for people without elite degrees, without capital, or from underserved regions. When AI takes over these jobs, what does it give back? Does it create new opportunities with equal dignity and accessibility? The honest answer is no.

The emerging jobs in AI, such as prompt engineering, model alignment, and system auditing, are gated by high educational barriers and mostly occupied by the already privileged, those with technical fluency and access. In practice, this widens the gap. It hollows out the middle of the workforce and replaces it with unpaid digital labour. AI cannot perform the high-trust, high-stakes work that requires empathy, physical presence, extended reasoning, or moral courage: performing surgery, negotiating peace, parenting a child, or leading a nation. So what is it really doing? It is replacing the jobs that once offered dignity to the vulnerable, without being able to elevate the ones that demand true intelligence. That is not innovation. That is economic disfigurement masked as progress.

AI systems come with real environmental and social costs. Their emissions contribute directly to climate degradation—effects that disproportionately impact people in low- and middle-income countries. When the water tables run dry because a server system nearby needed millions of gallons to stay cool, it’s not a tech investor in Silicon Valley who pays for it—it’s a farmer in a semi-arid village whose crop yield has now fallen short.

Finally, does AI offer any benefit at all? Even amidst the vast externalities, social costs, and redundancies it creates, costs that often outweigh its positives, artificial intelligence can still do some good. It can help someone with a disability communicate, support under-resourced teachers by generating supplementary material, or guide isolated students when no one else is available. It can automate tedious tasks, freeing people to focus on more meaningful work, if the system is designed equitably. But that is not how most AI is used today. It is deployed to cut costs for tech firms, to expand advertising reach, and to replace rather than assist. Its benefit lies in potential, but in practice that potential is hijacked by corporate interests. The core of the issue isn’t simply whether AI is efficient; it’s whether it is just. Does it care? And if not, why do we continue to empower it?

The irony is brutal: the very populations that cannot afford access to cutting-edge AI are the ones most economically displaced by it. Even in the name of efficiency, AI intensifies inequality—benefiting those with access to capital and infrastructure while deepening instability and exclusion at the base.
