Understanding the Dark Side of Artificial Intelligence
Illustration: Irhan Prabasukma.
Amidst the hype of supposedly revolutionary technological innovations, in which Artificial Intelligence (AI) is hailed as the golden key to the future, a growing concern arises: are we building a society based on logic and intelligence, or are we creating an illusion that destroys common sense? In the euphoria sponsored by Big Tech companies like OpenAI, Google, and Microsoft, critics argue that the term “AI” has been reduced to a marketing tool, a piece of speculative jargon, and even a means to legitimize systemic exploitation. The promised technological feat that is transparent, responsible, and brings the world closer to sustainability and justice is nowhere to be found.
Artificial Intelligence: Technology at the Expense of Humans
Since last year, I’ve been using three books as the basis of this reflection. First, AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference by Arvind Narayanan and Sayash Kapoor. Second, The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want by Emily M. Bender and Alex Hanna. Third, Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI by Karen Hao.
Those three books are more than academic criticism; they are bright red alerts flashing in the rooms of researchers, newsrooms, and marginalized communities. From them, we can conclude that today’s AI landscape is not the pinnacle of technological evolution but an elaborate play that sacrifices people for profit and power and hinges on the myth of super-intelligence.
Modern AI companies have turned the technology into a commodity, backed by grand narratives so sweeping they are nearly irrefutable, except to those who really dig into the mind, heart, and stomach of these companies. AI companies promise miracles: virtual doctors that heal, robot lawyers that never misquote, and writers that churn out literary masterpieces in seconds.
However, what happens behind the scenes is different. The Artificial Intelligence they offer is not an entity that thinks, understands, or, least of all, creates. It is a giant statistical system that mimics patterns from data collected without permission, much of it systematically scraped from human intellectual works.
As Bender and Hanna put it, large language models are merely “stochastic parrots” that repeat what they hear without understanding the meaning. They don’t write; they copy. They don’t think; they predict word orders. Yet the narrative tech companies have built is so strong that the general public believes these machines “know” what they’re talking about. In reality, AI is a simulation of intelligence so believable, even awe-inspiring, that it fools its users.
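To make the “predicting word orders” point concrete, here is a minimal, hypothetical sketch of the statistical principle involved. It is a toy bigram model with an invented corpus, not how production LLMs are built (they use neural networks trained on vast corpora), but it shows how fluent-looking text can emerge from nothing more than counting which words tend to follow which.

```python
import random
from collections import defaultdict

# A toy "stochastic parrot": it learns which word tends to follow which,
# then generates text by sampling those frequencies. Real LLMs use neural
# networks trained on vast corpora, but the underlying move is the same:
# predicting a plausible next word, not understanding meaning.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count which words follow each word (bigram statistics).
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def parrot(start: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling a statistically likely next word."""
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(parrot("the"))  # e.g. "the dog sat on the mat and the cat" -- fluent-looking, meaning-free
```

The output can read like a sentence, yet nothing in the program has any notion of cats, mats, or meaning; it only reproduces observed word orders.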
Systemic Exploitation
One of the most obvious forms of exploitation here is the theft of intellectual works to train AI models. Hao reveals that OpenAI and similar companies built the foundation of their businesses on illegally and unethically collected data. Literary works, journal articles, film scripts, copyrighted books, and paintings: none of them are safe from being scraped without permission, without credit, and, of course, without compensation. Two of my favourite writers, George R. R. Martin and John Grisham, have sued AI companies for using their novels to train models without their permission.
This is not just about the law; it is also about morality. The AI training process is a form of systematic content theft, a daylight robbery of value from creators who have spent years honing their craft, exploring references, and pouring everything into their creations. Meanwhile, AI companies make billions from technology built on the back of other people’s hard work. Ironically, those writers and artists are now losing work because the market is saturated with AI-generated output. This is, quite obviously, not progress but the massive destruction of creative ecosystems.
Moreover, the AI industry has created a new form of labor exploitation. Behind the “magic” of chatbots like ChatGPT are thousands of outsourced workers in developing countries, assigned to clean up data, flag dangerous content, and provide feedback to make AI models seem more human. Narayanan and Kapoor reveal that these workers, often called content moderators or data annotators, work in poor, even traumatic, conditions, with low pay and little to no psychological protection. Day after day, they are forced to view violent content, child pornography, and hate speech, all to ensure the chatbots won’t put out similar content.
This is a form of cognitive colonialism. Rich companies in the Global North profit massively from “advanced technology” while workers in the Global South pay the mental and emotional toll. These workers are never mentioned in ads, interviewed by the media, platformed at global seminars and conferences, or given royalties for the products. They are unseen shadows. Yet without them, Artificial Intelligence would never be able to mimic human writing and speech.
Diversion from Real Problems
The more concerning and annoying thing is how AI companies have deliberately created the myths of Artificial General Intelligence (AGI) and Artificial Super-Intelligence (ASI) to divert our attention from the real problems. The terms AGI and ASI, referring to machines with intelligence equal to or higher than humans’, are sold as an inevitable horizon, a revolution that will surely come and should be welcomed with open arms. But, as Narayanan and Kapoor point out, there is no proof that AGI, let alone ASI, is possible. The claims of their imminent arrival often come from people with a financial interest in maintaining the hype.
OpenAI, for example, was established with a mission to create AGI for “the good of all people”. In practice, however, it operates as a capitalist company seeking profit by dominating the market. Even scientists like Geoffrey Hinton, known as the Godfather of AI, warn us about the existential risks posed by Artificial Intelligence. Tech company executives often use speculative, dramatic language to frame AI as a futuristic threat that must be controlled by a handful of elite technocrats. Yet the real existential threat is the diversion of attention from today’s real dangers: discrimination powered by predictive systems, mass surveillance by state and intelligence bodies, massive layoffs, and nearly unstoppable misinformation and disinformation at scale.
Predictive systems, the backbone of many AI applications, are among the most dangerous technologies sold as solutions. Narayanan and Kapoor show that predictive AI, used to forecast human behavior such as someone’s likelihood to commit a crime or repay a debt, is at its core unreliable, because humans are not deterministic systems. They change, learn, and react to their surroundings. Precognition technology of the kind depicted in the film Minority Report is dangerous and vulnerable to manipulation, as the film’s own ending shows.
Yet corporations and governments keep using these systems to make important decisions, such as who will get a loan, be employed, or be jailed. The result? The reproduction and even reinforcement of racial, class, and gender biases. Algorithms trained on historical data will reproduce historical inequalities. A system used in the US criminal justice system to predict the risk of reoffending disproportionately flags Black people as high risk even when they do not reoffend. This isn’t neutrality; it’s codified discrimination.
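As a hypothetical illustration of how this happens, consider the sketch below. The records and the decision rule are invented purely for illustration; real predictive systems are far more complex, but the failure mode is the same: a model that mimics historically biased decisions treats the bias as if it were signal.

```python
from collections import Counter

# Invented "historical" records: (group, past_decision).
# Past decision-makers approved group A far more often than group B,
# even though applicants in both groups were equally qualified.
history = (
    [("A", "approve")] * 90 + [("A", "deny")] * 10 +
    [("B", "approve")] * 40 + [("B", "deny")] * 60
)

def learned_decision(group: str) -> str:
    """A naive 'predictive system': mimic the most common past decision per group."""
    decisions = Counter(decision for g, decision in history if g == group)
    return decisions.most_common(1)[0][0]

print(learned_decision("A"))  # approve -- the historical favoritism, now automated
print(learned_decision("B"))  # deny    -- the historical discrimination, now automated
```

Nothing in the training data measures merit; the system simply encodes who was favored in the past and applies it, at scale, to the future.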
Mass media are accomplices. Narayanan and Kapoor reveal that much of the news and reporting on AI blindly repeats PR statements from AI companies, depicting AI as humanoid robots to imply intent and agency. Phrases like “AI learning” and “machine determinism” create the illusion that this technology is alive and autonomous, when in fact it is just algorithmic execution based on human data input.
The media often show numbers without context, like “AI machines have reached 95% accuracy!”, without adequately explaining that the figure may only reflect accuracy on a limited data set that doesn’t represent real-life conditions. Sometimes, mass media also use metaphors that portray AI as sacred, as though this technology were a godly entity to worship. Journalism, which should be the gatekeeper of truth, becomes a marketing funnel that glorifies tech companies.
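To see why such a headline number can be hollow, here is a minimal, invented example: on a test set where 95% of cases are negative, a “model” that always predicts negative scores 95% accuracy while detecting nothing at all. The data are made up purely for illustration.

```python
# Invented example: a test set where 95% of cases are negative.
labels = [0] * 95 + [1] * 5  # 95 negatives, 5 positives

# A "model" that always predicts negative, no matter the input.
predictions = [0] * 100

accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
positives_caught = sum(p == 1 and y == 1 for p, y in zip(predictions, labels))

print(f"Accuracy: {accuracy:.0%}")                        # Accuracy: 95%
print(f"Positive cases caught: {positives_caught} of 5")  # 0 of 5
```

A reader given only “95% accuracy” has no way to know whether the system works or merely parrots the majority class.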
Questioning “Advancement”
Still, the most provocative aspect of it all is how AI companies have successfully positioned themselves as humanity’s saviors while continuously developing extremely harmful products. For instance, OpenAI released a statement about the existential risk of AI; at the same time, it launched models that can be used to create deepfakes, spread misinformation, and displace human jobs. They ask for a moratorium on large model development, but only after they themselves have built a foundation that dominates the market. This is a clever yet cunning tactic: create a threat, then position yourself as the only savior. As Hao puts it, Altman and his partners at OpenAI are not just building a company; they are building an empire. They are not merely seeking economic power; they want epistemic power, where they are the sole arbiters of truth, intelligence, and the future.
In a world powered by Artificial Intelligence, we really need to hit the brakes and ask: who defines advancement? Is advancement when a chatbot can write books and poems, or when a writer loses their livelihood because the market is flooded with fakes? Is advancement when a system can predict a crime, or when a system destroys someone’s life due to an algorithmic mistake?
Narayanan and Kapoor assert that predictive AI will likely never succeed, because humans cannot be predicted like the weather or the stock market. Yet AI companies keep selling it because there is a lot of money in it. They are not selling solutions; they are selling an illusion. As long as the illusion is collectively believed, the profits will keep flowing.
Digital Greenwashing
An even more concerning issue is the environmental impact of the AI industry. Bender and Hanna note that Google and Microsoft, in their obsession with AI, have abandoned the climate commitments they repeatedly touted in sustainability reports and at prestigious events like the World Economic Forum. Training AI models requires vast amounts of energy, millions of kilowatt-hours, and much of it comes from fossil fuels.
Data centers, the backbone of their operations, are black holes of energy. Meanwhile, these companies insist on claiming they are “green” by buying carbon credits. This is digital greenwashing: claiming to be environmentally friendly while corroding the planet’s future for a leg up in the AI industry. Buying carbon credits should be the last step after thorough efficiency efforts, not a penance paid while generating enormous amounts of emissions.
Rethinking Artificial Intelligence
However, not all hope is lost. As Bender and Hanna say, we can fight the hype. We can hold AI companies accountable. We can refuse to use products built on exploitation. We must leave behind AI made by companies that support and perpetuate colonialism, apartheid, and the genocide in Palestine. We can support robust regulations, such as the EU’s AI Act, which classifies AI systems based on risk. We can also support alternative models, like that of Te Hiku Media in New Zealand, which is building Māori-language voice technology based on the principles of data sovereignty, consent, and mutual benefit among all involved. They are proving that AI can be built ethically, if we choose to do so.
We can also change how we talk about AI. As the writers I mentioned suggest, we must stop using the term “AI” vaguely and start being more specific. Is it a face recognition system or a recommendation algorithm? By using the correct terms, we dispel the mystical aura of this seemingly all-encompassing technology. We must ask critical questions: Who built this? What for? From whose data? Who profits? Who takes the loss? And how should we respond?
Ultimately, we must recenter humanity. AI is not a goal, but a tool. And a good tool must serve the interests of humans and humanity, not replace, exploit, or even destroy them. The world as promised by AI companies—a world without work, without uncertainty—is not a world worth living in. We don’t need more stochastic parrots that mimic human creativity. We need systems that respect the work, rights, and dignity of humans.
For me, those three books are a wake-up call from a nightmare dressed up as a daydream of advancement. The writers remind us that technology is not neutral and that power must not be ceded to a handful of corporations that claim to predict the future. We don’t need to wait for AGI or ASI to realize that we’re in the middle of a crisis: a crisis of honesty, ethics, and imagination, even of humanity. It’s time to stop worshipping algorithms and start fighting for a just world, with or without AI.
Editor: Abul Muammar
Translator: Nazalea Kusuma
The original version of this article is published in Indonesian at Green Network Asia – Indonesia.
Jalal
Jalal is a Senior Advisor at Green Network Asia. He is a sustainability consultant, advisor, and provocateur with over 25 years of professional experience. He has worked with several multilateral organizations and national and multinational companies as a subject matter expert, advisor, and board committee member on CSR, sustainability, and ESG. He has founded several sustainability consultancies, where he serves as a principal consultant, and has served as a board committee member and volunteer at various social organizations that promote sustainability.
