Establishing AI Regulation to Support Responsible Tech Usage

The skyrocketing popularity of artificial intelligence (AI) technologies has raised questions about safety and ethics. These concerns will keep emerging as the technology develops, underscoring the urgency of establishing supportive AI regulation to ensure safe and effective usage.
Ethical Implications of AI Development
Artificial intelligence has the ability to simulate tasks associated with the human mind, including learning, decision-making, and creativity. The rise of AI research is marked by the beginning of the deep learning revolution in the early 2010s. Just a decade after, we have entered an era where AI foundation models, such as GPT-4, Llama 2, and Gemini, are released every few months. Some have even predicted the rise of agentic AI in the future of work: systems that can analyze, decide, and act without human intervention or any prompts at all.
With this development comes debate and ambiguity over the ethical implications of AI usage. For instance, concerns arise over possible privacy violations from AI integration in social media platforms, where the tools collect, analyze, and utilize users’ personal data. AI tools can also generate increasingly realistic images or videos from social media content, exacerbating the risks of online gender-based violence, plagiarism, fraud, and disinformation.
The Progress of AI Regulation
With clear boundaries and regulation, AI tools can help streamline our work and bring other benefits. Therefore, AI regulation plays a key role in fostering the responsible development and usage of these tools.
Data from the OECD shows that 71 countries and regional organizations have made some progress on AI regulation, ranging from non-binding guidance to legislation. Examples include the European Union and several states in the United States, which have taken the initiative to introduce guidelines or proposals for regulations on ethical AI.
From the business side, tech giants like IBM, Microsoft, and Google have also published guidelines for ethical AI. These guidelines emphasize several common principles: transparency and trust, accountability, privacy, fairness, and the creation of environmentally friendly AI.
Unfortunately, environmental matters are not yet a priority in AI regulation. For example, the United States Artificial Intelligence Environmental Impacts Act of 2024 only establishes a voluntary reporting system for the environmental impacts of AI. Similarly, the European Union AI Act addresses the environmental impact of AI only through a voluntary code of conduct on energy-efficient programming.
Responsible Tech Usage
As AI technologies become increasingly integrated into our lives, we must maintain a critical lens on how the tools are developed and used. Establishing and expanding AI regulation to tackle issues of safety, ethics, and environmental sustainability is instrumental.
Governments, businesses that use and develop AI tools, and civil society must work together to overcome these challenges. Furthermore, international cooperation is also necessary for aligning local policies with global AI standards. Addressing this issue can foster more responsible use of AI tools, boosting economic growth without inflicting harm on people and the planet.
Editor: Nazalea Kusuma & Kresentia Madina
