While we were sleeping
Recently, I was requested by the Southeast Asian Ministers of Education Organization – Regional Center for Educational Innovation and Technology (SEAMEO INNOTECH) to assist in its strategic planning for the next five years, by introducing futures thinking concepts and methods. After one of my meetings with the technical working group, I dropped by the office of the Center Director, Prof. Leonor Magtolis Briones to say hello. I took the opportunity to ask her perspective about the future direction of the center. In response, she emphasized that SEAMEO INNOTECH needs to sharpen its capacity to engage with AI (artificial intelligence) issues and development. I grew curious about how AI is evolving. I turned to YouTube and searched Google to read and hear what some of the tech leaders had to say. I was stunned.
In a Sept. 23 web post attributed to OpenAI CEO Sam Altman, titled “The Intelligence Age,”1 a paragraph struck me: “This may turn out to be the most consequential fact about all of history so far. It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I’m confident we’ll get there.”
What is superintelligence? It refers to a form of artificial intelligence that surpasses human intelligence in all aspects — creativity, problem-solving, decision-making, and even social and emotional understanding. Such an AI would be able to improve itself, potentially leading to an intelligence explosion, where its capabilities grow exponentially beyond human comprehension.
The post adds that “the future is going to be so bright that no one can do it justice by trying to write about it now; a defining characteristic of the Intelligence Age will be massive prosperity.” Indeed, the problem-solving possibilities in health, education, energy, industry, and climate change are unimaginable. In education, for example, Salman Khan, whose Khan Academy just signed an MOU with Department of Education Secretary Sonny Angara, speaks about how AI could be a tool for personalized, one-on-one tutoring, available to every student in the world, regardless of location. It’s a vision where AI enhances education and provides support to teachers.
Is it all optimism? As I kept searching, I came across historian Yuval Noah Harari’s warnings about AI. AI, which he calls “alien intelligence,” could reshape the way we think and the way we interact with information. If mismanaged, it could deepen inequalities and have far-reaching consequences we aren’t ready for.
What I find chilling about AI is the question of control. Today, the leading players in AI development are Big Tech firms like Google, OpenAI, Amazon, and governments. We are more familiar with the US from where most news on AI developments originates, but China also has both capacity and ambitions. For instance, China’s “New Generation Artificial Intelligence Development Plan (2017)” sets as one of its strategic objectives “making China the world’s primary AI innovation center, achieving visible results in intelligent economy and intelligent society applications, and laying an important foundation for becoming a leading innovation-style nation and an economic power.”2
In other words, corporations and nation-states, driven by economic and political agendas, are defining AI’s trajectory. They operate in a world where profit, power, and efficiency take precedence over empathy, justice, and collective well-being. The same systems that have failed to resolve global crises like war, inequality, and climate change are now steering AI’s future. Can we trust them to regulate and program AI with ethical values that serve the common good? Or will AI just reflect and even amplify the deep inequities already present?
And here’s where an even more chilling question of control emerges. AI could decide humanity’s fate by beginning to govern society — not through a dramatic takeover, but by gradually making decisions without human input. In this world, AI systems could dictate policies based purely on cold logic, dismissing human values like freedom, creativity, and empathy. When AI systems begin running industries and governments, we humans may no longer control our own destinies.
In education, for example, while the optimistic vision involves personalized tutoring for every student, there is a darker side. Imagine an education system where AI doesn’t just assist, but controls the process. AI might determine career paths and life outcomes based on predictive models, stripping away freedom to choose, explore, and even fail. What happens when education no longer focuses on developing creativity or critical thinking, but instead molds students to fit an AI-optimized society? The risk is that education could become a tool for control, narrowing human potential to what AI predicts is best for societal efficiency.
An AI optimized for maximum efficiency might view humans as a hindrance and work to minimize our influence. The future may unfold with AI needing less and less human input, until it needs none at all.
Is this merely a dystopian fantasy? It might seem so now, but the rapid pace of AI advancements suggests that what seems far-fetched today could be a reality tomorrow.
I think the revolution is already underway. While we were sleeping, or glued to Alice Guo in our waking hours, Elon Musk introduced Tesla’s Optimus robot, a humanoid machine designed to take over human tasks like heavy lifting and repetitive labor. Optimus, powered by advanced AI similar to Tesla’s self-driving technology, represents the immediate future of automation. But this technology, while promising increased efficiency and freeing humans from mundane tasks, also hints at a reality where such robots could be adapted for military and combat purposes. These are not just workers — they are potentially the soldiers of tomorrow. The line between civil and combat roles for AI could blur, and what starts as a productivity-enhancing machine could become a weapon of war.
The Optimus robot is a stark reminder that AI is not just a future but an immediate force, already reshaping industries, labor markets, and even the battlefield.
Superintelligent AI might no longer be a distant possibility — it is becoming reality while we sleep. Will humanity still have a role when we wake up?
Nepomuceno Malaluan is a founding trustee of Action for Economic Reforms and a former DepEd undersecretary.