The most interesting and consequential news to come out of Rishi Sunak’s two-day AI Safety Summit may be that Yoshua Bengio has been appointed its chief risk assessor. Bengio, a Canadian deep-learning pioneer, will lead a team of experts in what amounts to dimensionalizing an unknown threat. Not entirely unknown: Bengio’s work includes warnings about democracy’s vulnerability to AI, something we are watching play out in real time in Gaza as I type. But shapeless, and full of fictional shadows. The greatest threat of AI most likely lies in our ability to use it against each other, not in AI turning against humanity. But it is so early that anything, and I mean literally anything, could happen. And the greatest risks we face related to AI may not even be about the technology itself.
Navigating the Unknowns: AI’s Threat to Democracy and Societal Structures
It’s a perfect storm if you’re a person in power. People with power tend to need to feel in control, and this is the ultimate destabilizer: a hugely powerful new technology, unknown and currently uncontrolled, that could one day become coordinated and autonomous, being advanced today by a group of ad hoc, uncoordinated actors whose primary motivation is self-interest.
Bengio’s appointment reflects the nature and depth of the international community’s concern over AI under these circumstances. We’re deep into frontier AI now. With nearly every inch of this planet mapped, a vastly different wild, one of our own making, looms.
And here’s the thing. Over the past ten days, we’ve seen strong and revealing AI policy positions, either defining or endorsing an approach to AI, from most of the industrialized world. The concern is real and legitimate, and much of it stems from Dr. Geoffrey Hinton’s resignation from Google and his warnings about where we might be headed.
The Power Dynamics of AI: Control, Autonomy, and Self-Interest
But Bengio knows as well as anyone that the genie is out of the box. Not that it was ever really in the box. No one with the power to influence this technology, and any reasonable shot at controlling or leveraging it, is going to back away now. National policies and provisions for accountability mean little when, on a practical level, everyone in this game is still jockeying for advantage. Whoever controls AI controls the future. No set of policies and best practices is going to dampen the excitement of this once-in-a-species opportunity. Not while wars are being waged and billions are being invested.
We can try to map the borders of our risk and feel like we’ve contained something.
Unleashing the Genie: The Inevitability and Irreversibility of AI Advancement
But contain it we have not, when we can still barely define it. And tomorrow’s definition might differ from today’s. A year ago, did any of us envision the dazzling, explosive arrival of ChatGPT? This is the paradoxical nature of AI advancement:
1. Control vs. Uncertainty: The drive by those in power to control and utilize AI clashes with the inherent unpredictability and vast potential of the technology. This creates a “perfect storm” scenario where the push for advancement and utilization of AI might outpace our understanding and ability to mitigate its risks effectively.
2. Geopolitical and Corporate Dynamics: AI’s development is dominated by a mix of state and non-state actors, often with conflicting interests. This dispersed, competitive landscape makes cohesive global regulation challenging. The assertion that whoever controls AI controls the future speaks to the high stakes involved, making international cooperation both crucial and complicated.
3. The Challenge of Defining and Containing Risks: The risks of AI are difficult to fully define and understand at this stage, making them harder to regulate and contain. This is a moving target, with technological advancements continuously reshaping the landscape.
4. Genie Out of the Box: The metaphor of the “genie out of the box” highlights the irreversible advancement of AI technology. Even with awareness of potential risks and calls for careful oversight, the momentum behind AI’s development is unlikely to wane given its immense potential and the investments at stake.
5. Policy vs. Practice: The dichotomy between national policies on AI and the practical actions of stakeholders in the AI arena is critical. Policies might set frameworks and express intent, but real-world actions driven by competitive advantage, economic gain, and strategic control often diverge from these guidelines.
The Societal Impact of AI: Job Disruption on the Horizon
The most urgent and biggest risks may not be technical but societal. We are six to 24 months away from the disappearance of millions of customer service and support jobs. Three million people work in contact centres in the US alone; millions more work in other service and support environments globally.
By the end of 2024, AI will be able to manage not just first-level interactions but second-level, ongoing support scenarios and resolutions. Not all, but many. Employers are already planning for the transition to automation. They don’t have to screen resumes. They don’t have to make offers. They just have to click a few buttons.
The need for these workers will disappear almost instantaneously. How the transition plays out for this and other industries is a complete unknown, but likely no more than 10% of those workers will find replacement roles. It’s an entry-level apocalypse.
What happens to a workforce when you only need managers? What happens when most professional career paths suddenly have very unclear starting points? These conversations need a greater sense of urgency, because beyond the technological risks, the potential for societal upheaval may be the greatest risk of all.