Last month, China unveiled plans for a global body to oversee artificial intelligence premised on equal access to the technology, deeper cross-border collaboration, and steps to ease bottlenecks such as hardware restrictions and limits on talent exchange.
Six months earlier, 58 countries met in Paris to endorse a declaration on “inclusive and sustainable AI,” which the United States and the United Kingdom declined to sign.
The approaches differ, but the pattern is the same: the race to control AI is moving far faster than any attempt to regulate it.
Artificial intelligence is a force multiplier for economies, security systems, and geopolitical leverage.
In the past decade, it has moved from tech labs into military targeting suites, government procurement, and core infrastructure. The question of whether it can be meaningfully regulated has moved from seminar rooms to cabinet tables.
When the same model can assist in passing a medical licensing exam one day and generate realistic deepfake videos the next, it becomes clear that today’s oversight tools do not match the speed, versatility, or reach of modern AI.
Existing regulatory frameworks were built for slow advances and narrowly defined applications. They were not designed for systems that can produce human-level text, analyse satellite imagery in seconds, or coordinate fleets of autonomous machines.
This gap is driving calls for new governance mechanisms that can respond to AI’s unique scale, pace, and dual-use potential.
The international dimension of the governance debate is driven by how new technologies change the risk calculus for conflict and instability. Some advances can reduce those risks by improving verification, communication, or deterrence.
Others, especially those offering a clear battlefield edge, tend to raise military spending and intensify arms races. The conventional wisdom is that AI falls into the latter group.
The success of AI in surveillance, targeting, and autonomous systems reinforces the perception that falling behind would mean a serious strategic disadvantage.
In theory, arms-control frameworks or lighter cooperative arrangements could help all sides by setting limits that avoid destabilising competition. In practice, these deals are rare.
Coordination failures, mutual distrust, and AI’s dual-use nature make the kind of internationally shared restraint seen in past arms agreements far harder to achieve today.
Within national borders, regulation is possible for states with the institutional and technical muscle to keep up. At the global level, the forces driving AI adoption make broad, enforceable rules almost impossible.
Domestic front
Domestically, governments hold the levers. Legislatures can pass statutes, regulators can issue rules, and agencies can demand compliance from developers operating in their jurisdiction.
This is the arena where cultural, legal, and economic context can be reflected in governance. A capable state can tailor rules to protect privacy, prevent algorithmic bias, and guard critical systems without suffocating innovation.
That ability, however, is unevenly distributed. Regulation is a capacity game.
Effective oversight demands advanced technical infrastructure, regulators who understand the underlying systems, and institutions agile enough to adjust rules as technology changes.
Without these, regulation risks becoming symbolic rather than effective.
Speed is the first constraint. AI systems can change significantly in a matter of months, while lawmaking usually moves in years.
Rules locked into statute risk becoming outdated before they are applied. Countries that keep pace will be those able to use adaptive mechanisms such as rolling standards, regulatory sandboxes, and streamlined amendment processes to update oversight without legislative paralysis.
States that lock rigid compliance rules in place risk falling short of meaningful regulation.
The second constraint is expertise. In many countries, regulators, judges, and civil servants lack the technical knowledge to assess model architectures, training data risks, or system vulnerabilities.
Without this capability, governments draft rules they cannot enforce.
The weakness is most acute when AI underpins critical infrastructure, healthcare, or financial systems, where oversight failures can have immediate, high-impact consequences.
Without skilled reviewers, even the best-written AI law remains little more than words on paper.
The third constraint is dependence. States that rely on foreign-owned AI platforms, cloud services, or semiconductor supply chains cannot fully enforce their own standards.
If a country’s critical AI systems are trained and hosted abroad, regulators lose the power to audit or modify them. This is not an abstract sovereignty question.
Without independent capability, national regulators operate at the mercy of external suppliers. Europe’s experience with foreign cloud dominance offers a cautionary precedent.