
AI Governance: Who Writes the Rules of the Game?

Introduction

Imagine a game where the rules were written by one of the players — not a neutral referee. In this game, that player not only knows the rules but designed them to create an advantage. The other players? They are forced to play by these same rules, even if they had no role in writing them.
This is precisely what is happening today in the world of AI governance.
When it comes to artificial intelligence, most discussions focus on models, data, and hardware. But there is a deeper layer that receives far less attention: Who defines what AI is "safe"? Who determines which applications are permitted and which are not? Who writes the standards that everyone must follow?
These questions seem simple, but their answers determine the fate of nations. The country that writes the rules of the game gains the greatest strategic advantage without firing a single shot.
In the article on the AI economy we saw that models are strategic national assets. In the data war we learned that data is the new oil of this era. Now it's time for the third layer of power: control of norms.

Why Are Rules More Important Than Models?

At first glance this may seem strange. Isn't having GPT-5 more important than writing a law? But history shows otherwise.
Consider the internet. The core protocols of the internet — TCP/IP, HTTP, DNS — were designed by American engineers. These protocols appeared "neutral," but their architecture gave American companies a structural advantage. Today, 7 of the world's top 10 internet companies are American — not by chance, but because they designed the game.
Financial standards tell the same story. SWIFT, the dollar as the global reserve currency, and FATF rules were all written by the West. When America wants to punish a country, it uses these standards as a weapon — without needing an army.
Now this same pattern is repeating itself in artificial intelligence. The country that writes the safety, ethics, and transparency standards for AI not only shapes the market — it can exclude its rivals from it.

Three Powers, Three Different Approaches

The three main players — the United States, China, and Europe — have completely different approaches to AI governance and each is working to establish its model as the global standard:
| Criterion | 🇺🇸 USA | 🇨🇳 China | 🇪🇺 Europe |
| --- | --- | --- | --- |
| Overall approach | Innovation first, regulation later | State-controlled | Rights-based, precautionary |
| Main priority | Technological superiority | National security and stability | Citizens' rights |
| Key document | NIST AI RMF guidelines | Generative AI content regulations | EU AI Act |
| Main concern | Falling behind China | Political instability | Loss of privacy |
| Effect on competition | Encourages free competition | Protects domestic, limits foreign | Heavy compliance burden |
| Model exports | Chip sanctions against rivals | Exports to aligned countries | Standards export (Brussels Effect) |

The Brussels Effect: When Regulation Becomes a Weapon

Europe has a unique strategy known as the "Brussels Effect." Its logic is simple but powerful: if you want access to the market of 450 million wealthy European consumers, you must follow our rules.
This strategy was previously tested with GDPR (General Data Protection Regulation) and it worked. American companies like Meta, Google, and Amazon were forced to redesign their privacy systems for GDPR compliance — not only in Europe but globally, because managing two separate systems was more expensive than complying with the strictest standard.
Now the EU AI Act — the world's first comprehensive artificial intelligence law — is following the same path. This law categorizes AI systems by risk level:
  • Unacceptable risk: Completely banned — such as social scoring systems or subliminal manipulation
  • High risk: Requires rigorous evaluation — such as AI in hiring, credit scoring, or medical decisions
  • Limited risk: Transparency obligations — such as chatbots that must disclose they are AI
  • Low risk: Unrestricted — such as spam filters or video games
Every company in the world that wants to operate in Europe, whether American, Chinese, or Indian, must comply with these regulations. Even without the most powerful models, Europe plays a key role in shaping global AI through market power and regulation.
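To make the tiered structure concrete, here is a minimal Python sketch of the four categories above as a simple lookup. The tier names mirror the list; the use-case keys and the default-to-low rule are simplified assumptions for illustration, not legal guidance.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # rigorous evaluation required
    LIMITED = "limited"            # transparency obligations
    LOW = "low"                    # unrestricted

# Hypothetical mapping of example use cases to tiers (illustrative only).
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "subliminal_manipulation": RiskTier.UNACCEPTABLE,
    "hiring": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "medical_decisions": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.LOW,
    "video_game": RiskTier.LOW,
}

def classify(use_case: str) -> RiskTier:
    """Return the risk tier for a known use case, defaulting to low risk."""
    return USE_CASE_TIERS.get(use_case, RiskTier.LOW)

for case in ("credit_scoring", "chatbot", "social_scoring"):
    print(f"{case}: {classify(case).value} risk")
```

In practice the classification turns on detailed legal criteria rather than a lookup table, but the compliance logic companies must implement has exactly this shape: determine the tier first, then apply that tier's obligations.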

The American Approach: Innovation vs. Control

America has historically taken a different stance: let innovation happen first, regulate if necessary later. This was the same approach that allowed Google, Facebook, and Amazon to grow without restriction.
But this approach is being challenged when it comes to AI. Two opposing forces are clashing in Washington:
Side one — Industry: Big tech companies want minimum regulation. They argue that heavy regulation slows innovation and puts America behind in competition with China.
Side two — Security concerns: Defense and intelligence agencies worry that unmonitored AI can fall into bad actors' hands or create uncontrollable crises. They want more oversight.
In practice, America has so far chosen a mix of both: voluntary oversight for domestic companies, and hardware export controls to prevent China from accessing advanced chips.
These chips, especially NVIDIA's high-end GPUs, have become instruments of foreign policy. By controlling chip exports, America effectively decides which countries can build advanced models.

The Chinese Approach: Controlling and Leading at Once

China has entered the AI governance game from a distinctive angle: it wants to both control AI and lead in it.
China's regulations on generative AI — which were implemented before the EU AI Act — have a unique requirement: all AI-generated content must be consistent with "core socialist values." This literally means a Chinese AI model cannot give answers that contradict the official state narrative.
From a technical perspective, this is a form of ideological fine-tuning: models are trained to avoid certain topics or to describe them in specific ways. It shows that AI governance can also serve as a tool of domestic control, not just external competition.
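As a rough illustration of the mechanism, here is a hypothetical Python sketch (the topic tags, record format, and template are all invented, not any lab's actual pipeline) of how topic steering can be baked into a supervised fine-tuning dataset: responses for flagged topics are replaced with a templated answer before training, so the model learns to reproduce the template.

```python
import json

# Hypothetical topic tags and canned answer; purely illustrative.
BLOCKED_TOPICS = {"topic_x", "topic_y"}
TEMPLATE_ANSWER = "This topic cannot be discussed."

def steer(record: dict) -> dict:
    """Replace the target response when a training record touches a blocked topic."""
    if record.get("topic") in BLOCKED_TOPICS:
        return {**record, "response": TEMPLATE_ANSWER}
    return record

raw_data = [
    {"topic": "cooking", "prompt": "How do I cook rice?", "response": "Rinse, boil, then simmer..."},
    {"topic": "topic_x", "prompt": "Tell me about X.", "response": "An unfiltered answer..."},
]

# Fine-tuning on the steered data teaches the model to produce the
# template whenever a blocked topic comes up.
for rec in map(steer, raw_data):
    print(json.dumps(rec, ensure_ascii=False))
```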

The Standards Battle: ISO, IEEE, and the Hidden War

Beyond government laws, a less visible competition is underway in international standardization organizations.
Bodies like ISO (International Organization for Standardization), IEEE, and ITU are writing technical AI standards — standards that define how a model should be trained, how it should be evaluated, and what criteria are required for it to be considered "trustworthy."
China has significantly increased its presence in these organizations in recent years. In ISO committees related to AI, Chinese representatives are among the most active groups. The goal? Ensuring that global standards are written so that Chinese models can easily meet them.
This is exactly the "write the rules, win the game" strategy.

Who Actually Influences the World?

There is an important reality that is often overlooked: a significant portion of AI governance is exercised not by governments but by private companies.
When OpenAI decides ChatGPT won't discuss topic X, when Meta determines which content the news feed algorithm amplifies, when Google chooses which results appear first — these are governance decisions affecting the lives of billions of people, without any legislature or parliament having approved them.
The field of AI ethics asks precisely this question: is it right that decisions with this scale of social impact are made by the boards of private companies?
Anthropic has taken a different approach with Claude, publishing a public Acceptable Use Policy and striving for transparency in content decisions. Google DeepMind has an internal safety team. Meta has repeatedly changed its policies under public pressure.
But these remain voluntary decisions by private companies — not binding laws. And this governance gap is one of the greatest challenges of our era.

The Global Coordination Problem: Why Is There No "AI United Nations"?

Perhaps the simplest solution would be an international organization, similar to the International Atomic Energy Agency (IAEA) in the nuclear domain, to write global AI rules. But several fundamental obstacles exist:
Obstacle one — Speed: International laws take decades to reach agreement. AI changes within months. By the time agreement is reached, the technology has moved several generations forward.
Obstacle two — Conflict of interest: The US and China — which have the most advanced models — have no incentive to accept rules that limit their power. This is similar to expecting nuclear powers to voluntarily surrender their arsenals.
Obstacle three — Problem definition: Even on the definition of what "dangerous AI" means, no global consensus exists. Is Artificial General Intelligence (AGI) dangerous? Should military applications be banned? Every country has a different answer.
Nevertheless, some multilateral efforts are taking shape. The AI Safety Summit at Bletchley Park (2023) was the first serious international gathering on this topic, attended by the US and Europe and, notably, by China as well. This signals that even rivals recognize that some AI risks cross national borders.

Small Countries: Victims or Players?

In this equation, what role do most of the world's countries play? Unfortunately, at present, they largely play the role of victims.
When the EU passes the AI Act, Iranian, Turkish, or Brazilian companies wanting access to the European market are forced to comply — without having had a single representative in the decision-making process. When America restricts chip exports, countries with no domestic semiconductor industry are placed under pressure.
But a strategy exists for smaller countries: regional cooperation to create a common voice.
As mentioned in the AI economy article, the UAE and Saudi Arabia are investing in Arabic-language models. A next step could be forming a regional "AI governance bloc" — giving Arab or Muslim-majority countries a unified voice in international negotiations.
This is exactly what the European Union did — 27 countries with different policies, one unified voice against the tech giants.

Explainable AI: Governance From Within

Another dimension of AI governance with growing importance is transparency within the models themselves. When an AI model decides a loan application is rejected, classifies a defendant as dangerous, or screens out a resume — should it explain why?
Explainable AI places this question at the heart of governance: without transparency, oversight is impossible. And without oversight, accountability does not exist.
The EU AI Act includes "explainability" as one of its requirements for high-risk systems. This is an example of governance applied not from outside but from within the architecture of systems themselves.
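To make "explain why" concrete, here is a minimal Python sketch of explainability by construction: a linear credit-scoring model whose decision decomposes into per-feature contributions. The features, weights, and threshold are invented for illustration; deep models require heavier tools, but the governance requirement is the same.

```python
import math

# Invented model parameters for a toy credit-scoring example.
WEIGHTS = {"income": 0.8, "debt_ratio": -1.5, "late_payments": -0.9}
BIAS = 0.2
THRESHOLD = 0.5

def score(applicant: dict) -> float:
    """Logistic score in [0, 1]; higher means more creditworthy."""
    z = BIAS + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)
    return 1 / (1 + math.exp(-z))

def explain(applicant: dict) -> list[tuple[str, float]]:
    """Per-feature contributions to the score, most negative first."""
    contribs = [(f, WEIGHTS[f] * applicant[f]) for f in WEIGHTS]
    return sorted(contribs, key=lambda kv: kv[1])

applicant = {"income": 0.3, "debt_ratio": 0.7, "late_payments": 2.0}
s = score(applicant)
decision = "approved" if s >= THRESHOLD else "rejected"
print(f"decision: {decision} (score={s:.2f})")
for feature, contribution in explain(applicant):
    print(f"  {feature}: {contribution:+.2f}")
```

Because the model is linear, each factor's weight times its value is the explanation: a rejected applicant can be told that, say, late payments contributed most to the decision. This is the kind of traceable reasoning the high-risk tier demands.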

The Future of Governance: Where Are We Heading?

Several plausible scenarios exist for the future of AI governance:
Fragmentation scenario: The world divides into several "zones": an American zone with its own rules, a Chinese zone, a European zone. Companies must build different versions of their models for each zone. This is the most probable near-term scenario.
Hegemony scenario: One power — likely America or Europe — successfully makes its standards the global standard, just as the dollar became the world's reserve currency.
Cooperation scenario: A major crisis — for example, a self-improving AI model that escapes control — forces the great powers to cooperate, similar to how the nuclear threat eventually led to non-proliferation treaties.
Reality will likely be a combination of all three — fragmentation in the short term, with gradual pressures toward some common standards.

Conclusion: A Power More Hidden Than Models

If there is one lesson to draw from this analysis, it is this: in the AI race, the country that writes the rules is more powerful than the country that builds the models.
Europe doesn't have the best AI models, but it is shaping how AI develops across the entire world. This invisible yet profound influence is what political scientists call "soft power."
For countries that have neither the most powerful models nor the largest markets, the only realistic path is: regional cooperation to create a common voice, investment in native standards, and active presence in international organizations.
The game is not yet over. Its rules are not yet fully written. And that — despite appearances — is actually an opportunity.