Who Owns the Cloud? Governing Compute Infrastructure in a Sovereignty-Obsessed World

Cloud platforms have quietly become a strategic resource on par with energy and finance. Yet governments are trying to tame this shared infrastructure with old tools: borders, ownership caps, and data localization rules. This long-form analysis explains why geography is a poor proxy for security, how AI workloads tighten the link between cloud policy and national power, and what a more realistic framework for trust in compute infrastructure would look like. Instead of carving the internet into splintered jurisdictions, states need to demand verifiable technical assurances and shared security baselines that match how cloud systems actually function.

The global cloud has turned into a kind of invisible utility. Hospitals rely on it for records and diagnostics. Logistics networks use it to move goods. Election agencies depend on it for voter rolls and communication. AI companies treat it as their lifeblood, a way to rent vast amounts of computing power without owning a single data center. In practice, most modern economies now rest on a tiered stack of compute, storage, and networking that sits in a handful of hyperscale clouds and a growing set of specialised AI providers.

Yet the political conversation still often treats the cloud as a simple outsourcing choice. The question is framed as whether a government or critical sector should be allowed to use a foreign provider at all, rather than how to evaluate and enforce trustworthy operation of a complex technical system. As security concerns grow, and as the race for AI capability tightens pressure on compute capacity, that gap between political instinct and technical reality is becoming more dangerous.

This essay looks at that gap from the vantage point of a think tank: not to rehearse generic slogans about sovereignty or innovation, but to spell out the structural tensions that are reshaping cloud governance. The core claim is straightforward. Location, ownership, and nationality are politically salient, and sometimes relevant, but they are weak and often misleading indicators of security. At the same time, ignoring the political pull of sovereignty is unrealistic. The task is not to return to a borderless ideal, but to design trust frameworks that give governments real levers over risk without tearing apart the compute fabric that AI and critical services now require.

To do that, we first need a clear view of how cloud infrastructure actually works, how AI workloads are changing its economics and topology, and why current sovereignty policies fall short of their own security goals.

The cloud as infrastructure, not a convenience

The term cloud still encourages an illusion of weightlessness, as if data and workloads float somewhere abstract. In practice, cloud services are dense industrial systems. Providers operate networks of data centers, cable landings, content delivery networks, and edge locations, all stitched together through carefully engineered software. They manage hardware fleets, design silicon, and maintain distributed operating systems that orchestrate millions of tasks per second.

The economic model is simple in outline. Instead of buying servers and building facilities, customers rent computing power, storage, and specialised functions. They define their workloads. The provider decides how to slice those workloads into tasks, where to run them, how to route traffic, how to replicate data, and how to recover from failures. That division of labor lets customers focus on their core activity and pushes the provider to optimise the underlying machinery for scale and resilience.

At the scale of a hyperscaler, this means permanent crisis management. Hardware fails constantly. Network links drop. Regional power issues and natural disasters happen. What looks like smooth continuity from the outside is the product of aggressive redundancy and highly automated recovery. The same is true for security. Attackers scan exposed systems continually, probe for misconfigurations, and weaponise new vulnerabilities in days rather than weeks. Cloud operators spend a large share of their effort simply staying ahead of this background noise.

This is where trust enters the picture. Customers have limited visibility into the internal workings of a hyperscale platform. They can review documentation, examine configuration options, and read audit reports, but at some point they must decide whether a provider’s controls, culture, and technical guarantees are enough for them to place sensitive workloads in that environment. Governments face the same decision, but with the added complication that their choice sets de facto standards for others and can shift markets.

If trust is treated as a simple yes or no question, anchored in nationality or geography, the answer will rarely line up with actual risk. A domestic provider can be insecure, opaque, or brittle. A foreign one can invest heavily in technical safeguards. The reality is more granular. Trust in cloud infrastructure is built from hundreds of low level design decisions, from identity management and key handling to incident reporting and hardware verification. Policy that ignores that granularity risks producing symbolic sovereignty without meaningful security.

AI as an accelerant of the structural tensions

Artificial intelligence development has amplified both the strategic value of cloud platforms and the pressure on their design. Training large models requires concentrated clusters of accelerators, high bandwidth interconnects, and finely tuned scheduling software. Running those models in production adds another layer of orchestration between user-facing services, data stores, and inference endpoints. Very few organisations can build that stack on their own.

The result is an even tighter coupling between AI companies and cloud providers. Hyperscalers use their balance sheets and infrastructure to secure priority access to advanced chips. In return, they bind AI companies into long term agreements that route most of their workloads through a single platform. At the same time, specialised AI clouds have emerged, oriented around compute-intensive workloads rather than general enterprise applications. Many of these niche providers rely on financing and infrastructure partnerships with the same hyperscalers they ostensibly compete with.

This network of relationships matters for governance. From a national security perspective, the AI supply chain does not stop at chips, export controls, and model weights. It includes the orchestration layers that schedule jobs, the identity systems that govern access, the backup and logging infrastructure that records what models did, and the network paths that connect training clusters to user traffic. A narrow focus on hardware flows and model exfiltration misses a large part of the attack surface.

At the same time, AI intensifies demand for compute so strongly that scarcity has become a structural feature of the market. When capacity is tight, customers are more likely to cut corners. If access to a given region or provider is constrained for regulatory reasons, high value actors will look for loopholes, intermediaries, or shadow arrangements to secure the resources they need. We already see this behavior in energy markets and arms control; there is no reason to expect AI compute to be different.

For policymakers, this means that poorly calibrated sovereignty rules can produce perverse effects. If national frameworks make approved configurations too costly, rigid, or slow, well funded AI actors may be tempted to use less regulated paths, reshuffling risk rather than reducing it. A realistic governance approach has to account for this demand pressure and the intertwined nature of AI and cloud markets.

The lure and limits of geography as a proxy for trust

Faced with this complexity, states reach for tools they know: data localization laws, restrictions on foreign ownership, sector specific rules for public procurement. These measures typically combine three levers.

First, geography. Rules may insist that certain data sets remain on infrastructure physically located in the country or region. In practice, this means restricting which cloud regions can be used, or requiring that data be stored and processed within a specific jurisdiction. Some states have gone further, mandating local offices and legal presence for technology companies in order to increase leverage over their operations.

Second, scope. Governments often start with public sector workloads or critical infrastructure operators. They may prohibit these entities from using foreign providers, tie procurement to national certifications, or require sector specific security audits. Because these sectors spend heavily on cloud services, their requirements shape the broader market.

Third, criteria for trust. Sovereignty regimes rarely limit themselves to location alone. They can dictate who may own a cloud provider, where support staff can sit, how encryption keys must be handled, and which standards are acceptable. Some regimes demand that encryption keys stay under the control of local entities, or that country specific algorithms be used in certain services.

Each of these levers responds to genuine concerns. States want legal jurisdiction over sensitive data, predictable access routes for law enforcement, and assurance that foreign intelligence agencies cannot quietly pull at threads in their critical systems. They want local economic activity and tax bases tied to digital infrastructure. They worry about concentration of market power in a small number of foreign firms. These are legitimate strategic anxieties.

The problem lies in treating location and ownership as reliable stand-ins for security. A workload constrained to a national region can still be vulnerable through software flaws, misconfigurations, weak identity systems, or compromised supply chains. Segmenting infrastructure by geography can make disaster recovery harder, limit capacity to reroute traffic in crises, and complicate global incident response. Restrictions on where support staff may sit can reduce access to specialised expertise and encourage informal workarounds that introduce fresh weaknesses.

There are also subtler systemic costs. The cloud works well when providers can distribute workloads across multiple availability zones and regions, and when they can draw on worldwide telemetry to detect and correlate attack patterns. If every jurisdiction demands its own quasi-autarkic configuration, the provider must duplicate infrastructure, compliance teams, and security processes. That makes services more expensive and diverts investment away from improvements that could raise security for all customers.

From a security point of view, what matters most is not where a server is, but whether there are effective controls on who can reach it, how code is deployed to it, how data is encrypted, what kinds of audit trails exist, and how quickly a compromise is detected and contained. Location can be part of that picture, particularly in situations involving physical threats or legal process, but it is rarely decisive on its own.

Alternative models of trust: continuous verification instead of static labels

Technical communities have been trying to move away from static, entity based notions of trust for some time. Zero trust architectures represent one prominent strand of this thinking. Instead of assuming that devices and users inside a perimeter are benign, zero trust models require authentication, authorisation, and context checks for each access attempt, regardless of network position. The logic is simple: compromise is inevitable, so systems should limit the blast radius of any intrusion and make lateral movement difficult.

In cloud environments, this translates into fine grained identity, strict segmentation of workloads, and constant validation of privilege. It privileges instrumentation and feedback loops over one time certification. In principle, a state could insist that any provider wishing to serve certain sectors demonstrate adherence to concrete zero trust practices, with independent verification of how identity and access management are actually implemented.
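
To make the contrast with perimeter thinking concrete, the following minimal sketch shows a per-request authorisation check in the zero trust style. It is illustrative only: the POLICY table, the ROLES mapping, and the AccessRequest fields are all invented, and a production system would delegate these decisions to a dedicated policy engine and identity provider. The point is the shape of the logic, not the implementation.

```python
from dataclasses import dataclass


@dataclass
class AccessRequest:
    user_id: str
    mfa_verified: bool          # strong authentication completed for this session
    device_compliant: bool      # device posture attested (patched, managed)
    source_segment: str         # network segment the request arrives from
    resource: str
    action: str


# Hypothetical policy table: which roles may perform which actions on which
# resources. In a real deployment this would live in a policy engine.
POLICY = {
    ("analyst", "payroll-db", "read"): True,
    ("admin", "payroll-db", "write"): True,
}

ROLES = {"alice": "analyst", "bob": "admin"}


def authorize(req: AccessRequest) -> bool:
    """Evaluate every request on its own merits, regardless of network position.

    Note what is deliberately absent: there is no rule of the form
    'if the request comes from the corporate LAN, allow it'.
    Being inside the perimeter grants nothing.
    """
    if not req.mfa_verified:
        return False              # authentication is per-request, not per-perimeter
    if not req.device_compliant:
        return False              # context check: device posture must be attested
    role = ROLES.get(req.user_id)
    if role is None:
        return False
    return POLICY.get((role, req.resource, req.action), False)  # default deny


# A request from inside the corporate network still fails without MFA:
inside = AccessRequest("alice", mfa_verified=False, device_compliant=True,
                       source_segment="corporate-lan",
                       resource="payroll-db", action="read")
assert authorize(inside) is False

# A fully verified request from an external segment succeeds:
outside = AccessRequest("alice", mfa_verified=True, device_compliant=True,
                        source_segment="public-internet",
                        resource="payroll-db", action="read")
assert authorize(outside) is True
```

The default-deny fallthrough is the essential design choice: any access not explicitly justified by identity, device state, and policy is refused, which is what limits the blast radius once an intrusion inevitably occurs.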

Another strand involves hardware based and cryptographic assurances. Trusted execution environments and confidential computing aim to ensure that data and code remain protected even from the operator of the underlying infrastructure. Remote attestation lets a customer verify that their workload is running on expected hardware and software, with predefined security properties, before sending sensitive material.

These approaches do not eliminate the need for organisational trust. They still depend on correct implementation, supply chain integrity, and the honest publication of attestation schemes. But they shift some of the burden from institutional attributes, such as nationality or ownership, toward observable technical properties. A state or regulator can specify which forms of attestation it recognises, which hardware generations are acceptable, and which supply chain practices are required.
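
A stylised sketch can show how such recognition criteria might be expressed in practice. The code below is illustrative only: the report fields, policy structure, and values are invented, and real attestation verification involves vendor specific report formats and cryptographic certificate chains that are elided here. What it captures is the shift from "who owns the provider" to "what properties did the environment prove before receiving secrets".

```python
from dataclasses import dataclass, field


@dataclass
class AttestationReport:
    """Simplified stand-in for the evidence a confidential computing
    platform returns via remote attestation."""
    hardware_generation: str   # which TEE hardware revision produced the report
    code_measurement: str      # hash of the code and configuration loaded in the enclave
    debug_enabled: bool        # enclaves opened for debugging can leak plaintext
    signer_trusted: bool       # whether the report signature chained to a trusted root
                               # (real verifiers check the certificate chain cryptographically)


@dataclass
class TrustPolicy:
    """What a customer or regulator declares acceptable in advance."""
    allowed_generations: set[str] = field(default_factory=set)
    expected_measurements: set[str] = field(default_factory=set)


def workload_may_receive_secrets(report: AttestationReport,
                                 policy: TrustPolicy) -> bool:
    """Release sensitive data only to environments that prove, rather than
    merely assert, the expected security properties."""
    if not report.signer_trusted:
        return False
    if report.debug_enabled:
        return False
    if report.hardware_generation not in policy.allowed_generations:
        return False           # e.g. a regulator ruling out superseded hardware
    return report.code_measurement in policy.expected_measurements


policy = TrustPolicy(
    allowed_generations={"gen3", "gen4"},
    expected_measurements={"sha256:ab12cd34"},  # hash of the audited workload image
)

report = AttestationReport(
    hardware_generation="gen4",
    code_measurement="sha256:ab12cd34",
    debug_enabled=False,
    signer_trusted=True,
)

assert workload_may_receive_secrets(report, policy)
```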

So far, these methods remain unevenly deployed. Trusted execution environments and confidential computing are gaining ground, but have not yet become the default for all high value workloads. Attestation frameworks are still maturing, and regulators are only beginning to incorporate them into policy debates. Yet they offer a path toward a more nuanced notion of trust in compute infrastructure, one that aligns better with the way cloud systems operate.

Where current sovereignty regimes collide with security and AI

When sovereignty driven requirements ignore this technical landscape, they can inadvertently damage the security posture of the very systems they seek to protect. Consider a few recurring patterns.

First, fragmentation of threat visibility. A global platform can correlate signals from many regions. If a new exploit appears in one cluster of customers, it can design and deploy mitigations quickly for others. If regional silos must operate under rigid legal firewalls that severely limit cross border sharing of operational data, that pattern recognition becomes much harder. Each silo has to reinvent the wheel, and attackers can find new blind spots.

Second, brittle disaster recovery. Some of the most effective resilience strategies depend on the ability to fail over to other regions or cloud providers when an entire region goes offline due to natural disaster, infrastructure failure, or conflict. Ukraine’s shift of government data to external infrastructure before the full escalation of war, and Estonia’s data embassy in Luxembourg, are often cited examples. Hard bans on cross border backups or rigid legal interpretations that discourage such patterns can weaken continuity of government and critical services in crises.

Third, complexity of key management. Sovereignty debates frequently focus on control over encryption keys. Externalising key management to a local entity can satisfy privacy and control concerns, but it also creates new attack surfaces. Identity and key management platforms have become prime targets for advanced threat actors, because compromising them often grants wide access. Forcing customers to rely on a patchwork of local providers, each with different operational maturity, may increase systemic risk. The question should not be simply where keys reside, but whether they are managed under strong technical and organisational safeguards, and how failures are detected and remediated.

Fourth, market segmentation that leaves public sectors behind. When approved regions and configurations lag behind the state of the art by several years, public agencies and critical infrastructure operators may be confined to older services with weaker security properties. They may also face higher prices due to bespoke compliance overhead. Private firms, including some operating in sensitive sectors, may quietly move to more advanced configurations. The result is a two speed cloud market where some of the most important public missions operate on second tier infrastructure.

All of these frictions are magnified by AI. Training regimes that assume global scale and flexible placement of compute resources are hard to retrofit into strictly localised frameworks. Frontier model developers need to move massive datasets and workloads across clusters to optimise utilisation. Incident response for AI ecosystems depends not just on patching generic vulnerabilities, but on monitoring misuse, model theft, and unexpected behaviors across many deployment contexts. Fragmented governance frameworks make such coordination more difficult.

None of this means that states should surrender scrutiny to providers. It means that sovereignty strategies which focus almost exclusively on geography and nationality will underperform on their own terms. For both security and autonomy, the more promising path is to demand deeper transparency and technical assurances, then use procurement, certification, and regulation to push providers towards verifiable good practice.

What a more realistic trust framework could look like

A sensible framework for evaluating trust in compute infrastructure would have several features.

It would treat cloud services as public interest infrastructure, not simply outsourced IT. That implies a long term view, an expectation of transparency, and an understanding that failures can have systemic impact. Regulators would develop specialised technical expertise rather than relying solely on legal levers or high level risk labels.

It would define trust in terms of concrete, testable properties. That could include evidence of zero trust oriented designs, audited key management practices, clear separation of duties between operational teams, rigorous supply chain controls, and the use of hardware based attestation for high value workloads. The focus would shift from who owns the provider to how the provider actually runs its platform.

It would rely on continuous verification rather than one time certification. Providers would be expected to share security indicators and incident information on an ongoing basis, in anonymised and aggregated forms where necessary. Governments would use this information not only to secure their own workloads but also to monitor ecosystem health.

It would harmonise trust regimes across allied jurisdictions as far as possible. If like minded states each invent their own idiosyncratic requirements, global providers will either struggle to comply, retreat from smaller markets, or cut corners. Coordination can prevent a race to mutually incompatible sovereignty architectures and free up resources to invest in stronger shared safeguards.

It would be honest about tradeoffs. Total control over infrastructure location, staffing, and encryption is not compatible with the economies of scale and resilience that make cloud attractive in the first place. Some degree of cross border operation is the price of high availability and rapid security response. Likewise, demanding cutting edge confidentiality features may limit the set of providers who can compete in a market, and therefore affect pricing and innovation. Policymakers need to make these tradeoffs explicit rather than treating sovereignty as a free good.

In such a framework, geography still matters, but as one variable among many. States can reasonably require that certain workloads stay within specified regions, that lawful access mechanisms be transparent, and that domestic legal remedies exist. They can support local providers where doing so enhances resilience and competition. But they would not pretend that a national label alone protects citizens from exploits that propagate through widely used software components, or from breaches that start in mismanaged identity systems.

Implications for the United States and its partners

For the United States, which hosts the dominant hyperscale providers and operates an extensive intelligence apparatus, the legitimacy of its own legal and oversight frameworks is central to global trust in cloud services. Laws like the CLOUD Act have been interpreted abroad as granting sweeping new powers over foreign data, even though much of their content clarifies existing obligations and creates structured avenues for cross border law enforcement cooperation. That gap between legal detail and political perception matters.

If Washington wants its firms to remain central to global compute infrastructure, it will have to keep demonstrating that access to data stored in their systems is constrained, reviewable, and subject to meaningful safeguards. That means transparent reporting on national security requests, active oversight bodies, and a willingness to adjust practices in response to credible concerns from partners. It also means resisting the temptation to quietly expand offensive uses of commercial cloud infrastructure in ways that would confirm the worst suspicions of foreign regulators.

At the same time, the United States and its partners have an interest in accelerating the deployment of confidential computing and attestation technologies. If sensitive workloads can run in environments where providers themselves cannot see plaintext data, and where customers can verify the code and configuration on remote machines, some of the sharpest sovereignty concerns can be reduced. That will not entirely remove political objections, but it can change the structure of the debate.

The risk is that by the time these mechanisms mature and become widespread, many jurisdictions will already have locked in fragmentary sovereignty regimes that are difficult to reverse. Incentives to build local cloud islands, often in partnership with domestic champions, will have created new constituencies for a splintered infrastructure. That is one reason why current debates over cloud trust and AI governance are more urgent than they might appear. The technical trajectory of compute infrastructure is long, but legal and political frameworks can harden quickly.

Conclusion: trust as a technical and political craft

Trust in compute infrastructure is often discussed as if it were a feeling, a diffuse sense that certain providers or countries are safe and others are not. In practice, it is a craft. It is assembled from engineering choices, organisational culture, oversight mechanisms, and the incentives created by regulation and procurement. It can be strengthened, eroded, or misplaced.

There is nothing wrong with governments worrying about the systemic power of foreign hyperscalers, or about the intersection of commercial platforms with national intelligence activity. The mistake lies in assuming that geography or formal ownership can stand in for the deeper work of governing a shared technical substrate that no single actor fully controls.

AI development has exposed how dependent advanced economies have become on that substrate. The models that attract political attention do not run in a vacuum. They live inside data centers, orchestration layers, and identity fabrics whose stability matters for far more than chatbots and recommendation systems. When a major cloud region fails, hospitals reschedule surgeries, payment systems falter, and public services stall.

Good cloud governance will not eliminate risk, but it can reshape whose interests are reflected in the tradeoffs that providers make. If states insist on checklists of location and nationality, they will get compliance on those points and little visibility into anything else. If they invest in capacity to interrogate and verify technical assurances, and if they work with one another to align those expectations, they stand a better chance of influencing the safety and resilience of the global compute backbone.

The choice is not between sovereignty and the cloud. It is between a narrow, symbolic vision of control that fragments infrastructure without truly securing it, and a more demanding approach that treats compute as a shared strategic resource, subject to scrutiny at the level where risk actually lives: in code paths, access controls, key management, and incident response. In a world where AI systems depend so heavily on cloud infrastructure, that distinction will shape not only the security of digital systems, but the balance of power in the emerging computational order.
