About Global AI Watch
Dr. Elena Vasquez

Lead Editor · AI Policy

Former advisor to the EU AI Office (2024–2025). Previously a Senior Policy Analyst at the European Centre for AI Governance in Brussels. Holds a PhD in Technology Law from KU Leuven and an MSc in Computer Science from TU Delft.

Brussels, Belgium · PhD, KU Leuven · At Global AI Watch since 2025

Areas of Expertise

Sovereign AI Strategy

National compute infrastructure, data localisation, public sector AI

EU AI Act Compliance

High-risk classification, GPAI obligations, conformity assessment

Geopolitics of AI

US–EU–China technology rivalry, export controls, standards wars

AI Governance

Corporate AI governance, board-level risk, regulatory reporting

Weekly Intelligence Analysis

Published every Monday
Week of 21 April 2026

The Sovereign Stack Fractures: Three Signals from Brussels, Beijing and Washington

This week's intelligence picture is dominated by a convergent theme: every major AI power is quietly reassembling the supply chain it doesn't yet own. The EU AI Office's draft guidance on GPAI systemic risk thresholds landed Tuesday — and buried in Annex C is a clause that would classify any model with over 10^26 FLOPs training compute as automatically 'systemic'. That catches GPT-5, Gemini Ultra and Llama 4 simultaneously, but not Mistral's latest open-weights release. Intentional asymmetry, or drafting artefact? My read: intentional.
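The automatic classification described above is a pure compute-threshold test. As a rough sketch of how such a rule operates (the 10^26 FLOP figure is taken from the article's reading of draft Annex C; the model compute figures below are invented placeholders, not disclosed values):

```python
# Draft Annex C threshold as described in the article (hypothetical guidance).
SYSTEMIC_THRESHOLD_FLOPS = 1e26

def is_automatically_systemic(training_flops: float) -> bool:
    """True if training compute alone triggers 'systemic' classification."""
    return training_flops >= SYSTEMIC_THRESHOLD_FLOPS

# Illustrative, invented compute figures for two unnamed model classes:
frontier_scale = 3e26    # hypothetical frontier run, above threshold
open_weights = 8e24      # hypothetical smaller open-weights run, below it

assert is_automatically_systemic(frontier_scale)
assert not is_automatically_systemic(open_weights)
```

The asymmetry the article flags follows directly from the rule's shape: any bright-line compute cutoff captures the largest closed frontier runs while leaving smaller open-weights releases untouched, whatever their capabilities.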

EU AI Act · Sovereign AI · GPAI · Geopolitics
Week of 14 April 2026

EuroHPC's Quiet Expansion and the Data Centre Sovereignty Paradox

EuroHPC secured €2.1 billion in supplementary commitments this week, but the real story is what those commitments don't cover: inference infrastructure. Europe has bet heavily on training-class supercomputers — LUMI, Leonardo, Jules Verne — while the actual workload shifting to AI in 2026 is overwhelmingly inference. A training cluster that sits idle between training runs is a geopolitical statement, not a business asset.
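The economics behind that last point can be made concrete with a toy amortisation calculation (all figures hypothetical, chosen only to show how idle time inflates cost per delivered FLOP; nothing here is sourced from EuroHPC disclosures):

```python
def effective_cost_per_flop(capex_eur: float,
                            lifetime_flops_at_full_use: float,
                            utilisation: float) -> float:
    """Amortised capex per FLOP actually delivered.

    Idle time shrinks the denominator, so cost per delivered FLOP
    rises in inverse proportion to utilisation.
    """
    return capex_eur / (lifetime_flops_at_full_use * utilisation)

# Hypothetical cluster: €500M capex, fixed lifetime compute budget.
CAPEX = 500e6
FULL_USE_FLOPS = 1e24

busy = effective_cost_per_flop(CAPEX, FULL_USE_FLOPS, utilisation=0.9)
idle = effective_cost_per_flop(CAPEX, FULL_USE_FLOPS, utilisation=0.3)

# A cluster busy a third of the time delivers each FLOP at roughly
# three times the cost of one kept at 90% utilisation.
ratio = idle / busy
```

A training machine that spikes to full load for a run and then sits idle lives at the low end of that utilisation range; steady inference demand is what keeps the denominator full.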

EuroHPC · Hardware · Sovereign AI · Infrastructure
Week of 7 April 2026

AI Liability Directive: Why the Commission's Retreat Matters More Than the Advance

The European Commission quietly withdrew the proposed AI Liability Directive from its legislative calendar this week — a decision that received far less coverage than it deserved. The official position is 'consolidation with the Product Liability Directive revision'. The practical effect is that AI companies operating in Europe face a two-year window with no meaningful civil liability framework beyond existing tort law.

AI Liability · EU Policy · Compliance · Enforcement

Biography

Dr. Elena Vasquez is the Lead Editor and AI Policy Analyst at Global AI Watch. Her work sits at the intersection of technology governance, geopolitics, and legal frameworks — with a focus on how states are building, regulating, and competing over AI capabilities.

Between 2024 and 2025, she advised the EU AI Office on GPAI systemic risk assessment methodology, contributing to the technical annexes of the first GPAI Code of Practice. Before joining the AI Office, she spent four years at the European Centre for AI Governance analysing member state AI strategies and benchmarking European compute infrastructure against US and Chinese investments.

Her PhD research at KU Leuven examined the legal classification of autonomous decision systems under EU administrative law, specifically the tension between algorithmic decisioning and the right to explanation under GDPR Article 22. Her dissertation was cited in the European Parliament's AI Act rapporteur report.

Elena writes the weekly Sovereign Intelligence Digest, published every Monday, which analyses the most strategically significant AI policy developments of the preceding week for executives and policymakers.