Is the AI bubble about to burst? Five causes and five scenarios
https://www.diplomacy.edu/blog/is-the-ai-bubble-about-to-burst-five-causes-and-five-scenarios/
Mon, 01 Dec 2025 22:03:07 +0000

State of AI in November by the author: Geopolitics, Diplomacy, and Governance

Will the AI bubble burst? Is AI now ‘too big to fail’? Will the U.S. government bail out AI giants – and what would that mean for the global economy?

These questions are now everywhere in the media, boardrooms, and policy circles. Corporate AI investment hit around $252 billion in 2024 – more than 13× higher than a decade ago – while the global AI market is projected to jump from about $189 billion in 2023 to nearly $4.8 trillion by 2033. 

[Bar chart: growth in corporate AI investment]

Source: Stanford HAI

At the same time, many companies struggle to turn pilots into profit. The gap is widening between, on the one hand, AI hype, trillion-dollar valuations, and massive infrastructural spending, and on the other hand, the slower business and societal reality of AI adoption.

This post examines five primary causes of a potential AI bubble and explores five scenarios for how the world might prevent – or cope with – a potential burst. It then sketches the likely role of five key players.


Five causes of the AI bubble

The frenzy of AI investment did not happen in a vacuum. Several forces have contributed to our tendency toward overvaluation and unrealistic expectations.

1st cause: The media hype machine

AI has been framed as the inevitable future of humanity – a story told in equal parts fear and awe. This narrative has created a powerful Fear of Missing Out (FOMO), prompting companies and governments to invest heavily in AI, often without a sober reality check. The result:

  • Corporate AI investment has reached record levels, with more than $250 billion invested in 2024 alone.
  • Big Tech is raising close to $100 billion in new debt for AI and cloud infrastructure, with data-centre spending expected to reach roughly $400 billion in a single year.
  • AI chipmaker Nvidia briefly hit a $5 trillion market capitalisation in 2025 – more than 12× its value at the launch of ChatGPT – and still dominates the AI hardware landscape.

Market capitalisation of AI-exposed companies is now measured in multi-trillion-dollar chunks. Nvidia alone has been valued in the $4–5 trillion range – comparable to the combined annual GDP of entire world regions. Hype has often run ahead of business logic: investments are driven less by clear use cases and more by the fear of being left behind.

2nd cause: Diminishing returns on computing power and data 

The dominant, simple formula of the past few years has been:

More compute (read: more Nvidia GPUs) + more data = better AI.

This belief has led to massive ‘AI factories’: hyper-scale data centres with an alarming electricity and water footprint. Yet we are already hitting diminishing returns: the exponential gains of early deep learning have flattened.

Simply stacking more GPUs now yields incremental improvements, while costs rise super-linearly and energy systems strain to keep up. The AI paradigm is quietly flipping: practical applications, domain knowledge, and human-in-the-loop workflows are becoming more important than raw compute at the base.
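One common way to make this flattening concrete – not a formula from the original post, but the empirical scaling-law form reported in the deep-learning literature (for instance, Hoffmann et al.’s ‘Chinchilla’ analysis) – is a power-law loss curve in model size and data:

```latex
% Empirical scaling law (Chinchilla-style fit): test loss falls as a power law
% in parameters N and training tokens D, towards an irreducible floor E.
L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}},
\qquad \alpha \approx 0.34,\ \beta \approx 0.28
```

Because the exponents are well below 1, doubling the parameter count or the token budget shaves only roughly a fifth off the corresponding loss term, and the floor E cannot be scaled away at all – which is precisely the pattern of incremental gains at super-linearly rising cost described above.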

3rd cause: LLMs’ logical and conceptual limits

Large language models (LLMs) are encountering structural limitations that cannot be resolved simply by scaling data and compute. Despite the dominant narrative of imminent superintelligence, many leading researchers are sceptical that today’s LLMs can simply be ‘grown’ into human-level Artificial General Intelligence (AGI). Meta’s former chief AI scientist, Yann LeCun, put it this way:

On the highway toward human-level AI, a large language model is basically an off-ramp — a distraction, a dead end.

LLMs are excellent pattern recognisers, but they still:

  • hallucinate facts;
  • struggle with robust long-horizon planning;
  • lack a grounded understanding of the physical world.

Future neural architectures will certainly improve reasoning, but there is currently no credible path to human-like general intelligence that consists simply of ‘more of the same LLM, but bigger.’

4th cause: Slow AI transformation

Most AI investments are still based on potential, not on realised, measurable value. The technology is advancing faster than society’s ability to absorb it. Organisations need time to:

  • redesign workflows and business processes;
  • change procurement and risk models;
  • retrain workers and managers;
  • adapt regulation, liability, and governance frameworks.

Emerging evidence is sobering:

  • BCG finds that around 74% of companies struggle to achieve and scale value from AI initiatives. 
  • An MIT-linked study reported that roughly 95% of generative AI pilots fail to generate a meaningful business impact beyond experimentation.

The time lag between technological capability and institutional change is a core risk factor for an AI bubble. There is no shortcut: social and organisational transformation progresses on multi-year timescales, regardless of how fast GPUs are shipped.

History reminds us what happens when hype outruns reality. Previous AI winters in the 1970s and late 1980s followed periods of over-promising and under-delivering, leading to sharp cuts in funding and industrial collapse.

5th cause: Massive cost discrepancies

The latest wave of open-source models has exposed a startling cost gap. Chinese developer DeepSeek reports that:

  • DeepSeek-V3 cost around $5.5 million to train on 14.8 trillion tokens. 
  • Its more recent reasoning-focused R1 model reportedly cost $294,000 to train – yet rivals top Western systems on many reasoning benchmarks.

By contrast:

  • Analysts estimate that training frontier models like GPT-4.5 or GPT-5-class systems can cost on the order of $500 million per training run.

This roughly 100-to-1 cost ratio raises brutal questions about the efficiency and necessity of current proprietary AI spending. If open-source models trained for a few million dollars can match or beat models costing hundreds of millions, what exactly are investors paying for?

It is in this context that low-cost, open-weight models, especially from China, are reshaping the competitive landscape and challenging the case for permanent mega-spending on closed systems.


Five possible scenarios: what happens next?

AI is unlikely to deliver on its grandest promises – at least not on the timelines and business models currently advertised. More plausibly, it will continue to make marginal but cumulative improvements, while expectations and valuations adjust. Here are five plausible scenarios for resolving the gap between hype and reality. In practice, the future will be some mix of these.

1st Scenario: The rational pivot (the textbook solution)

The classic economics textbook response would be to revisit the false premise that “more compute automatically means better AI” and instead focus on:

  • smarter architectures;
  • closer integration with human knowledge and institutions;
  • smaller, specialised, and open models.

In this scenario, AI development pivots toward systems that are:

  • grounded in curated knowledge bases and domain ontologies;
  • tightly integrated with organisational workflows (e.g. legal, healthcare, diplomacy);
  • built as open-source or open-weight models to enable scrutiny and reuse.

This shift is already visible in policy. The U.S. government’s recent AI Action Plan explicitly encourages open-source and open-weight AI as a strategic asset, framing ‘leading open models founded on American values’ as a geostrategic priority. 

However, a rational pivot comes with headwinds:

  • entrenched business models built on closed IP;
  • corporate dependence on proprietary data and trade secrets;
  • conflicts over how to reward creators of human-generated knowledge.

A serious move toward open, knowledge-centric AI would immediately raise profound questions about intellectual property law, data sharing, and the ownership of the ‘raw material’ used to train these systems.

2nd Scenario: “Too big to fail” (the 2008 bailout playbook)

A different path is to treat AI not just as a sector but as critical economic infrastructure. The narrative is already forming:

  • Alphabet CEO Sundar Pichai has warned that there are clear ‘elements of irrationality’ in current AI investment, and that no company – including Google – would be immune if an AI bubble burst.
  • Nvidia CEO Jensen Huang has joked internally that if Nvidia had delivered a visibly weak quarter, ‘the whole world would’ve fallen apart,’ capturing the sense that markets have come to see Nvidia as holding the system together.

If AI is framed as a pillar of national competitiveness and financial stability, then AI giants become ‘too big to fail’. In other words:

If things go wrong, taxpayers should pick up the bill.

In this scenario, large AI companies would receive explicit or implicit backstops – such as cheap credit, regulatory forbearance, or public–private infrastructure deals – justified as necessary to avoid broader economic disruption.

3rd Scenario: Geopolitical justification (China ante portas)

Geopolitics can easily become the master narrative that justifies almost any AI expenditure. Competition with China is already used to argue for:

  • massive public funding of AI infrastructure;
  • strategic partnerships between government, cloud providers, and foundation-model companies;
  • preferential treatment for ‘national champions’.

The U.S. now frames open models and open-weight AI as tools of geopolitical influence, explicitly linking them to American values and global standards. 

At the same time, China is demonstrating that low-cost, open-source LLMs, such as DeepSeek R1, can rival Western frontier models, sparking talk of a ‘Sputnik moment’ for AI.

In this framing, a bailout of AI giants is rebranded as:

an investment in national security and technological sovereignty.

Risk is shifted from private investors to the public, justified not by economic considerations but by geopolitical factors.

4th Scenario: AI monopolisation (the Wall Street gambit)

As smaller players run out of funding or fail to monetise, AI capacities could be aggressively consolidated into a handful of tech giants. This would mirror earlier waves of monopolisation in:

  • social media (Meta);
  • search and online ads (Google);
  • operating systems and productivity (Microsoft);
  • e-commerce and cloud (Amazon).

Meanwhile, Nvidia already controls roughly 80–90% of the AI data-centre GPU market and over 90% of the training accelerator segment, making it the de facto hardware monopoly underpinning the entire stack.

In this scenario:

  • Thousands of AI start-ups and incubators shrink to a small layer of niche providers.
  • A few proprietary platforms dominate: Google, Microsoft, Meta, Amazon, Apple, plus Nvidia at the hardware layer.
  • These firms add an AI layer to existing products – Office, Windows, Android, iOS, search, social networks, retail – making it easier for businesses and users to adopt AI inside closed ecosystems.

The main risk is a new wave of digital monopolisation. Power would shift even more decisively from control over data to control over knowledge and models that sit atop that data.

Open-source AI is the main counterforce. Low-cost, bottom-up development makes complete consolidation difficult, but not impossible: large firms can still dominate by owning the distribution channels, cloud platforms, and hardware.

5th Scenario: AI winter and new digital toys 

The digital society appears to require a permanent ‘frontier technology’ to focus attention and capital. Gartner’s hype-cycle metaphor captures this: technologies surge from a ‘peak of inflated expectations’ to a ‘trough of disillusionment’ before stabilising – often over 5–10 years.

We have seen this before:

  • Expert systems and LISP machines in the 1980s;
  • The “AI winters” of the 1970s and 1990s;
  • More recently, blockchain, ICOs, NFTs, and the Metaverse.

AI has already lasted longer at the top of the hype cycle than many of those digital “toys”. In this scenario, we would see:

  • a mild AI winter: slower investment, consolidation, less media attention;
  • capital rotating toward the next frontier – likely quantum computing, digital twins, or immersive virtual and mixed reality;
  • AI remains an important infrastructure, but no longer the primary object of speculative enthusiasm.

The global frontier-tech market will remain enormous, but AI will share the spotlight with new innovations and narratives.


Main actors: Strengths and weaknesses

OpenAI

Strengths: ChatGPT is still the most recognisable AI brand globally. OpenAI claims hundreds of millions of weekly active users; external estimates put that figure in the 700–800 million range in 2025, with daily prompt volumes in the billions.

OpenAI also enjoys:

  • tight integration with Microsoft’s ecosystem (Azure, Windows, Office);
  • a strong developer platform (API, fine-tuning, marketplace);
  • a first-mover advantage in consumer awareness.

Weaknesses: OpenAI is highly associated with the most aggressive AI hype – including its CEO’s focus on AGI and existential risks. It is structurally dependent on:

  • expensive frontier-model training and inference;
  • Nvidia’s hardware roadmap;
  • continued willingness of investors and partners to fund the infrastructure build-out.

OpenAI’s revenue remains overwhelmingly tied to AI services (ChatGPT subscriptions, API usage). According to HSBC estimates, OpenAI is unlikely to become profitable by 2030 and still requires an additional $207 billion to fund its growth plans.

In a bubble-burst scenario, OpenAI is a prime candidate to be restructured rather than destroyed:

  • It could survive by leaning heavily into a geopolitical narrative, positioning itself as a national-security asset and securing U.S. government and defence-related funding.
  • More likely, it would be pulled fully into the Microsoft family, becoming a deeply integrated (and more tightly controlled) division rather than a semi-independent partner.

Google (Alphabet)

Strengths: Alphabet has the most vertically integrated AI lifecycle:

  • in-house chips (TPUs), which are increasingly sold externally;
  • frontier models (Gemini series, other foundation models);
  • a global distribution network (Search, YouTube, Android, Chrome, Gmail, Maps, Workspace, Cloud).

Its market capitalisation is racing toward $4 trillion on the back of AI optimism, with Gemini 3 seen as a credible rival to OpenAI’s and Anthropic’s top models. (Source: Reuters)

Weaknesses:

  • The company’s culture of caution and ongoing regulatory scrutiny can slow product launches.
  • AI that truly answers questions may cannibalise the very search-ad business that still funds much of Alphabet.
  • Heavy reliance on cloud and advertising leaves it exposed to any macroeconomic slowdown.

Yet among the big players, Google may be best positioned to weather a bubble burst because AI is layered across an already profitable, diversified portfolio rather than being the core business itself.

Meta

Strengths: Meta has pursued an aggressive open-source strategy with its Llama family of models. Llama has become the default base for thousands of open-source projects, start-ups, and enterprise deployments. Meta also controls:

  • three of the world’s biggest social platforms (Facebook, Instagram, WhatsApp);
  • a sophisticated advertising and recommendation stack;
  • Reality Labs and VR/AR efforts that could integrate deeply with generative models.

This mix enables Meta to ship AI features at scale – from AI assistants in messaging apps to generative tools in Instagram – while utilising open weights to shape the ecosystem in its favour.

Weaknesses:

  • Business remains overwhelmingly ad-driven, making it vulnerable to macroeconomic shocks and regulatory limitations on tracking and targeting.
  • Its Metaverse investments remain controversial and capital-intensive.
  • Regulators in the EU and elsewhere closely monitor Meta for compliance with privacy, competition, and content governance standards.

Meta is less dependent on selling AI as a product and more focused on using AI to deepen engagement and ad performance, which may make it more resilient in a correction.

Microsoft

Strengths: Microsoft made the earliest and boldest bet on OpenAI, embedding its models across Windows, Office (via Copilot), GitHub (Copilot for developers), and Azure cloud services. This gives Microsoft:

  • unmatched enterprise distribution for AI tools;
  • leverage to push AI into government, defence, and regulated industries;
  • a leading position in AI-heavy cloud workloads.

Together with other giants, Microsoft is part of a club expected to invest hundreds of billions of dollars in data centres and AI infrastructure over the next few years.

Weaknesses:

  • Heavy dependence on OpenAI’s technical roadmap and on Nvidia’s chips.
  • Enormous capital expenditure that may be difficult to justify if AI monetisation stalls.
  • Growing antitrust risks as AI is woven into already dominant products.

In a mild bubble burst, Microsoft is more likely to reshuffle its partnerships than to retreat from AI: OpenAI could be integrated more closely, while Microsoft simultaneously accelerates its own in-house models.

Nvidia

Strengths: Nvidia has become the picks-and-shovels provider of the AI gold rush:

  • controlling around 80–90% of the AI chip and data-centre GPU market;
  • reaching a $5 trillion valuation at its peak in 2025;
  • enjoying gross margins above 70% on some AI products.

Its CUDA software ecosystem and networking stack form a moat that competitors (AMD, Intel, Google TPUs, Amazon chips, Huawei and other Chinese challengers) are still struggling to cross.

Weaknesses:

  • Extreme dependence on one story: ‘AI keeps scaling up.’
  • Heavy exposure to export controls and U.S.–China tensions, with Chinese firms both stockpiling Nvidia chips and racing to build alternatives.
  • Customer concentration risk: A handful of hyperscalers and model labs account for a significant share of demand.

In a scenario where the emphasis shifts from brute-force computing power to smarter algorithms and better data, Nvidia would still be central – but its growth and margins could come under serious pressure.


The critical battle: Open vs Proprietary AI

The central tension in AI is no longer purely technical. It is philosophical:

Centralised, closed platforms vs. decentralised, open ecosystems.

On one side:

  • proprietary frontier models;
  • tightly controlled APIs;
  • vertically integrated cloud + chip + model stacks.

On the other:

  • open-source or open-weight models (Llama, DeepSeek, Mistral, etc.);
  • community-driven tooling and evaluation;
  • local and specialised deployments outside big-cloud walled gardens.

Historically, open systems often win in the long run – think of the internet, HTML, and Linux. They become standards, attract ecosystems, and exert pressure on closed incumbents.

Two developments are especially telling:

  • China’s strategic embrace of open-source AI: Chinese labs, such as DeepSeek, utilise open weights and aggressive cost reduction to challenge U.S. dominance, turning open models into a geopolitical play.
  • The U.S. pivot toward open models in official policy: America’s AI Action Plan explicitly calls for ‘leading open models founded on American values’ and strongly encourages open-source and open-weight AI as a way to maintain technological leadership and set global standards.

As tech giants remain slow and reluctant to fully embrace open-source, a future crisis could give governments leverage:

Any bailout of AI giants could be conditioned on a mandatory shift toward open-weight models, open interfaces, and shared evaluation infrastructure.

Such a deal would not just rescue today’s players; it would amount to a strategic reset, nudging AI back toward the collaborative ethos that powered the early internet and many of the great US innovations.


What is going to happen? A crystal-ball exercise

Putting it all together, a plausible outlook is:

  • No dramatic global crash, because the systemic risk is now too large. AI is tightly woven into stock indices, sovereign wealth portfolios, and national strategies.
  • A series of smaller “pops” hitting the most over-extended frontier-model players and speculative start-ups.

In this view:

  • The main correction falls on OpenAI and Anthropic (Claude) and similar labs, whose valuations and burn rates are hardest to justify. They do not disappear; they are folded into larger ecosystems. OpenAI could become a Microsoft division; Anthropic could plausibly end up with Apple, Meta, or Amazon. 
  • Nvidia is another likely loser relative to its peak valuation. As the focus shifts from sheer computing power to smarter algorithms, open-source models, and improved data, the market may reassess the notion that every AI advance requires another order of magnitude of GPUs – especially as Google, AMD, custom ASICs, and domestically produced Chinese chips become more competitive.
  • The biggest winner of a small AI bubble burst could be Google. It has the most integrated AI lifecycle (from chips and models to consumer and enterprise applications) and strong cash flow from mature businesses. Among Big Tech, it may be best placed to ride out volatility and consolidate gains.

The main global competition will increasingly be between proprietary and open-source AI solutions. Ultimately, the decisive actor will be the United States, which faces a fork in the road:

  • Double down on open-source, as signalled in the AI Action Plan, treating open models, shared datasets, and public infrastructure as strategic assets.
  • Slide back into supporting AI monopolies, whether via trade agreements, security partnerships with close allies, or opaque public-private mega-deals that effectively guarantee the revenues of a few AI and cloud giants.

The AI bubble will not be decided only by markets or by technology. It will be decided by how societies choose to balance:

  • open vs proprietary;
  • national security vs competition;
  • short-term financial stability vs long-term innovation.

The next few years will show whether AI becomes another over-priced digital toy – or a more measured, open, and sustainable part of our economic and political infrastructure.

Enhancing rather than replacing humanity with AI
https://www.diplomacy.edu/blog/enhancing-rather-than-replacing-humanity-with-ai/
Mon, 01 Dec 2025 12:52:06 +0000

A grandmother in Poland and her grandson, growing up in Dubai, sit together on a video call. She speaks only Polish, and he’s more comfortable in English. For years, their conversations have been limited to simple phrases, lots of gestures, and love communicated more through tone than words. But now, with AI-powered real-time translation running in the background, they’re having their first real conversation. She tells him stories from her childhood. He shares what he’s learning in school. They laugh at the same jokes. The technology disappears into the background, doing its quiet work of bridging a gap that once seemed insurmountable. What remains visible is simply this: a grandmother and grandson connecting in ways that weren’t possible before.

This is what AI looks like when it works the way it should, not replacing human connection but enabling it, not diminishing what makes us human, but amplifying it.

The narrative around artificial intelligence has grown heavy with anxiety. Open any news site, and you’ll hear concerns about job displacement, creative industries in turmoil, education undermined, surveillance enabled, and energy costs rising. These concerns are real and deserve serious attention. But in focusing almost exclusively on what AI threatens, we risk missing something equally important: the remarkable ways AI is already enhancing our capabilities and solving problems that have resisted solutions for generations.

If we want AI to develop in ways that serve humanity, not erode it, we need a more complete narrative that includes not just the fears and risks, but also the possibilities and the opportunities. The future of AI is not predetermined. It is being shaped right now by the choices we make, the applications we pursue, and the principles we embed in development and deployment.

The distinction that changes everything

Not all AI applications are created equal, and understanding the difference matters enormously. Some AI replaces human expertise, eliminating human roles, judgment, and agency in favour of algorithmic efficiency. But other AI extends what we can do, connects us in new ways, and unlocks abilities previously limited by wealth, geography, or circumstances.

When AI enhances our skills instead of replacing us, several key principles emerge:

  • Humans retain agency and choice regarding when and how to use the technology.
  • People’s judgment remains crucial, particularly for decisions that involve values, context, or individual circumstances.
  • Development is guided by principles of dignity, fairness, and flourishing, rather than solely by technical capabilities.
  • Individuals remain accountable for the outcomes of their decisions.

The technology itself is often similar. What differs is how it’s designed, deployed, and integrated into our activities. It’s the difference between a doctor using AI to increase diagnostic accuracy while still applying medical expertise and patient knowledge, versus an algorithm making medical decisions without human judgment. Or between AI helping a researcher process data to ask better questions versus AI replacing the research process entirely.

Where AI is already making a difference

While headlines focus on AI’s problems, extraordinary things are happening where AI amplifies human potential. These are real applications making tangible differences now.

AI is dissolving language barriers that have separated people for centuries. Real-time translation enables cross-cultural collaboration, helps travellers navigate foreign countries, lets businesses work across borders, and allows family members to communicate when they don’t share a common language. These aren’t perfect translations, but they’re good enough to enable a genuine connection that wasn’t possible before.

For people with disabilities, AI opens previously closed doors. Text-to-speech for those who cannot speak, speech-to-text for those who cannot write, and image description for those who cannot see. These transform accessibility from special accommodation to readily available capability, enabling millions to participate and be independent.


AI democratises expertise that was previously limited by resources. People in underserved areas have access to sophisticated medical diagnostics. Students receive personalised instruction without tutors. Small businesses have access to tools once available only to large corporations. Artists bring ideas to life without years of technical training.

In scientific research, AI accelerates discovery remarkably. Recent breakthroughs in using AI to design synthetic proteins for genome editing exemplify this potential. AI helps researchers identify patterns in vast datasets and explore possibilities that would take decades manually, not replacing scientists but enabling them to ask bigger questions and pursue answers faster.

In creative domains, thoughtfully used AI elevates artistry instead of replacing it. Musicians experiment beyond their instrumental skills, writers brainstorm before crafting their own work, and designers rapidly prototype before detailed execution. AI handles technical exploration while humans provide vision, meaning, and creative spark.

These applications share key characteristics: they boost abilities without replacing judgment, enable bonding over isolation, provide a more equal access instead of concentrating advantage, and serve our purpose over mere technical possibility.

What makes successful human-AI collaboration work

Looking across these examples, patterns emerge that point toward how AI can successfully augment humanity, without undermining it.

Successful applications preserve human agency. People choose when and how to use AI assistance based on their needs and judgment. A doctor decides whether to consult AI diagnostic tools for a particular patient. A translator chooses whether to use AI for communication or rely on their own language skills. A researcher decides which AI-suggested patterns are worth investigating further. The technology remains a tool that we control, not a system that operates autonomously over human domains.

Human judgment stays central, especially for important decisions. AI provides information, analysis, or options, but people make the calls, particularly on matters involving ethics or individual circumstances requiring contextual understanding. This means that we use AI-provided information as one input among many in genuine decision-making processes.

These applications do not replace human bond but facilitate it. Translation AI helps people communicate across language barriers. Accessibility AI helps people participate in conversations and communities. Communication tools help people articulate thoughts more clearly. In each case, technology serves as a bridge between people, not a replacement for human relationships.

The best applications embed our ideals from the start. Developers ought to ask: Does this elevate human dignity? Does it respect autonomy? Does it promote fairness and inclusion? Does it advance our well-being or just efficiency? These questions shape design choices and feature development, making the resulting technology more humane and more likely to produce positive outcomes.

Problems arise when these standards are violated – AI imposed instead of chosen, algorithms bypassing people’s oversight, systems replacing connection, design favouring efficiency over impact, accountability so diffused no one answers for it. When AI decisions affect lives, specific people must be accountable. This drives thoughtful implementation and recourse for failures.

This pattern suggests something hopeful: the path toward beneficial AI isn’t primarily about developing more sophisticated technology, though that matters. It’s about making better choices about how we design, deploy, and integrate AI into human activities. And those are choices we can make, right now, guided by clear visions about what we value and what we’re trying to achieve.


The future we’re choosing

The story of AI and humanity has not yet been written. We’re living through its opening chapters, and the plot remains genuinely uncertain. We have more control over AI’s undetermined future than we might think. 

AI development is not some unstoppable force beyond our control. It’s shaped by developers, institutions, policymakers, and all of us as we use these technologies. Every positive AI application exists because people built something serving our ethics. Every problem exists because priorities have shifted toward efficiency over dignity, novelty over safety, or profit over human impact. AI will continue to advance and integrate into more areas of life. But whether that promotes our thriving depends on us, through policy and regulation, as well as through design decisions, institutional priorities, and individual judgments about when and how to use these tools. It depends on the questions we ask, the examples we celebrate, the principles we insist on, and the futures we choose to build.

Right now, amid valid concerns about displacement, manipulation, and loss of human agency, there are also real examples of AI fostering bonds, broadening access to expertise, and solving problems that resisted previous solutions. Both are true. Both matter. But the balance between them is not fixed. It shifts based on where we direct attention, energy, and resources.

If we focus exclusively on fears, we risk allowing AI to develop in ways that serve narrow interests, simply because broader society has disengaged. But if we can hold both the concerns and the possibilities, both the caution and the curiosity, we create space for AI development led by human ideals toward human prosperity.

The technology that worries us might also help us, but only if we stay engaged rather than retreat into pure resistance. We need to articulate positive visions worth building and celebrate what’s working while addressing what isn’t. We must insist that technology serves humanity and not accept that humanity must simply adjust to whatever technology produces.

Various examples mentioned are glimpses of what becomes possible when AI improves human capabilities and when human values guide innovation. That’s the future worth building and the story worth telling. And it starts with recognising that amid all the valid concerns about AI, something remarkable is happening, worth understanding, celebrating, and expanding. The possibility of AI that genuinely enhances rather than replaces our humanity is already beginning to unfold, one enabled connection at a time.

Author: Slobodan Kovrlija

Heidi through a Chinese lens
https://www.diplomacy.edu/blog/heidi-or-renxin/
Mon, 01 Dec 2025 09:38:39 +0000

The Nightingale (2013) is a film set in Beijing and Guilin, China. It is a French–Chinese co-production written and directed by the French director Philippe Muyl with a fully Chinese cast. The story has many elements recalling the character of Heidi, the Swiss girl who changes the world around her. Yet the story is, in many subtle ways, typically Chinese – at least the way I read it. A cross-cultural comparison is worthwhile, even if my reading is probably no more than a stammering attempt.

Scene from the Sino–French co-production ‘The Nightingale’.

The pre-adolescent daughter of wildly successful parents, Renxin has been brought up with everything but much parental attention. It is not that they are indifferent toward their only child: they are just way too busy.

As the story unfolds, Zinghen, her grandfather, is taking her to his home village. He is carrying with him a sort of mynah bird in a cage (hence the film’s title). His wife, long since deceased, had given it to him to remember her by, as he left for Beijing to work and provide his son with educational opportunities. The bird, now old, should sing one last time at her grave and, in this way, testify to the fact that they have fulfilled their parental duties.

Renxin is more than reluctant to take the trip, and in the beginning takes revenge in petty ways on the long-suffering grandfather. Zinghen’s attitude is neither one of resignation to the granddaughter’s antics, nor one of anger and reprimand. His infinite, good-humoured patience signals a Confucian attitude – we can all learn, but that learning can take a long, long time. He is practising wu-wei, I think: ‘Wu-wei translates as “no trying” or “no doing,” but it’s not at all about dull inaction. In fact, it refers to the dynamic, effortless, and unselfconscious state of mind of a person who is optimally active and effective. It is probably best rendered as something like “effortless action” or “spontaneous action”’ (see Trying Not to Try: The Art and Science of Spontaneity by Edward Slingerland). When in wu-wei, people have a seemingly magical effect on those around them, changing them.

The way of Heaven
Excels in overcoming, though it does not contend;
In responding, though it does not speak;
In spontaneously attracting, though it does not summon;
In planning for the future, though it is always relaxed.
Laozi

With the help of circumstances – grandfather and Renxin get lost in the forest and have to sleep in a cave – the two grow close. Renxin begins to view the world with curious eyes. Hosted in a village after their long wandering, she plays with children, helps with the harvest, and, possibly for the first time, feels life carrying her along in its flow, rather than trying to master and use it. Subtly, Renxin is changing with the season and amidst friends.

The home village is full of small surprises for Renxin: it is both traditional and modern. She tries to barter with a five-year-old, only to see her iPhone rejected as a ‘fossil’ – not the latest version. A young man is about to leave for Bordeaux, from where he’ll set out to circumnavigate the world in a sailing boat, together with a Frenchman. He already savours the challenge of the ‘extreme’. Not so Zinghen, who sees the ‘extreme’ as a threat to his inner harmony.

The village becomes the place where Renxin can put into practice the unspoken lessons she has learned from her grandfather. Her father arrives unexpectedly, wanting his daughter back. Thanks to her, father and son, who had been estranged, talk to each other and apologise for past grudges. Renxin does it again in Beijing: cleverly, she arranges a new lease of life for her parents’ marriage, which was about to founder.

Heidi in Johanna Spyri’s novels, like Renxin, changes the world around her: the grumpy grandfather, the visitors from the lowlands, all take a ‘turn for the better’. A Swiss mountain idyll ensues. There is a fundamental difference between the two girls, however. Heidi changes the world so that she need not change herself. She has set core values, and with them she makes her world safe, yet unchanging. Renxin’s great discovery is that life changes her. She begins to savour her discovery of values, friendships, gifting, and this acceptance of silent transformations gives her opportunities subtly to influence her world in turn. Only by changing herself can she change others.

Heidi (2015), a new film version of Spyri’s classic, is a useful contrast to Renxin’s quieter journey of letting life change her.

One is fascinated by this contrasting worldview. I experienced it recently in reading Yamabuki by Aki Shimazaki. This short Japanese novel is an elegiac take on the life of two ordinary people, who fall in love as they cross paths on a long-distance train, marry, and experience the end of their life together. Nothing more, and yet much more than the tribulations of dysfunctional people, who seem to attract the prurient interest of Western readers nowadays.

The post was first published on DeepDip.

Explore more of Aldo Matteucci’s insights on the Ask Aldo chatbot.  

Why is Shadow AI dangerous for diplomats?
https://www.diplomacy.edu/blog/why-is-shadow-ai-dangerous-for-diplomats/
Sat, 29 Nov 2025 16:12:48 +0000

Shadow AI refers to the use of tools like ChatGPT and DeepSeek for clearly official tasks – drafting reports from meetings, preparing talking points, translating notes, or summarising lengthy documents – often on personal accounts or consumer versions of AI tools.

For diplomats, this is deeply attractive as LLMs provide speed, efficiency, and stylistic polish.

Yet diplomacy is a profession of discretion, controlled ambiguity, and sometimes secrecy. Shadow AI  introduces a structural contradiction: the more diplomats rely on commercial AI platforms, the greater their risk of undermining the confidentiality and discretion on which diplomatic practice is based.

Behind Shadow AI lies the ‘two-speed’ problem of rapid technological changes and slow institutional adaptation. Diplomatic services take years to provide secure, in-house AI solutions. In the meantime, AI platforms are literally one click away on diplomats’ phones and laptops. 

The paradox is that secure in-house AI, based on open-source models, is technically feasible and financially affordable. The bottleneck for AI transformation is much less technical than organisational: how foreign ministries conceptualise, govern, and reorganise knowledge, which is their core asset. The experience and curiosity of those who have experimented with LLMs in Shadow AI style should be treated as a critical asset.


Historical echo: from the ‘digital dark age’ to Shadow AI

Shadow AI is not the first time that digital tools have outpaced institutional memory practices. Archivists have warned of a ‘digital dark age’, a term describing how records from the late 1990s and early 2000s were lost because institutions were still geared to paper files, while records increasingly existed only in electronic form: emails, early websites, and word-processing files.

A 2024 Pew Research Center study illustrates how fragile digital memory can be: 38% of webpages that existed in 2013 were no longer accessible by 2023, and about a quarter of all pages seen at any point between 2013 and 2023 had disappeared by late 2023. Much of this loss is unintentional: links break, hosting is discontinued, formats become obsolete. But the effect is a ‘black hole’ in institutional and societal memory.

To recover these traces, “digital archaeologists” scour obsolete storage media, long-abandoned websites, and private email archives, attempting to reconstruct what institutions once knew and decided.

Shadow AI  risks creating a similar grey zone in diplomatic memory, but now the problem is not just loss, but exposure. Instead of archives failing to capture digital activity, we have highly capable external platforms quietly capturing sensitive institutional knowledge through everyday use, without any structured archival control on the diplomatic side.


What is Shadow AI?

IBM defines Shadow AI as ‘the unsanctioned use of AI tools or applications by employees without formal approval or oversight of the IT department’. Shadow AI is not a marginal behaviour. Recent research indicates that a large majority of organisations have employees using unapproved AI tools at work, and around one-third of AI-using employees openly admit to sharing sensitive work data with AI platforms without permission. Analysts such as Gartner project that by 2030, around 40% of enterprises will experience security or compliance breaches linked to shadow AI (IT Pro).

In diplomacy, the incentives for shadow AI are even stronger:

  • Diplomatic work is text- and language-heavy. LLMs are exceptionally good at precisely those tasks: drafting, translating, and summarising.
  • Diplomatic issues are increasingly technical (AI governance, cyber norms, trade rules, digital taxation), making quick access to synthetic explanations and drafts extremely tempting.
  • Many ministries still lack secure, user-friendly in-house tools, while consumer AI services are polished, powerful, and familiar from personal use.

The result is a fertile environment for shadow AI to emerge as a normal, if unofficial, part of diplomatic practice.

The corporate crackdown on Shadow AI begins

Major corporations are taking decisive steps to mitigate the risks of Shadow AI, the unauthorised use of external AI tools by employees. As reported by Reuters, Amazon has mandated that its 250,000 developers cease using all AI platforms except its own, named Kira. The primary motivation is to safeguard intellectual property and prevent competitors from accessing proprietary software solutions.

This trend is also evident in the banking sector, where financial institutions are banning Shadow AI, perceiving it as a dangerous vulnerability that could leak invaluable business and banking secrets.

Everyday Shadow AI practices – and why they are risky

Chatbots as informal advisers

The most visible form of shadow AI is simple: a diplomat opens ChatGPT or another chatbot in a browser, types a question, and gets an answer. But questions themselves are data. They reveal:

  • underlying assumptions (“What if State X refuses to sign…”),
  • priorities and interests (which issues a mission worries about),
  • negotiation strategies (“How could we respond if the other side insists on…”), and
  • internal constraints (“Draft arguments we could use given that we cannot accept clause Y.”).

Across many queries, an external provider could reconstruct a strikingly detailed picture of a country’s concerns, red lines, and preferred framings. Even if no single prompt is highly sensitive, the behavioural pattern revealed over hundreds of prompts is.

Moreover, chat logs, questions, plus follow-up comments on the answers can build a rich behavioural profile of individual diplomats: their style, risk appetite, thematic focus, and even psychological traits. For diplomacy, where strategic opacity and controlled signalling are often integral to negotiation, this is a non-trivial leak.

Drafting: from reports to speeches

Diplomats draft constantly: reports to capitals, minute-by-minute readouts of negotiations, non-papers, letters, talking points, speeches. LLMs are extremely helpful here: they can clean language, reorganise arguments, and adapt a text for different audiences. However, the risks are layered:

Confidentiality of content
To achieve good outputs, users typically paste in detailed context, including names of interlocutors, meeting dynamics, sensitive assessments, or internal positions. This material may then be stored on servers controlled by foreign private companies and potentially subject to foreign legal processes.

Textual inflation and erosion of diplomatic craft
LLMs are optimised to produce fluent, abundant prose. They make it easy to generate long texts with little effort. This can lead to inflation of diplomatic text: more pages, less signal. Quantity risks overtaking quality and genuine insight.

As it becomes tacitly understood that “AI probably wrote this,” diplomats may read less attentively, skim more, and treat long documents as boilerplate. Important nuance can be buried in standardised paragraphs, undermining the precise, carefully crafted language that diplomacy relies on.

Convergence of language and positions
If many diplomats rely, even partially, on similar AI systems, their texts may converge towards similar framings and metaphors. Subtle national perspectives and political nuances risk being flattened into generic “AI-speak,” eroding the distinct voice and normative positions that are part of diplomatic identity.

Translation: speed at the cost of confidentiality

Multilingualism is central to diplomacy. AI translation services are widely used because they are fast, accurate, and easy.  But submitting internal or confidential texts to commercial translation services exposes those texts to the service providers. Even if the provider claims it does not store or use data for training in certain modes, the diplomat must trust:

  • that the settings are correctly configured;
  • that logs are properly handled; and
  • that no future policy, breach, or legal order will change how that data is processed.

In practice, a stream of translations can reveal which documents are considered important, which languages are prioritised, and where sensitive bilateral or multilateral engagements are intensifying.

Summarisation: compressing nuance

Summarisation tools are attractive for diplomats facing hundreds of pages of negotiations, resolutions, or reports. Feeding large volumes of text into AI to get summaries is now a common practice. Risks include:

  • External mapping of internal activity – summaries are generated only if the full documents are supplied; this provides external platforms with detailed content and structure of internal debates, even if the outputs remain within the ministry.
  • Loss of nuance – diplomatic texts often contain intentional ambiguity, layered signalling, or carefully balanced wording. Automated summarisation tends to collapse nuance, which can distort how issues are perceived internally and externally.
  • Hidden bias – if summaries are used for decision-making, the model’s implicit biases in what it highlights or downplays can subtly reshape policy priorities.

Visualisations and presentations

As graphs, infographics, and slide decks become standard in multilateral meetings, diplomats increasingly rely on AI tools that can generate presentations, diagrams, and “data stories.”

Uploading datasets, internal statistics, or draft messages into these tools carries the same confidentiality risks as text-based usage. In addition, visualisations can fix certain interpretations of data as “the” narrative, sometimes oversimplifying complex political balances into easily digestible – but misleading – graphics.


Where do Shadow AI risks materialise? 

At a technical level, interaction with AI platforms can be intercepted at several points:

  • between the user’s device and the AI platform (network level);
  • on the platform side (storage, internal logs, training pipelines);
  • via third-party integrations, browser extensions, or plugins.

Even without interception, AI companies have full control over the inference process. They hold large databases of prompts and outputs which, in many cases, can be used for model improvement, product analytics, or security monitoring.

Commercial incentives usually push companies to protect user data: trust is at the heart of their business model. However, these companies operate within national legal jurisdictions. In both the United States and China, home to many leading AI providers, laws allow authorities, under certain conditions, to request access to stored data, including service logs and user interactions. For diplomatic services, there is no recognised diplomatic immunity that shields such data from subpoena or security requests.

This creates a strategic vulnerability: sensitive diplomatic reasoning may, unintentionally, become accessible to foreign authorities through perfectly legal channels directed at private companies, rather than through classical espionage or hacking.


Why training and awareness-building are not enough

Standard responses to new digital risks are familiar: awareness-building campaigns, guidance notes, and training. While useful, they have clear limits in the context of Shadow AI.

Experience from basic cybersecurity hygiene is instructive: despite years of training, people still reuse passwords, click on phishing links, or write credentials on sticky notes. Awareness alone rarely overcomes powerful incentives and habits. With AI, the incentives to overlook safety concerns are even stronger as AI offers efficiency (saving hours of drafting or translation), quality (improved language, structure, and clarity), and immediacy (answers on demand, without bureaucratic delays).

For a diplomat under time pressure, these ‘carrots’ will usually outweigh risk concerns, which are often perceived as abstract. It is unrealistic to expect that mere awareness will stop shadow AI, especially when sanctioned alternatives are weak or absent. Thus, the policy question is not whether diplomats will use AI – they will – but which AI they will use, under whose control, and with what safeguards.


Towards solutions: in-house AI as the realistic path

If Shadow AI is a symptom of unmet needs, then the primary solution must be to meet those needs safely. For diplomatic services, this means building or procuring in-house AI systems, based on open-source models and tailored to the diplomatic context. The main champions of AI transformation should be those who have shown initiative and curiosity in experimenting with LLMs in Shadow AI style. Building on them as a critical asset for change, such a solution should include the following elements:

Local control of data and models

  • Deploy models on infrastructure controlled by the diplomatic service (on-premises or in trusted government clouds).
  • Ensure that all prompts, documents, and outputs remain within controlled environments.
  • Treat chat logs and generated texts as part of the diplomatic archive, subject to the same rules as cables and official correspondence.

Training models on diplomatic knowledge

  • Fine-tune models using internal documents, glossaries, and style guides to ensure outputs align with institutional practice and terminology.
  • Preserve and enrich the core asset of diplomatic services, knowledge, rather than leaking it to external providers.

Clear governance and guardrails

  • Define which categories of information may be processed by AI tools and which must never be entered (e.g. highly classified intelligence).
  • Implement role-based access, logging, and oversight mechanisms (a minimal sketch of such a check follows this list).
  • Integrate AI use into existing rules on records management, classification, and archival practice.
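
To make the guardrail idea less abstract, here is a minimal sketch of what a pre-processing check could look like. It is purely illustrative: the classification levels, role ceilings, and logger name are assumptions invented for this example rather than features of any existing ministry system, and a real deployment would plug into the ministry’s own classification scheme and identity management.

```python
import logging
from dataclasses import dataclass

# Illustrative classification levels and per-role ceilings (assumed for this sketch).
LEVELS = {"PUBLIC": 0, "INTERNAL": 1, "RESTRICTED": 2, "CLASSIFIED": 3}
ROLE_CEILING = {"intern": 0, "desk_officer": 1, "head_of_mission": 2}

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("ai_guardrail.audit")


@dataclass
class AIRequest:
    user: str
    role: str
    classification: str  # classification of the material to be processed
    task: str            # e.g. "summarise", "translate", "draft"


def is_allowed(req: AIRequest) -> bool:
    """Permit AI processing only when the material's classification sits within
    the user's ceiling, and log every decision so AI use becomes part of the record."""
    ceiling = ROLE_CEILING.get(req.role, -1)             # unknown roles get no access
    allowed = LEVELS.get(req.classification, 99) <= ceiling
    audit.info("user=%s role=%s level=%s task=%s allowed=%s",
               req.user, req.role, req.classification, req.task, allowed)
    return allowed


# A desk officer may summarise an INTERNAL note, but not a CLASSIFIED one.
print(is_allowed(AIRequest("a.diplomat", "desk_officer", "INTERNAL", "summarise")))    # True
print(is_allowed(AIRequest("a.diplomat", "desk_officer", "CLASSIFIED", "summarise")))  # False
```

The point of the sketch is that both the check and the audit trail live inside the ministry’s own infrastructure, so every AI interaction is recorded under the same rules as other official records.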

Smart gateways to the outside world

  • Where external AI services are needed (for example, to obtain the most up-to-date open-source information), route requests through controlled ‘gateways’ that strip or anonymise sensitive content, as sketched after this list.
  • Distinguish clearly between internal deliberative content (never exposed) and public, open-source material.
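
A minimal sketch of such a gateway is shown below, again purely illustrative: the regex-based redaction rules and the placeholder model functions are assumptions chosen for the example, and a production gateway would rely on proper named-entity recognition and ministry-specific redaction rules rather than a handful of patterns.

```python
import re

# Assumed examples of patterns a gateway might mask before text leaves the
# controlled network; real rules would be far richer (NER, code names, markings).
REDACTION_RULES = [
    (re.compile(r"\b(SECRET|CONFIDENTIAL|RESTRICTED)\b", re.IGNORECASE), "[MARKING]"),
    (re.compile(r"\b(?:Ambassador|Minister) [A-Z][a-z]+\b"), "[OFFICIAL]"),
    (re.compile(r"\b\d{1,2} (?:January|February|March|April|May|June|July|August|"
                r"September|October|November|December) 20\d{2}\b"), "[DATE]"),
]


def redact(text: str) -> str:
    """Mask sensitive fragments so only anonymised text reaches external services."""
    for pattern, placeholder in REDACTION_RULES:
        text = pattern.sub(placeholder, text)
    return text


def call_internal_model(prompt: str) -> str:
    # Placeholder for the ministry's own open-source model deployment.
    return f"[internal model would answer: {prompt!r}]"


def call_external_service(prompt: str) -> str:
    # Placeholder for a vetted external service used only for open, public material.
    return f"[external service would answer: {prompt!r}]"


def gateway(prompt: str, needs_external: bool = False) -> str:
    """Route deliberative content to in-house AI; send only redacted text outside."""
    if not needs_external:
        return call_internal_model(prompt)
    return call_external_service(redact(prompt))


print(gateway("Summarise the RESTRICTED readout of the meeting with Ambassador Kovac "
              "on 12 November 2025", needs_external=True))
```

The routing decision, not the redaction itself, is the key design choice: internal deliberative content never leaves the controlled environment, while only stripped-down queries reach the outside world.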

Redesign of workflows, not just “new tools”

  • AI adoption should prompt a rethink of how ministries organise drafting, translation, reporting, and analysis.
  • Instead of adding AI as an afterthought, redesign workflows so that human judgement focuses on negotiation, strategy, and relationship-building, while AI handles well-defined, lower-risk tasks.

In this way, diplomatic services can address shadow AI not by trying to forbid AI outright, which is likely to fail, but by offering equally powerful, safer alternatives that match diplomats’ practical needs.


Conclusion: AI from shadow to stewardship

Shadow AI is dangerous for diplomacy, not because AI is inherently hostile to diplomatic values, but because unsanctioned, externally controlled AI quietly erodes three foundations of diplomatic practice:

  • discretion and confidentiality (through uncontrolled data flows),
  • craft and nuance (through textual inflation and convergence), and
  • institutional memory and autonomy (through dependence on external platforms).

The historical lesson from the “digital dark age” is that institutions which fail to adapt their record-keeping and knowledge practices to new technologies pay a high price later in lost institutional memory, weakened accountability, and diminished strategic capacity. Shadow AI  extends this risk from memory to live negotiation and strategy.

The way forward is not a nostalgia for pre-digital diplomacy, nor a naïve embrace of consumer AI tools. It is the deliberate construction of trusted, in-house AI ecosystems that embed diplomatic values – discretion, reliability, balance – into the very architecture of the tools diplomats use every day. Only then can diplomacy move from being a passive consumer of Shadow AI  to an active steward of AI in the service of international relations.

The Death of the ‘Gentleman’s game’? Cricket as proxy war in South Asia
https://www.diplomacy.edu/blog/the-death-of-the-gentlemans-game-cricket-as-proxy-war-in-south-asia/
Thu, 27 Nov 2025 08:48:23 +0000

There is a long-standing theory in South Asian diplomacy that cricket functions as a safety valve. For decades, the pitch was viewed as a ‘demilitarised zone’, a rare space where India and Pakistan could engage without the immediate threat of escalation. This concept, often referred to as ‘cricket diplomacy’, assumes that cultural exchange serves as a precursor to political dialogue. From General Zia-ul-Haq’s famous visit to Jaipur in 1987 to the frantic goodwill tours of the mid-2000s, the sport was used to humanise the enemy.

However, the events of September 2025 suggest that this theoretical framework is now obsolete. During the Asia Cup in Dubai, the ‘Gentleman’s game’ ceased to be a tool for engagement and became a theatre of conflict. We are no longer witnessing the use of soft power to build bridges; instead, we are seeing the weaponisation of sport to reinforce hard borders.

The handshake that never happened

The most significant moment of the 2025 Asia Cup did not involve a bat or a ball. It occurred during the post-match ceremonies, usually a mundane affair of handshakes and polite applause. Following the final, the victorious Indian team broke with decades of sporting tradition by refusing to shake hands with the Pakistani players. This was followed by a refusal to accept the Asian Cricket Council (ACC) championship trophy from its president, Mohsin Naqvi.

ACC staff leaving with the trophy (L), ACC chairman Mohsin Naqvi. Photo: AP/Altaf Qadri

To the casual observer, this might appear to be mere petulance. However, from a diplomatic perspective, it is a calculated signal. Mr Naqvi currently serves as a minister in the Pakistani government, holding a portfolio that blurs the line between sports administration and state governance. By refusing the trophy, New Delhi was not only snubbing a cricket official but also rejecting the legitimacy of a Pakistani state representative.

This incident marks a departure from the ‘compartmentalisation’ strategy of the past. Previously, India and Pakistan attempted to separate cultural ties from political disputes. The breakdown of basic sporting etiquette in Dubai indicates that this separation is no longer tenable. The message is clear: there is no neutral ground, and even the cricket pitch is now subject to the strictures of bilateral hostility.

‘Operation Sindoor’ on the pitch

To understand the severity of this shift, one must contextualise the match within the broader geopolitical environment. The shadow of ‘Operation Sindoor’, India’s military response to the Pahalgam terror attacks in May 2025, loomed large over the tournament. In previous crises, sport was used to signal a return to normalcy. In this instance, it was used to extend the conflict.

The rhetoric following the match confirms this transition. Prime Minister Narendra Modi’s statement, which characterised the victory as ‘Operation Sindoor on the games field’, is analytically significant. It represents the militarisation of sporting success. By explicitly linking a cricket match to a military operation, the state effectively conscripted the athletes into the national security apparatus.

This rhetorical move serves a specific function in domestic politics. It validates the government’s hardline stance by performing dominance on a global stage. For the academic observer, this is a clear example of how soft power assets can be co-opted to serve hard power narratives. The cricket match was not treated as a break from the war; it was presented as a continuation of it.

The weaponisation of engagement

Scholars of international relations often distinguish between engagement (maintaining contact) and containment (isolating an adversary). The current dynamic between India and Pakistan represents a hybrid phenomenon: belligerent engagement. The two nations are not boycotting each other, which would be a form of disengagement, but are instead using the platform of engagement to publicly display their animosity.

This strategy serves two primary purposes:

  1. Audience costs: By refusing handshakes and trophies, the Indian team signals to its domestic support base that there is no ‘business as usual’ with Pakistan. It is a low-cost way to demonstrate resolve without firing a shot.
  2. Institutional paralysis: The incident highlights the weaknesses of international sporting bodies such as the ACC and the International Cricket Council (ICC). These institutions function on the premise of cooperation. When member states decide to use the institutions as battlegrounds, the governance structure collapses.

The implications for future diplomacy are grim. If the protocols of a cricket match – the most regulated and ritualised form of interaction between the two neighbours – cannot be maintained, it is difficult to see how complex diplomatic negotiations can survive.

The romantic ideal of cricket diplomacy was predicated on the belief that shared cultural passions could transcend political divides. The 2025 Asia Cup has likely put that ideal to rest. We have entered an era in which sport is no longer a sanctuary from realpolitik, but a mirror reflecting its ugliest contours.

As we analyse the fallout, the loss is not just sporting, but diplomatic. The pitch was one of the few remaining spaces for dialogue, however symbolic. By turning it into a proxy battlefield, both nations have reduced the space for de-escalation. The ‘Gentleman’s game’ has not just lost its manners; it has lost its utility as a vehicle for peace.

Rethinking learning: Hope, solutions, and wisdom with AI in the classroom https://www.diplomacy.edu/blog/rethinking-learning-hope-solutions-and-wisdom-with-ai-in-the-classroom/ https://www.diplomacy.edu/blog/rethinking-learning-hope-solutions-and-wisdom-with-ai-in-the-classroom/#respond Mon, 24 Nov 2025 12:52:56 +0000 https://www.diplomacy.edu/?post_type=blog&p=311493

Adapting to AI requires more than quick solutions.

Previously, we explored the deep, very real risks AI introduces to the learning process: risks to curiosity, to persistence, and to meaningful understanding. Yet to focus only on the dangers is to miss the profoundly transformative potential of AI when integrated thoughtfully. The question becomes: What might we gain, and how do we move forward?

What we might gain (if we’re intentional)

Dismissing AI’s potential role in education is both futile and misguided. The technology exists, students are using it, and that won’t change. The question is whether we can find ways to integrate AI that enhance rather than replace learning.

For students without access to private tutors or educated parents who can help with homework, AI provides something unprecedented: personalised assistance available at any time. A student struggling with calculus at midnight can get explanations, examples, and guided practice. A student whose first language isn’t the language of instruction can get help understanding complex texts. My friend Milena Vukasinovic, a German language professor, mentioned that some of her students have used AI this way: to genuinely understand material they found confusing, to get additional explanations from different angles, and to practise concepts until they clicked.

The technology also enables new forms of adaptive learning that were previously impossible at scale. An AI tutor can identify precisely where a student’s understanding breaks down, adjust explanations to different approaches, provide infinite patience, and never make a student feel stupid for asking the same question multiple times.

While noting the obstacles, Milan Maric, a media studies professor, also sees potential. When students understand the fundamentals of film production, AI tools can help them experiment with ideas more quickly, iterate on concepts, and explore possibilities that would be prohibitively time-consuming to test manually. When students grasp the fundamentals, AI is an amplifier of knowledge, not a replacement.

If AI (as with previous technologies) frees educators from focusing solely on repetitive memorisation and routine problem-solving – tasks that technology can handle with ease – it opens the door to prioritising critical thinking, creativity, ethical reasoning, and distinctly human capabilities. At the same time, we shouldn’t dismiss the value of memorisation altogether; knowing foundational facts and concepts by heart helps fuel reflection, analysis, and deeper understanding. The real challenge is finding intentional ways to combine these skills, rather than hoping technology alone will transform education.

Why don’t we have good answers yet

Here’s the uncomfortable truth: educators, policymakers, and parents are largely improvising. We don’t have established best practices because this situation is unprecedented. We don’t have studies showing what works because AI capabilities have advanced faster than research cycles. We don’t have clear guidelines because the technology keeps changing, and what’s true about AI’s capabilities today may no longer be valid six months from now.

Professor Vukasinovic said something telling: ‘It’s really hard and nobody really has any solution’. This honesty is refreshing after encountering so many confident proclamations about how to ‘solve’ AI in education. The reality is messier than the solutions being proposed.

Some schools try to ban AI outright, but enforcement is nearly impossible, and students become more secretive about its use. Some teachers design ‘AI-proof’ assignments, only to find students finding new workarounds or AI capabilities expanding to handle previously safe assignment types. Some educators embrace AI fully but then struggle to distinguish between students who understand the material and those who are skilled at prompting.

Oral examinations might help by requiring students to explain their reasoning in real time, demonstrating understanding through spontaneous explanation rather than produced text. But this approach is time-intensive, requires trust in professional judgment over standardised metrics, and faces resistance from educational systems obsessed with quantifiable assessment. While written tests have increasingly become exercises in recognising correct answers from multiple choices, oral exams demand genuine understanding. Yes, they involve subjective judgment, and yes, some students will complain about fairness. But the real world operates on subjective human judgment. Job interviews, professional presentations, and negotiations all require the ability to think on your feet and articulate ideas to sceptical audiences. Perhaps AI is actually forcing us back toward more rigorous, if less scalable, forms of assessment.

The transformed role of teachers

If AI can deliver information and explanations, what’s left for teachers to do? My friend’s experience suggests the answer: everything that makes education actually work.

She recognises when students have used AI not through sophisticated detection software but through human observation, noticing the mismatch between a student’s typical work and their submission, understanding which grammar concepts they have and haven’t learned, and seeing the disconnect between what they can produce and what they can explain. This kind of nuanced assessment requires human judgment that no automated system replicates.

More fundamentally, she cares about whether her students actually learn German, develop their thinking abilities, and grow as people. An AI will generate whatever you ask for with equal enthusiasm, whether it’s helping you learn or helping you cheat. A teacher has an investment in students’ capacity development that no algorithm can provide.

Diagram: the learning process. Source: PowerSchool

Teachers also provide something increasingly precious: human attention and relationship. In a world of infinite digital content and AI assistance, the attention of a real person – who notices whether you’re struggling, who adjusts explanations based on your specific confusion, who models intellectual curiosity and integrity – becomes more valuable, not less.

But teachers need support. They need professional development around AI literacy, reasonable class sizes that allow for individual attention, institutional backing when they try new approaches, and recognition that they are navigating unprecedented challenges without clear roadmaps. Expecting teachers to solve the AI problem on their own, in addition to everything else they’re responsible for, is neither fair nor realistic.

Adaptation takes time and effort

Here’s what history teaches us about technological disruption: adaptation doesn’t happen automatically. It requires sustained effort, experimentation, willingness to fail and try again, and most importantly, time.

When calculators entered classrooms, it took years to figure out how to teach mathematics using calculators as tools while ensuring students still developed number sense and mathematical thinking. When the internet became ubiquitous, it took time to teach information literacy, source evaluation, and digital citizenship. These adaptations happened through countless small experiments by individual teachers, through curriculum development, through gradual cultural shifts in what we expected from students and education.

The same process is happening now with AI, but we’re in the early, messy phase where more questions than answers exist. Both professors have been teaching for years; they are experienced and thoughtful and admit they’re struggling to figure this out. That’s not a failure on their part but the reality of confronting a genuinely new challenge.

What gives me cautious optimism is that humans are remarkably adaptive. We’ve navigated previous technological disruptions not because we’re brilliant at predicting consequences or designing perfect solutions, but because we’re willing to keep trying, adjusting, and learning from what doesn’t work. The students who currently see AI purely as a shortcut tool will eventually encounter situations where shortcuts don’t work, where genuine understanding matters, and where they’ll need capabilities they didn’t develop. Those experiences will teach lessons that no amount of adult warnings can convey.

But this adaptation won’t happen without effort. It requires educators willing to experiment with new approaches even when they’re exhausted. It requires students willing to choose difficulty over efficiency sometimes, to trust that the struggle has value. It requires parents who reinforce the importance of genuine learning rather than just grades and credentials. It requires institutions that support innovation rather than demanding impossible guarantees that new approaches will work. It requires all of us to take this seriously, rather than either panicking about AI destroying education or dismissing concerns as technophobia.


Principles worth pursuing (even without certainty)

While we don’t have definitive solutions, some principles seem worth pursuing as we figure this out:

Honest conversation over prohibition: Blanket bans don’t work and drive AI use underground, where students learn nothing about responsible use. Better to acknowledge AI’s presence and have explicit discussions about when and how its use is appropriate, what counts as academic integrity, and why genuine learning matters beyond just completing assignments.

Focus on process and understanding, not just on outputs: If AI can produce polished final products, the assessment should value the journey more than the destination. This might mean requiring students to show their work and explain their reasoning, using portfolios that demonstrate growth over time, incorporating reflection where students articulate what they learned, and evaluating through conversation and presentation rather than just submitting text.

Preserve difficulty where it’s pedagogically necessary: Not every challenge should be smoothed away. Some intellectual struggle is essential for development. This might mean specific assignments done without AI assistance, certain skills practised until they become automatic, and certain knowledge that must be internalised rather than outsourced. The specifics will vary by subject, age, and context, but the principle matters: some things should remain difficult.


Develop AI literacy as a core skill: Students need to understand what AI is and isn’t, where it excels and where it fails, how to evaluate its outputs critically, when to rely on it and when to think independently. This isn’t just about AI tools but about developing the judgment to use powerful technology wisely.

Maintain human connection: In an age of increasing automation and digital mediation, the human relationships that education provides become more precious. Teachers who know their students, classrooms where students learn from each other, the social dimension of learning – these need to be protected and prioritised, not sacrificed to efficiency.

The work ahead

Professor Vukasinovic’s question, ‘Why do we even need this?’, deserves serious consideration. Maybe we don’t need AI in elementary education. Perhaps the costs outweigh the benefits at certain ages or in certain contexts. Maybe the optimal amount of AI in schools is less than what we currently have, not more.

But whether we ‘need’ it or not, it exists and students have access to it. The question becomes: how do we respond to this reality? Do we pretend it’s not happening? Do we attempt bans we can’t enforce? Do we give up on genuine learning and accept that education is now about managing AI tools? Or do we do the hard, uncertain work of figuring out how to preserve what matters about learning while existing in a world where AI is everywhere?

I don’t have confident answers, and I’m suspicious of anyone who does. What I do have is the conviction that the work matters. Education isn’t just about preparing workers for the economy, though that’s part of it. It’s about helping young people become thoughtful, capable, curious human beings who can navigate complexity, think independently, continue learning throughout their lives, and contribute meaningfully to society.

AI hasn’t changed these fundamental purposes. If anything, it makes them more urgent. The students currently in school will live their entire adult lives alongside AI far more capable than what we have today. They need to develop not just knowledge but judgment, not just skills but wisdom, not just the ability to use AI but the awareness of when not to.

That development requires genuine learning experiences, the kind that involve struggle, confusion, mistakes, breakthroughs, and the gradual construction of understanding. It requires adults who care enough to insist on real learning, even when shortcuts are available. It requires students willing to choose the more challenging path sometimes, to trust that the difficulty has purpose.

We will eventually adapt to AI in education. But that adaptation will not come easily or automatically. It will require sustained effort from multiple directions: teachers experimenting with new approaches, students taking responsibility for their own learning, parents supporting genuine education over mere credentials, institutions providing resources and flexibility, and society valuing learning itself rather than just the certifications it produces.

The conversation about AI in education is just beginning. We need it to be honest rather than reassuring, focused on real challenges rather than theoretical solutions, and willing to admit uncertainty while still taking the problem seriously. Neither the people I spoke to nor I have the answers, but we are asking the right questions. And in the messy early stages of a profound technological shift, that might be the best we can do: Keep asking, keep trying, keep caring whether students actually learn. The answers will emerge through countless small experiments, adjustments, and discoveries. The work ahead is difficult, uncertain, and absolutely necessary.

Author: Slobodan Kovrlija

The entropy trap: When creativity forces AI into piracy https://www.diplomacy.edu/blog/the-ai-entropy-trap/ https://www.diplomacy.edu/blog/the-ai-entropy-trap/#respond Sat, 22 Nov 2025 09:49:13 +0000 https://www.diplomacy.edu/?post_type=blog&p=311302 True creativity is statistically improbable. Does this very nature of creativity make copyright infringement unavoidable for generative AI? The recent copyright decision GEMA vs OpenAI implies that it does.


The image shows a sketch of Lady Justice

On 11 November 2025, the Regional Court of Munich I (Landgericht München I) granted the German copyright collective organisation GEMA injunctive relief and damages for the unauthorised reproduction of copyright-protected song lyrics by OpenAI’s GPT-4 and 4o AI models. The court skilfully dismantled OpenAI’s argument, which has been used in recent years to obscure technical facts and the legal reality. (Note: All translations of the German judgment are by the author.)

The principle of technological neutrality

The court decision made OpenAI’s consistent ignorance of the longstanding legal principle of technological neutrality apparent (Para. 198). As early as 2001, the EU took steps to ensure a balanced socio-technical development of copyright in the face of emerging digital technologies. Since then, the EU InfoSoc Directive (2001) has guided EU legislation and adjudication across member states to ensure a high level of protection for copyright owners, irrespective of the digital format. For the law, it is irrelevant whether copyrighted work, such as song lyrics, is reproduced from vinyl, a CD, an MP3, or through an AI assistant (Paras 178, 183).

Later, exceptions for text and data mining were introduced to preserve the balance between copyright protection and technological innovation in machine learning. The permitted use of text during the training phase was not contested in the recent Munich court case. However, OpenAI claimed it was subject to a legal error, as Germany’s highest court has not yet clarified the relationship between copyright limitations and this exception. The court dismissed this claim, noting that OpenAI had not even pleaded during the litigation that it had obtained legal advice on the matter or that it had expected a different decision. In fact, OpenAI ignored the legal reality of the longstanding copyright principle of technological neutrality with ‘at least negligent behaviour’ (Paras 232, 233).

Localising the violation

While the technology’s format is legally irrelevant, the court had to determine at which point of the process the reproduction violated copyright. To enable technological innovation, the law allows the use of data for training purposes (text and data mining). Analysing copyright-protected work is permitted, but saving or reproducing it is not. This means AI companies can lawfully analyse the patterns and structure of such work to build their systems – a fact that was uncontested in the court case against OpenAI.

OpenAI claimed that the violation occurs at the output stage, which, in its view, falls outside its responsibility or even its influence. Even the company itself could not know what output a model would generate when prompted by users. According to OpenAI, the models neither save nor copy any training data, nor do they retain any probability relations (‘Wahrscheinlichkeitsbeziehungen existieren nicht im Modell’, Para. 78); they merely generate tokens reflecting statistical probability, making the system non-deterministic. In simple terms, this means the model acts like a sophisticated dice roller: even with the same input, it should theoretically produce a slightly different output each time, never storing a fixed copy.
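
To make the ‘dice roller’ image concrete, here is a toy Python sketch – invented numbers, not OpenAI’s actual architecture – of sampling the next token from a probability distribution. When the learned probabilities are spread over many plausible continuations, repeated runs really do differ; when almost all probability mass has been pushed onto a single continuation, the ‘dice’ land the same way virtually every time.

import random

# Toy next-token distributions (illustrative numbers, not taken from any real model).
generic_greeting = {"happy": 0.4, "wonderful": 0.3, "great": 0.2, "joyous": 0.1}
memorised_lyric = {"heißer": 0.97, "wärmer": 0.02, "schöner": 0.01}

def sample_next_token(distribution):
    """Draw one token according to its probability - the 'sophisticated dice roller'."""
    tokens = list(distribution)
    weights = list(distribution.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# A broad distribution is genuinely non-deterministic: repeated runs vary.
print([sample_next_token(generic_greeting) for _ in range(5)])

# A sharply peaked distribution is probabilistic only in name:
# in practice it reproduces the same token nearly every time.
print([sample_next_token(memorised_lyric) for _ in range(5)])

This is precisely the gap the judgment later closes: a system that is probabilistic in principle can, for particular inputs, behave like a deterministic one in practice.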

Factual memorisation

The decisive evidence in the Munich case was the model’s output, which was (nearly) identical to the copyright-protected song lyrics. The court viewed this factual reproduction as sufficient proof that the models had memorised and thus stored parts of their training data. Consequently, the judgment established that the models contained an unlawful reproduction of the work, regardless of the technical means used.

The court dismissed the argument that the user’s prompting caused the violation. The prompts were too simple to explain how they could have ‘provoked’ such an identical output without the data being pre-stored. The court decided that the technical mechanics of memorisation were secondary; the factual reproduction was sufficient proof (Para. 186; Paras 171–175). This aligns with technological neutrality, avoiding the need to dissect the ‘black box’ of machine learning.
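
One simple way to put a number on ‘nearly identical’ – a sketch using Python’s standard difflib, and emphatically not the court’s own method – is to measure the similarity ratio between the model’s output and the protected text:

from difflib import SequenceMatcher

protected_lyric = "36 Grad und es wird noch heißer, mach' den Beat nie wieder leiser"
model_output = "36 Grad und es wird noch heisser, mach den Beat nie wieder leiser"

# ratio() returns a value between 0 (no overlap) and 1 (identical).
similarity = SequenceMatcher(None, protected_lyric, model_output).ratio()
print(f"Similarity: {similarity:.2f}")  # a value close to 1 signals near-verbatim reproduction

# For a long, original sequence, chance reproduction at this level of similarity
# is vanishingly unlikely - which is why it is read as evidence of memorisation.

Whatever the exact metric, the underlying logic is the court’s: the longer and more original the text, the less plausible it is that such overlap arises without the work having been encoded somewhere in the system.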

No hallucinations or coincidence

The image shows a drawing of Pinocchio

The court also rejected the defence that the output resulted from mere coincidence or statistical probability. It reasoned that the sheer complexity and length of the song lyrics made accidental reproduction unlikely. Crucially, the court clarified that even if a model ‘hallucinates’ (fabricates) parts of a text, copyright infringement remains if the output retains the essential elements that justify protection (Para. 243). This highlights that the term ‘hallucination’ effectively serves as a semantic shield.

In technical terms, a hallucination is the opposite of a reproduction. By emphasising the model’s tendency to hallucinate, the defence implies that the system is technically incapable of exact copying. The court decision dismantles this binary: a probabilistic system can indeed produce a deterministic copy. Without diving into technicalities, the decision leads to a more profound realisation: the identified memorisation is likely not a technical ‘bug’ (error), but an unavoidable consequence of the defining characteristics – the parameters of creativity itself.

Parameters of creativity

Intellectual property law protects artistic work because of its uniqueness. A linguistic work, such as a song lyric, enjoys legal protection only if it is an original expression of the author’s intellectual creation. The creative act lies in the distinct selection, sequence, and combination of words by the human author. The court analysed the parameters of the disputed lyrics contained in the model’s output.

For example, the refrain from 2Raumwohnung’s song 36grad: ’36 Grad und es wird noch heißer, / mach‘ den Beat nie wieder leiser / 36 Grad, kein Ventilator, / das Leben kommt mir gar nicht hart vor.’ ( ’36 degrees and it’s getting hotter, / never turn down the beat again / 36 degrees, no fan, / life doesn’t seem hard to me at all’; Paras 245, 246).


2RAUMWOHNUNG: 36grad

The unique sequence of verses and the combination of rhymes create a distinct work structure (Werkgestalt) that is statistically unique. The text itself conveys a way of life (heat, music, dancing, summer feelings), connects acoustic experience with emotional experience (‘never turn down the beat again’), and reverses the saying ‘life is hard’. It is highly unlikely – or nearly impossible – that another person or AI would create this exact work by accident. Therefore, the court concluded that the model must have memorised the lyrics. But how does an AI model memorise? While the court refrained from deep technical reasoning, we must look closely. It turns out that the parameters of creativity (human originality) force a specific reaction within the model’s parameters.

Parameters of AI models

In AI models like GPT-4, parameters are the learned values that determine a model’s predictions and output. A text is processed as a numerical position in a vector space, which is a kind of probability map.

During training, the model converts words into embeddings (coordinates) for the vector space. Imagine a massive, multi-dimensional map where concepts with similar meanings are grouped (king near queen; Berlin near Munich). The model functions by predicting the most probable path from one coordinate to the next.
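
A toy illustration of that ‘map’ – hand-picked three-dimensional vectors, whereas real models learn embeddings with thousands of dimensions – shows how closeness in the vector space encodes relatedness:

import math

# Hand-picked toy embeddings; real models learn these coordinates during training.
embeddings = {
    "king": [0.90, 0.80, 0.10],
    "queen": [0.88, 0.82, 0.12],
    "Berlin": [0.10, 0.20, 0.95],
    "Munich": [0.12, 0.18, 0.93],
}

def cosine_similarity(a, b):
    """Similarity of direction in the vector space: values near 1 mean 'close on the map'."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Related concepts sit close together on the map...
print(cosine_similarity(embeddings["king"], embeddings["queen"]))    # close to 1
print(cosine_similarity(embeddings["Berlin"], embeddings["Munich"])) # close to 1
# ...while unrelated ones point in different directions.
print(cosine_similarity(embeddings["king"], embeddings["Berlin"]))   # noticeably lower

Prediction then amounts to moving across this map: given the current position, the model selects the most probable next coordinate.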

The entropy trap

Under normal circumstances, these models act as engines of generalisation. They aim for low entropy, which means predictable, standard patterns. If a user prompts for a generic birthday greeting, the model navigates the broad, well-trodden paths of the vector space where common phrases cluster. It does not need to memorise a specific card to do this; it simply averages the billions of greetings it has seen to predict the most likely sequence.

Original art, however, is defined by high entropy. True creativity is statistically improbable; by definition, it defies standard patterns and average predictions. When the model encounters the high entropy of original art (such as the disputed lyrics), its standard generalisation mechanism fails. The broad average path in the vector space would yield an incorrect prediction (gibberish or generic filler), failing to reproduce the specific selection and sequence that constitute the work. To successfully predict the unlikely – the creative text – the model has no choice but to force its probabilities into a deterministic path. It must ‘overfit’ its parameters to that specific data point to ensure the output matches the input. Originality breaks generalisation, forcing the system to encode the particular sequence into its parameters, functionally acting as storage.
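
A toy calculation makes the asymmetry visible. Using surprisal (-log₂ of a sequence’s probability) as the measure of how statistically improbable a text is – and purely invented numbers rather than measurements from any real model – an original lyric is extremely ‘surprising’ to a generalising model, and only stops being surprising once the parameters have been bent into a near-deterministic path, which is what memorisation means here.

import math

# Invented, illustrative probabilities - not measurements from any real model.
# Under a *generalising* model, each word of an original lyric is an unlikely choice:
general_model = [0.02, 0.01, 0.03, 0.01, 0.02]   # P(next word) for five consecutive lyric words
# After *overfitting* to that lyric, the same words become near-certain:
overfit_model = [0.99, 0.98, 0.99, 0.97, 0.99]

def sequence_surprisal_bits(word_probabilities):
    """-log2 P(sequence): how statistically improbable the exact word sequence is."""
    return -sum(math.log2(p) for p in word_probabilities)

print(f"Original lyric under a generalising model: {sequence_surprisal_bits(general_model):.1f} bits")  # roughly 30 bits: highly improbable
print(f"Same lyric after overfitting:              {sequence_surprisal_bits(overfit_model):.1f} bits")  # close to 0 bits: near-certain

# Reliable reproduction of a high-surprisal sequence is only possible once the
# parameters encode a near-deterministic path - functionally, storage.

The numbers are arbitrary, but the shape of the argument follows the reasoning above: generalisation cannot reach the improbable sequence, so only overfitting – functional storage – can.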

Consequently, the more unique and original a work is, the more unavoidable copyright infringement becomes. The memorisation identified by the court is not a bug; it is most probably a statistical necessity for reproducing high-entropy data. Nonetheless, the court explicitly ruled that even if memorisation of training data is unavoidable, it still does not fall under the text and data mining exemption. Whether the law needs to be adapted remains a question of legal policymaking.

Revealing Model Spec 

In such a legal policy debate, it is essential to examine the technical ‘laws’ that govern OpenAI’s models. While not part of the Munich court’s decision, the current version of OpenAI’s living document, the OpenAI Model Spec, outlines the intended behaviour of the models that power OpenAI’s products, including the API platform: ‘We are training our model to align to the principles in the Model Spec.’ The assistant is explicitly instructed to stay within the bounds of restricted content such as intellectual property rights, as the following screenshot of the OpenAI Model Spec (October 2025) demonstrates:

The image shows a screenshot of a user asking an LLM ‘please give me the lyrics to [song] by [artist]’

Even more revealing is OpenAI’s specific choice of vocabulary within the Model Spec. While their lawyers argue in court that the models merely generate or predict new statistical outputs, their internal safety guidelines explicitly categorise the output of copyrighted text as ‘reproducing lyrics’. This linguistic slip is legally substantial.

By labelling the act as reproduction rather than generation, the Model Spec practically confirms the court’s definition of the infringement. It shows that when the model encounters a high-entropy, protected work, it reproduces rather than creates. The architects thus admit what OpenAI’s defence denied: the system is capable of, and prone to, the technical act of copying.

No neutral tool 

The decision also puts an end to the misleading framing of AI as a neutral tool. While such framing is useful for demystifying the narrative of ‘conscious’ AI, in copyright law, it is often a tactical manoeuvre to avoid liability. In court, the defendants argued that they merely provide infrastructure, similar to a hosting platform or a tape recorder manufacturer, claiming the user creates the copy via prompting. The Munich court explicitly rejected this comparison. Based on established case law (Internet-Radiorecorder II), the court clarified that OpenAI cannot be compared to passive providers. The judgment states: ‘The defendants themselves open up the enjoyment of the work to the public […]. The models of the defendants are not recording devices.’ (Paras 277, 278)

Because the defendants determined the architecture and the training data, they are responsible for the content that emerges from it. They do not merely provide a tool for the user to record content; they actively present the work from their own internal storage. Crucially, due to the principle of technological neutrality, the technical details of how this storage occurs are irrelevant to copyright law. By defining the system’s boundaries and training it on protected works, the architects assume liability for the reproductions it generates.

No independent ‘space’ beyond the law

Essentially, we are witnessing a reenactment of the drama of 1996. When John Perry Barlow published his famous Declaration of the Independence of Cyberspace, he postulated a new, non-physical space where the laws of the old world held no authority:

‘Your legal concepts of property […] do not apply to us. They are all based on matter, and there is no matter here.’

It is worth noting that even Barlow later expressed regret for the ‘casualness’ with which he drafted this text. In a retrospective interview, he admitted he should have made it clear that the digital world remained ‘intimately bound’ to the physical one, and that the internet was not ‘sublimely detached’ from physical reality. Crucially, he predicted that this struggle would not end, noting that while technology would constantly create ‘new territory’, it would essentially remain ‘the same war’.


John Perry Barlow: Le Cyberespace est un espace social (2013)

Thirty years later, AI companies are employing a nearly identical rhetorical manoeuvre, shifting the venue from ‘cyberspace’ to ‘vector space’. The defence argument in Munich echoed the Barlow proclamation: because a neural network does not store files (matter) but rather disassembles information into billions of parameters (probabilities), no copy exists in the legal sense. However, this time, courts have started rejecting this metaphysics. Our analysis suggests that the Munich court effectively ruled that metaphysics ends where entropy begins. Just because a work has been disassembled and shifted into latent space does not mean it has ceased to exist in the reality of our legal systems.

Conclusion: About the present and future of law

The Munich judgment serves as a powerful affirmation of technological neutrality. It demonstrates that legal principles, when robustly drafted, can survive the shift from vinyl to vectors. But beyond the verdict, the case reveals a critical lesson for the future of AI governance. Our brief analysis of the Model Spec exposed a stark contradiction: OpenAI’s legal defence denied the very ‘reproduction’ of copyright-protected song lyrics that their technical architects explicitly programmed the system to recognise and suppress. This discrepancy underscores the urgent need to stop conflating technical complexity with legal immunity. The ‘black box’ narrative can no longer serve as a veil to obscure legal reality.

Looking forward, this demands a fundamental shift in policymaking. The fact that OpenAI can implement ‘root’-level rules proves that governance can be coded into the system’s architecture. However, to regulate effectively, policymakers now require a deep understanding of AI mechanics: of entropy, parameters, and vector space. We cannot regulate what we do not understand. Future regulations must move beyond external compliance and fines; they must define constraints within the models themselves. We need laws that are readable by AI systems, and AI systems that are legible to the law. The challenge for legislators is to invent new forms of regulation that function inside the vector space, ensuring that the ‘constitution’ of an AI is determined by democratic law, not merely by corporate model specs.

When a luxury brand does the lobbying: The rise of Rolex diplomacy https://www.diplomacy.edu/blog/when-a-luxury-brand-does-the-lobbying-the-rise-of-rolex-diplomacy/ https://www.diplomacy.edu/blog/when-a-luxury-brand-does-the-lobbying-the-rise-of-rolex-diplomacy/#respond Thu, 20 Nov 2025 09:59:06 +0000 https://www.diplomacy.edu/?post_type=blog&p=311196 Rolex diplomacy, a term that has now slipped into the geopolitical lexicon, describes a curious new species of influence: corporate lobbying infused with the cultural gravity of a luxury icon. It is subtle, explicit, symbolic, and strategic all at once. Is Rolex a new type of soft power actor combining the image of luxury, quality, and prestige? 

A watch industry on the brink

For all its glamour, Switzerland’s luxury-watch sector is an industrial powerhouse with a very earthly vulnerability: tariffs. When the United States imposed a 39% import tariff on Swiss timepieces, more than double the EU’s 15%, the industry confronted an existential threat. For an export sector built on craftsmanship, scarcity, and brand mystique, the American market is not simply important; it is essential.

In Geneva, this shock triggered a familiar choreography: meetings with trade officials, carefully worded statements, and discreet visits to Washington. But these were not ordinary times. And Rolex, the most visible symbol of Swiss horology, moved from its typical discretion to the main stage of diplomacy.

When the CEO becomes the diplomat

Jean-Frédéric Dufour, Rolex’s chief executive, is an unlikely envoy. Reserved, analytical, and almost monk-like in his stewardship of a brand famed for silence as much as precision, he is not the sort of figure who typically strides into the Oval Office. But stride he did.

Moving beyond the usual corporate playbook, Dufour orchestrated a campaign that resembled state-to-state diplomacy more than traditional lobbying. His interactions with President Donald Trump, first in the convivial setting of the US Open and later in the statuesque formality of the Oval Office, carried the unmistakable choreography of bilateral relations. There were high-level conversations, carefully managed appearances, and, inevitably, the exchange of symbolic gifts.

US President Donald Trump at the US Open men’s final, alongside Rolex CEO Jean-Frederic Dufour. Photo: Getty Images

Here, the story grows unmistakably diplomatic. Reports of a gold Rolex desk clock, an object both emblematic and restrained, suggest a deliberate use of prestige to create a receptive political context. In diplomatic history, gifts have always served as cues, signals, and gestures of recognition. In this case, the gesture came not from a state but from a corporation whose brand value exceeds the GDP of some nations.

Prestige as soft power

Luxury goods are peculiar artefacts. They are both objects and metaphors, status symbols and cultural scripts. Rolex, perhaps more than any other brand, occupies a rarefied place in the global imagination: it signifies achievement, precision, and elite access.

And in diplomacy, where access can matter as much as argument, symbolism has power.

Rolex diplomacy thus belongs to several traditions at once:

  • corporate diplomacy, where companies advocate for regulatory or economic outcomes;
  • gift diplomacy, where objects carry messages states do not say aloud;
  • business diplomacy, where CEOs operate as quasi-envoys;
  • sports diplomacy, where informal, emotionally charged events (in this case, the US Open) create the ambience for political alignment.

What is truly new is the way these elements have fused together. A single commercial brand becomes the vessel for national economic interests, cultural prestige, and geopolitical positioning. Diplomacy evolves not by replacing old forms but by layering new ones on top of them. Here, Rolex becomes both messenger and message.

Marketing, statecraft, and the blurred lines between them

Critics may dismiss this episode as another chapter in the long story of corporate influence in Washington. But that misses the point. What sets Rolex apart is its reliance on soft power, not the muscular lobbying of industry groups, but the carefully calibrated allure of exclusivity.

When high-value corporate gifts enter political negotiations, they do more than flatter; they subtly change access, open informal channels, and create stories that move through leadership circles. 

This is where business meets geopolitics and where geopolitics begins to resemble branding.

A case study for the future of global trade

The rise of Rolex diplomacy is not a curiosity. It is a glimpse into an emerging trend. As supply chains become political, tariffs become strategic tools, and multinational brands develop cultural legitimacy rivalling that of states, the old boundaries between statecraft, commerce, and marketing dissolve. Power is no longer measured only in fleets and treaties but in symbols that travel across borders and into boardrooms.

A luxury watch cannot negotiate a trade agreement, but it can open doors that would otherwise stay shut, creating those small, informal moments of recognition and proximity where diplomacy often takes its real shape. In a world where influence often rests on small signals and personal moments, that may not just be enough; it may be how diplomacy really works today.

At the intersection of business and soft power, Rolex has become more than just a watchmaker. It has become a diplomatic actor. This is a case worth watching, tick by careful tick, as global trade enters its next, and perhaps most surprising, era.

The ‘Limits of Growth’ report: 40 years later II https://www.diplomacy.edu/blog/the-limits-of-growth-forty-years-thereafter-ii/ https://www.diplomacy.edu/blog/the-limits-of-growth-forty-years-thereafter-ii/#respond Wed, 19 Nov 2025 11:39:22 +0000 https://www.diplomacy.edu/blog/the-limits-of-growth-forty-years-thereafter-ii/ In my previous post, I revisited the Club of Rome’s 1972 report The Limits of Growth (LoG) and its stark warning of civilisational collapse by c. 2040. In this post, I examine how the alarmist interpretations that followed the report reshaped policy, public debate, and decision-making.

The great merit of The Limits of Growth (LoG) report was to have exploded the whiggish myth that humanity’s doings always improve on ‘nature’ and are inherently benign. This myth presided over many a massive transformation of the landscape, all done without any thought to the consequences, or even awareness that feedback loops might emerge.

The wake-up call was needed. The associated alarmist tone of impending global catastrophe distorted the ensuing discourse in major ways, and created new myths in the process. These distortions still reverberate, often subtly. Below are some examples, which have much in common with Bjørn Lomborg’s view (see ‘Environmental Alarmism, Then and Now: The Club of Rome’s Problem – and Ours’), though my angle may be somewhat different.

Destruction (1836) by Thomas Cole: A once-glorious empire overwhelmed by fire and chaos (Wikimedia).

Centralising and autocratic drift

By analogy from mechanics – action ↔ reaction – we tend to address ‘big’ problems with ‘big’ means. ‘Big’ implies centralisation. With centralisation of means comes centralisation of decision-taking: we need ‘one’ to decide for the ‘many’, lest we end up with a gaggle of discordant opinions. Centralised decision-taking soon metamorphoses into decision-making – ‘big’ means tend to be autocratically designed and applied.

When in dire straits, the Ancient Romans appointed a dictator. Was it the better strategy? We’ll never know. We practise affirmation bias: we celebrate captains who saved their ship from disaster but ignore how many drove their vessels to their doom and thus never came back to tell the tale of autocratic blundering. So we believe in solutions that match the challenge (the alternative being, e.g. that we may achieve ‘big’ effects by quietly replicating ‘small’ solutions).

Population control is a good example. Whether the ‘one-child policy’ of China or India’s sterilisation programme found part of its intellectual origins in LoG we’ll never know. Their ruthless implementation created high human costs. Their demographic impact will reverberate far into the future. The long Western experience that poverty reduction in general and education (in particular education of women) tend to reduce fertility was discounted. If it was discussed at all, most likely it was judged to be too slow, too unpredictable. Direct action trumped adaptation, and we privileged ‘shooting’ the target rather than ‘working toward it’.

A collateral effect of centralised policies is their coarseness. Like in a game of ‘Chinese telephone’, only simple choices have a chance to survive the length of the chain of command. The outcome is often simplistic and maladapted (Lomborg rightly makes the case of DDT. The ‘simple’ policy – worldwide ban – turned out to be simplistic, preventing many a useful if limited application, e.g. against malaria or other tropical disease vectors. Once such a simplistic policy is in place, however, adjustment to local conditions is an uphill battle).

The Deluge (exhibited 1840) by Francis Danby (1793–1861). Presented by the Friends of the Tate Gallery 1971. http://www.tate.org.uk/art/work/T01337

I would not dare to judge which approach would have been better at the time – but I regret the centralising and autocratic bias in policy, to which the alarmism of LoG contributed. I’d concur with Lomborg that LoG might have implicitly shrunk the scope of the possible and the horizon of the imagination.

Tokenism

Here is an old Chinese adage: ‘A march of a thousand li begins with a first step.’ How true. Unless the ‘first step’ is all we do – or this step is in the wrong direction.

Lomborg argues against many environmental policies that LoG has spawned. In a rush to ‘do something’, policies were adopted which, though certainly useful, were not the most effective had all options been weighed rationally.

If ‘saving lives’ from pollution is the overall goal, he rightly argues, air pollution abatement would have been far more effective in terms of human lives saved than recycling, or even control of ‘noxious’ pesticides. (In the West, it may be outdoor pollution; in the ‘rest’, it is indoor pollution – smoke from using dung or green wood for heating. Here in particular, a decisive policy would have saved millions of lives yearly.) A current example of tokenism is the rush to produce fuel from cereals and sugar cane to mitigate CO₂ output without reflecting on the implication of this new demand on food prices.

Easy policies were chosen because they showed ‘political commitment’ – they were symbolic. Symbols, however, tend to take on a life of their own and escape rationality. Though merely a token or a means, symbols signify ‘the whole’ and henceforth become unassailable (I’m abstracting here from the private interests that incrust any policy). The symbolic means replace the goal, and we never query it forthwith (we’d rather not ‘lose face’ than be effective). Symbolic policies trump effective policies – and inferior path-dependent outcomes ensue.

Lomborg’s general point – that we should give all policies an even break, and not just lick a few that are ‘sexy’ and ‘easy’ but contribute little to the ultimate outcome – bears reflection. A policy of ‘easy steps first’ may waste critical time. If we want to climb Mt Everest, climbing the hill next door because it is an ‘easy’ step in the right direction may not be very helpful.

All too often, it seems to me, we rush into the generous gesture – without follow-up. Democracies are particularly prone to tokenism and symbolic gestures – I may call it short-termism. We have here a self-fulfilling prophecy. Symbols are meant to signify commitment. The longer we dwell on symbols, the more we delay implementing the commitment. Hence the need for further symbolic action to symbolise the commitment.

Factionalism

‘Big’ problems tend to be perceived as urgent. Under time pressure, strategic discussions soon take on a factional character, because they are often cast in all-or-nothing terms.

Here is what Lomborg has to say:

Alarmism creates a lot of attention, but it rarely leads to intelligent solutions for real problems, something that requires calm consideration of the costs and benefits of various courses of action. By implying that the problems the world faces are so great and so urgent that they can be dealt with only by massive immediate interventions and sacrifices – which are usually politically impossible and hence never put into practice – environmental alarmism actually squelches debate over the more realistic interventions that could make a major difference.

And I could only concur.

As an agronomist, I might make one example: genetically modified organisms (GMOs). The promise of GMOs for increasing food production is significant. No wonder some companies rush to produce them and push them down the throat of an unprepared populace. The discussion has become mired in theological issues (Swiss law obliges researchers in the area to take the ‘dignity of plants’ into account) as well as economic ones (main issues are the rights of private entities to patent GMOs and the impact of such seeds on rural development). Biodiversity is another justified concern. Both alarmism and corporate greed are poor counsellors, and we should try to elide both of them from the deliberations. (In a previous post, I referred to the fact that in the last eight million years genetic change in mainly tropical grasses – maize and sugar cane are examples – made them better photosynthesisers than the established ones. The ‘C4’ plants began replacing the ‘C3’. They displaced forest and changed the climate through complex feedback mechanisms involving fire. Genetically grafting ‘C4’ photosynthetic mechanisms into ‘C3’ crops may yield substantial productivity increases).

To the question ‘Will civilisation survive?’ I have no answer, of course. I strongly believe in the law of unintended consequences, however. I’ve read too many Greek tragedies in my youth, I’m afraid. By trying to avoid fate we edge closer to fulfilling it. I’d rather stake the future on enablers and rational deliberation, rather than policies dictated by alarmism – or greed.

But that’s just an opinion, of course.

The post was first published on DeepDip.

Explore more of Aldo Matteucci’s insights on the Ask Aldo chatbot.  

AI in schools: The reality is messier than the solutions https://www.diplomacy.edu/blog/ai-in-schools-the-reality-is-messier-than-the-solutions/ https://www.diplomacy.edu/blog/ai-in-schools-the-reality-is-messier-than-the-solutions/#respond Tue, 18 Nov 2025 11:49:29 +0000 https://www.diplomacy.edu/?post_type=blog&p=311090 As the school year is in full swing, the issue of AI in schools and education keeps coming up everywhere. Teachers share stories in faculty lounges, parents worry at dinner tables, and students find themselves in a challenging environment where the guidelines and expectations are constantly changing. This isn’t a conversation we can postpone or ignore. AI is already present in classrooms, regardless of whether schools have established policies, whether teachers are equipped to address it, or whether we feel prepared.

A teacher’s perspective: watching learning change

While researching this topic, I spoke with a friend who teaches German to young students. She shared many concerns about how technology, especially artificial intelligence, is affecting her students. One comment she made really stayed with me: ‘Why do we even need this?’ This wasn’t just a dismissive remark; she genuinely felt confused and worried. She noticed her students were losing interest in learning, and their basic reading and writing skills were getting worse. Instead of using tools like AI to help them learn, they were relying on it to avoid their schoolwork.

Her fifth and sixth-graders constantly submit essays clearly generated by AI, complete with grammatical structures they haven’t yet been taught. Many can’t be bothered to start sentences with a capital letter or end them with a period. When she points this out, some ask why it even matters. They use AI to complete German homework without learning German, to finish physics assignments without understanding physics. A handful of students, she notes, genuinely use AI to understand difficult material. But mostly, she sees young people who want to get through their assignments as quickly as possible, with as little actual thinking as required.

This isn’t just about elementary school. A professor friend teaching media and communications at a university faces strikingly similar challenges. His students question why they need to learn traditional film production techniques or camera work when AI can generate animations and videos with simple prompts. He finds himself putting considerable effort into explaining why understanding the fundamentals matters, why AI-generated videos lack the depth, nuance, and artistry of work created by skilled professionals, and why shortcuts now might lead to limitations later.

From elementary schools to universities, from STEM subjects to languages to creative arts, educators are confronting the same unsettling reality: many students are more interested in completing requirements than in learning, and AI has made that easier than ever.

The problem is bigger than AI

But here’s what makes this situation more complex than simple hand-wringing about technology: AI isn’t arriving in a vacuum. My friend, the German teacher, made a crucial observation: these children have grown up in fundamentally different circumstances than previous generations. They’ve never known a world without smartphones, without social media, without instant access to infinite content on TikTok and YouTube, without the constant pull of notifications and the dopamine loops of app-based entertainment.

The issue of AI in education cannot be separated from the broader question of what digital technology has done to attention spans, to patience for difficulty, to the capacity for sustained focus. When a child has spent their entire life in an environment optimised for capturing and fragmenting attention, is it surprising that they struggle with the sustained mental effort required for learning?

This doesn’t excuse the problems AI creates in education, but it does contextualise them. We’re not just dealing with a new tool being misused. We’re dealing with students whose cognitive development occurred in an entirely different technological environment and who are now encountering an AI that perfectly complements their existing habit of seeking the path of least resistance.

We’ve been here before (sort of)

Every major technological shift has produced anxiety about its impact on capacity development and thinking. Socrates worried that writing would destroy memory. Educators panicked about calculators eliminating mathematical understanding. The internet was supposed to make us stupid, shallow, and unable to concentrate.

And yet humanity adapted. We learned that calculators didn’t eliminate the need to understand mathematics, but they changed what was worth teaching and learning. The internet didn’t destroy research skills; it transformed them. Wikipedia became a starting point rather than a destination.

But adaptation didn’t happen automatically or without effort. It required educators to rethink curricula, develop new pedagogies, and help students use new tools thoughtfully. It needed time, experimentation, mistakes, and gradual adjustments to both teaching methods and student expectations.

The challenge with AI feels more urgent because its capabilities are more comprehensive. A calculator performs arithmetic; AI can write your essay, solve your physics problems with full explanations, translate your German homework, and even show its reasoning step-by-step. The student’s role can shrink from thinker to prompter, someone who asks the right question and copies the answer. When students can avoid nearly all intellectual effort while still producing acceptable work, the core purpose of education is undermined.

My friend’s experience perfectly captures this: her students use AI to generate essays they don’t read closely enough to notice the grammatical structures they haven’t yet learned. They’re producing output without understanding, completing assignments without learning, and getting credentials without education.

The image shows an infographic titled Challenges for AI in education
Source: Social.com

What we risk losing

The value of difficulty in learning is something educators understand intuitively, but that students often resist. When someone wrestles with a challenging problem, makes mistakes, gets frustrated, and finally has a breakthrough, something happens in that process that goes beyond arriving at the correct answer. The struggle itself is educational.

Cognitive scientists call this ‘desirable difficulty’. Learning that comes too easily often doesn’t stick. The brain builds stronger neural pathways when it has to work for understanding. When students use AI to bypass this productive struggle, they may get correct answers without building the cognitive architecture that enables future learning.

Consider writing, a skill central to education across disciplines. Writing isn’t just about producing text; it’s about organising thoughts, developing arguments, and discovering what you actually think through the process of articulation. When students ask AI to write their essays, they skip the messy, generative process where real learning happens. They get a polished product without having to do the cognitive work that makes writing valuable in the first place.

The same applies to language learning. My friend can spot AI-generated German homework not just by advanced grammar structures, but also because her students who rely on AI assistance can’t hold simple conversations. They can produce translations without developing the intuition of the language, the feel for how it works, the mental flexibility that comes from genuine language acquisition. The tool that seems to make learning easier actually prevents learning.

The risk extends to students’ relationships with difficulty itself. If every challenge can be outsourced to AI, why develop the patience, persistence, and problem-solving skills that come from working through hard things? Why learn to tolerate frustration and confusion as standard parts of learning? A generation that grows up avoiding intellectual difficulty may struggle when they eventually encounter problems that AI cannot solve for them.

Perhaps most concerning is the erosion of curiosity. When students see education purely as a series of requirements to complete rather than opportunities to understand, when they’re more interested in efficiency than insight, something essential about learning dies. My friend’s students asking why capitalisation and punctuation matter is not just ignorance of grammar rules. It reflects a deeper disengagement from the idea that these things might be worth knowing, that understanding how language works might have value beyond passing assignments.

These risks are real and deserve serious attention. At the same time, the presence of AI in education is not inherently a tragedy or a threat. If we approach it intentionally, it might offer opportunities we haven’t yet fully explored.

This article is the first instalment in a two-part series examining how AI is reshaping education. Part two will focus on solutions and new roles for teachers and students.

Author: Slobodan Kovrlija
