Great Power Coordination
How can we coordinate action between the Great Powers to address existential and global catastrophic risks?

Interested in working on this research direction? Apply for our coaching

This profile is most likely to be relevant to students studying economics, history, law, political science (particularly international relations) and sociology, but we expect there to be valuable open research questions that could be pursued by students in other disciplines.

Why is this a pressing problem?

Existential and global catastrophic risks – such as the engineering of dangerous pathogens, the risks posed by increasingly powerful AI, and the possibility of nuclear war – cannot be addressed by countries in isolation. These are complex, global problems that require a cooperative, coordinated response by multiple countries – especially the great powers of China, the US, India and Russia – and across multiple levels, including governments, research labs, non-state actors, and individuals.

To identify effective strategies for coordination, there needs to be a good understanding of the diverse perspectives, cultural norms, incentives and institutions in different countries, as well as potential areas of common ground. This requires further research on existential and global catastrophic risks in the context of multiple great power countries.

Research on the relationship between the US and China is especially vital because of the economic, technological and military power of these two countries. Recently, this topic has been highlighted as a neglected and pressing problem by 80,000 Hours (see Improving China-Western Coordination and China-related AI safety and governance paths). According to power transition theory, the potential for conflict between the US and China grows as the power of the challenger nation (China) approaches that of the established nation (US). However, it is also important to understand and evaluate evolving dynamics between all the great powers.

Watch the talk below from Brian Tse to learn more about the need for AI safety coordination between the US & China.


Explore existing research

  • Concordia Consulting works to promote the safe and responsible development of AI, in part by coordinating Chinese and Western stakeholders on AI safety.
  • The Future of Humanity Institute is a multidisciplinary research institute at the University of Oxford. Academics at FHI bring the tools of mathematics, philosophy and social sciences to bear on big-picture questions about humanity and its prospects.
  • The Centre for the Governance of AI is a research organisation focused on helping humanity transition to a world with advanced AI.
  • The Center for Security and Emerging Technology is a research organisation focused on the effects of progress in AI, advanced computing and biotechnology.
  • The Nuclear Threat Initiative collaborates with governments and organisations to reduce nuclear and biological threats.

Find a thesis topic

If you’re interested in working on this research direction, below are some ideas on what would be valuable to explore further. If you want help refining your research ideas, apply for our coaching!

Depending on the options your degree programme offers, you might need to prioritise either building expertise on a particular global problem first, or building your knowledge of the institutions and culture of a specific country or countries, before doing research at the intersection of these topics.

Most of the research directions we list under existential and global catastrophic risks could be usefully explored through the lens of great power coordination. The profiles below are particularly relevant:

Some research questions at the intersection of these profiles and great power cooperation are below. 

  • “How substantial of an advantage does China have, as compared with other advanced developed (mostly liberal democratic) countries, in its ability to channel its large economy, collect and share citizen data, and exclude competitors? What steps could and would the U.S. take to reinforce its lead? What are the possibilities and likely dynamics of an international economic AI race?” (from AI Governance Agenda by Allan Dafoe)
  • Exploratory research which seeks to better understand the current state of AI research and development in China, India or Russia (inputs, capabilities, and performance). (adapted from AI Governance Agenda by Allan Dafoe)
  • “What economic mechanisms (e.g. auction or sharing) would be best for allocating space in low-Earth orbit when it becomes much more valuable and crowded? (Potentially relevant analogies include deep-sea mining and sharing arrangements for long-range electromagnetic waves used by satellites.)” (80,000 Hours)
  • How can game theory be applied to explain and predict great power cooperation over providing international public goods? See Scott Barrett’s book ‘Why Cooperate?’ for an example of work in this area.
  • “One set of possibilities for avoiding an AI arms race is the use of third party standards, verification, enforcement, and control…What are the prospects that great powers would give up sufficient power to a global inspection agency or governing body? What possible scenarios, agreements, tools, or actions could make that more plausible? What do we know about how to build government that is robust against sliding into totalitarianism and other malignant forms? What can we learn from similar historical episodes, such as the failure of the Acheson-Lilienthal Report and Baruch Plan, the success of arms control efforts that led towards the 1972 Anti-Ballistic Missile (ABM) Treaty, and episodes of attempted state formation?” (Allan Dafoe’s research agenda)
  • “The effectiveness of specific programs, including Track II diplomacy and academic, business or scientific exchanges.” (Founders Pledge report)
  • “The relationship between international rivalries and technological development.” (Founders Pledge report)
  • “How great are the dangers from [an AI race (between the US & China)], how can those dangers be communicated and understood, and what factors could reduce or exacerbate them?” (from AI Governance Agenda by Allan Dafoe)
  • “What routes exist for avoiding or escaping the [AI] race, such as norms, agreements, or institutions regarding standards, verification, enforcement, or international control? How much does it matter to the world whether the leader has a large lead-margin, is (based in) a particular country (e.g. the US or China), or is governed in a particular way (e.g. transparently, by scientists)?” (from AI Governance Agenda by Allan Dafoe)
  • “Comparative law may offer insights on potential gaps and more effective measures, yet little research exists comparing biosafety governance in different countries, let alone the relative effectiveness of different strategies. What laws and regulations exist in different countries to minimize accident risks? To what extent have they been implemented in practice? How might their effectiveness be measured, and what uncertainties exist in such an analysis? What do they reflect about biosafety norms?” (Legal Priorities Project)
  • “Regulating the use of weapons in outer space: Current international laws provide insufficient regulations to prevent an arms race in outer space. It could be useful to explore the viability of potential solutions, such as confidence-building and security-building measures, politically binding codes of conduct, and international prohibitions of weapons in space, as well as exploring how projects such as the Woomera Manual and MILAMOS project can best contribute to the long-term peace of space exploration.” (Legal Priorities Project)
  • “Should we be worried about a regulatory ‘race to the bottom,’ where countries compete to be the most favourable places to register private space companies and spacecraft? (Similar to the phenomenon of flags of convenience.)” (80,000 Hours)
  • “What plausible paths exist towards limiting or halting the development and/or deployment of autonomous weapons? Is limiting development desirable on the whole? Does it carry too much risk of pushing development underground or toward less socially-responsible parties?” (80,000 Hours)
  • “How might AI alter power dynamics among relevant actors in the international arena? (great and rising powers, developed countries, developing countries, corporations, international organizations, militant groups, other non-state actors, decentralized networks and movements, individuals, and others).” (Center for a New American Security)
  • “Researching how to change the mindset in countries around the world to incentivise greater transparency and data sharing about infectious outbreaks. Good governance during pandemics requires coordinated responses between countries, but there are many incentives pushing countries in the direction of being secretive about their data. This prevents other countries effectively preparing or collaborating on stopping pandemics in the first place.” (Improving pandemic governance profile)
  • “How does the Chinese government shape its technology policy? What attitudes does it have towards AI (including AI safety), synthetic biology, and regulation of emerging technology?” (From 80,000 Hours, adapted from Brian Tse and Ben Todd, A new recommended career path for effective altruists: China specialist)
  • “The causes of war and drivers of peace, including international trade, international institutions, and cultural and scientific exchanges.” (Founders Pledge report)
  • “How can we best increase communication and coordination within the AI safety community? What are the major constraints that safety faces on sharing information (in particular ones which other fields don’t face), and how can we overcome them?” (Technical AI Safety Research outside of AI)
  • “What are Chinese computer scientists’ views on AI progress and the importance of work on safety? (You might try running a survey similar to this one from 2016, but focusing on AI experts in China.)” (From 80,000 Hours, adapted from Brian Tse and Ben Todd, A new recommended career path for effective altruists: China specialist)
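The game-theoretic question above (see Scott Barrett’s ‘Why Cooperate?’) can be made concrete with a toy model. The sketch below is purely illustrative: the players, strategies and payoff numbers are hypothetical assumptions, not taken from any of the cited research agendas. It frames great-power contributions to a global public good (such as biosafety funding) as a 2×2 game and finds the Nash equilibria by checking every cell for profitable unilateral deviations.

```python
# Illustrative sketch only: a 2x2 public-goods game between two
# hypothetical great powers. All payoff numbers are invented for
# illustration; real analyses would estimate them empirically.
from itertools import product

# payoffs[(row_strategy, col_strategy)] = (row payoff, col payoff)
# C = contribute to global risk reduction, D = free-ride
payoffs = {
    ("C", "C"): (3, 3),   # both contribute: best joint outcome
    ("C", "D"): (0, 4),   # contributor bears the cost alone
    ("D", "C"): (4, 0),
    ("D", "D"): (1, 1),   # neither contributes: risk stays high
}

def is_nash(row, col):
    """True if neither player gains by unilaterally switching strategy."""
    r, c = payoffs[(row, col)]
    best_r = max(payoffs[(a, col)][0] for a in "CD")
    best_c = max(payoffs[(row, b)][1] for b in "CD")
    return r == best_r and c == best_c

equilibria = [cell for cell in product("CD", repeat=2) if is_nash(*cell)]
print(equilibria)  # [('D', 'D')]
```

With these (assumed) payoffs the unique equilibrium is mutual free-riding, even though mutual contribution pays both players more — the classic prisoner’s-dilemma structure that motivates research into treaties, verification and other mechanisms for changing the payoffs.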

Further resources

We have listed a few resources below but encourage you to explore our related profiles for further resources.

To learn more about Sino-Western coordination, see 80,000 Hours’ posts Improving China-Western Coordination on Global Catastrophic Risks and China-related AI safety and governance paths. To learn more about how to build a career related to great power coordination, see 80,000 Hours’ post Specialist in Emerging Global Powers.

If you’re interested in working on this research direction, apply for our coaching and we can connect you with researchers already working in this space, who can help you refine your research ideas.

You can also apply to join our community if you’re interested in peer connections with others working in this area. 

Apply for our database of potential supervisors if you’re looking for formal supervision and take a look at our advice on finding a great supervisor for further ideas.

While you should probably prioritise building your research skills and credentials over spending time in a new culture as part of your degree programme, all else being equal, time spent abroad is likely to be useful. The scholarships and degrees below offer the opportunity to spend time in another country during your degree.

Our funding database can help you find potential sources of funding if you’re a PhD student interested in this research direction.

  • ChinAI Newsletter: Jeffrey Ding’s weekly translations of writings from Chinese thinkers on China’s AI landscape. 
  • Sign up for our newsletter to hear about opportunities such as funding, internships and research roles.


This profile was first published 5/12/22 and last updated 22/12/22. Thanks to Hana McMahon-Cole for writing this profile. Thanks to Jenny Xiao and Kwan Yee Ng for helpful feedback. All errors remain our own.

Subscribe to the Future Researchers Newsletter

Subscribe to our Future Researchers Newsletter for key concepts, resources and news related to changing the world with your thesis and long-term research career.

Where next?

Keep exploring our other services and content