In January, I spent over 100 hours researching AI safety strategy.
This list of routes to AI safety success (updated June 2025) is a key output of that research. It describes the main paths by which AI safety efforts could succeed. I'd guess that understanding the strategies in this document would place you in the top 10% of the AI safety community at AI safety strategy.

Get "good" international coalitions to be the first to develop advanced AI, and to do so safely. They would then use this AI to shift the world into a good state: for example, by accelerating one of the other paths below.
A "CERN for AI" is a common model for this. Such proposals usually envision shared infrastructure and expertise for frontier AI research, combining the efforts of many nation states.
This can be approached from two angles:
Resources: