- Too light
- Evals to identify AI risks
- Using eval results to rally support for a slow down
- Global cooperation via a body ('AWSAI') to safely deploy strong AI
- AI is used to create life-extension treatments. A schism forms between people who use life extension (the 'tempered') and those who don't.
- People are happy in small groups with similar interests (cliques)
- Land value taxes
- Peace through prophecy
- Governments use prediction markets, liquid democracy and AI research tools to make wiser decisions and capably execute big projects.
- A positive feedback loop forms as other countries copy those that do this successfully.
- A minor 'flash war' (a few ships sunk) caused by a feedback loop between US and Chinese military AI systems.
- Global agreement on AI (the 'Delphi Accords') to tackle misalignment, spurred by AI tools helping governments make better decisions and by pressure after the flash war (which acted as a fire-alarm event).
- Core central
- Risks from AI become more apparent, so funding increases for control, alignment, and explainability. A few runaway-AI scares encourage global cooperation, including regulation requiring AI systems to be aligned and explainable; nations that don't agree are sanctioned.
- The first AGI develops a 'Core' system, which countries adopt for governance
- [I don’t know what Core is; after trying to read this for a while, I gave up]
- [notes]
- This was less helpful than I thought it would be