An AI safety plan is a description of proposed actions people could take in the world to make AI go well. This is NOT meant to be a political agenda, and we hope to keep it from turning into one, although some political action may prove unavoidable. Something like this idea is described in section 1 of https://www.bridgespan.org/getmedia/16a72306-0675-4abd-9439-fbf6c0373e9b/strong-field-framework.pdf [TODO: we haven’t described this super clearly - need to expand on this for clarity].

Some SAFE properties for this plan (the mnemonic is not too forced, I promise):

Why do we need criteria?

Earlier notes

Criteria

Notes on Good Strategy Bad Strategy