At BlueDot Impact, we’ve been surveying the literature widely for existing AI safety plans. This document contains our notes on the most relevant articles, after reading almost everything we could find that seemed potentially relevant (for a more complete list of everything we’ve considered, see here). Our overall conclusion is that, unfortunately, no existing public strategy meets our criteria.

The notes in this document are summaries/hot takes, and are not necessarily ordered or prioritised by importance.

Most useful to read

Maybe useful to read