• Forecasting is hard
    • Most AI experts are not forecasting experts
    • Forecasting experts struggle given limited reference class data
    • Training AI experts to be better forecasters, and forecasters to know more about AI, didn’t help
  • Scenario planning may be a better alternative
    • Identify plausible futures (scenarios)
      • Can vary these along strategic parameters (e.g. time to TAI, takeoff speed, difficulty of alignment, paradigm of TAI). These parameters are also known as variables or dimensions.
        • This can blow up to many different scenarios. E.g. with 10 parameters of 5 values each, you have 5^10 = 9,765,625 different scenarios!
          • Can evaluate each parameter separately
          • Can focus on most relevant clusters of parameters (Kilian et al. did this with experts)
          • Can eliminate implausible combinations of values
          • Can focus on most relevant (e.g. for the actor planning to use the strategy), most severe or most likely parameter values
            • Many parameters affect each other and the overall risk in predictable ways, so one can build a model to identify the most severe scenarios (Clarke et al.); see the sketch after the parameter list below
        • Potential parameters
          • Time to TAI
          • Takeoff speed
          • Alignment difficulty
          • Key inputs: is data, compute or algorithms the decisive factor?
          • Number of actors
          • Types of actors: private companies, nation states, world coalitions?
          • Relationship between actors: competitive (e.g. race) or cooperative
          • Diffusion of models
          • Paradigm or architecture of TAI
          • Primary risk class: e.g. misuse, misalignment, malfunction
          • Region: where is TAI first developed?
          • Corporate governance: how robust is governance at AI companies?
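        • A minimal sketch of the workflow above (enumerate the parameter grid, eliminate implausible combinations, rank the rest by severity). All parameter values, the plausibility rule, and the severity weights here are invented for illustration; they are not taken from Clarke et al. or the article:

```python
from itertools import product

# Hypothetical parameter grid: the parameter names echo the list above, but
# the specific values are invented for illustration.
PARAMETERS = {
    "time_to_tai": ["<5y", "5-15y", ">15y"],
    "takeoff_speed": ["slow", "moderate", "fast"],
    "alignment_difficulty": ["easy", "moderate", "hard"],
    "num_actors": ["one", "few", "many"],
    "actor_relationship": ["cooperative", "competitive"],
}
NAMES = list(PARAMETERS)

# Full Cartesian product: the combinatorial blow-up described above
# (here 3 * 3 * 3 * 3 * 2 = 162 scenarios; with 10 parameters of
# 5 values each it would be 5**10 = 9,765,625).
all_scenarios = [dict(zip(NAMES, combo)) for combo in product(*PARAMETERS.values())]
print(len(all_scenarios), "raw scenarios")

def is_plausible(s: dict) -> bool:
    """Eliminate implausible combinations of values (invented rule:
    a fast takeoff leaves no time for many actors to catch up)."""
    return not (s["takeoff_speed"] == "fast" and s["num_actors"] == "many")

plausible = [s for s in all_scenarios if is_plausible(s)]

# Toy severity model in the spirit of the Clarke et al. point above:
# each parameter value gets a weight, and a scenario's severity is the sum.
SEVERITY = {
    "<5y": 3, "5-15y": 2, ">15y": 1,
    "slow": 1, "moderate": 2, "fast": 3,
    "easy": 1, "hard": 3,
    "one": 1, "few": 2, "many": 3,
    "cooperative": 1, "competitive": 2,
}

def severity(s: dict) -> int:
    return sum(SEVERITY.get(v, 2) for v in s.values())

# Focus on the most severe plausible scenarios.
for s in sorted(plausible, key=severity, reverse=True)[:3]:
    print(severity(s), s)
```

        • The additive score is a deliberate simplification; a real model would capture interactions between parameters (e.g. a fast takeoff amplifying the risk from a competitive race) rather than summing independent weights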
  • Categorizing risks
    • CAIS: malicious use, AI race, organizational risks, rogue AIs
    • Kilian et al.: intentional, structural, accidental and agential
    • Vold and Harris: accidental, structural and misuse
    • Anthropic RSP: misuse, autonomy & replication
    • Threats that aren’t directly existential are still relevant
      • May exacerbate x-risks, e.g. disinformation hindering the response to an x-risk
      • May reduce x-risks, e.g. a warning shot that makes a moratorium more feasible
  • Discusses theories of victory by Hobbhahn et al. and Rauker and Aird. The article already summarizes those resources well, so I won’t reproduce them here.