“Rolling Thresholds for AGI Scaling Regulation” by Larks
EA Forum Podcast (All audio) - A podcast by EA Forum Team
This is a plan for how ASI could be relatively safely developed.

Abstract: A plan that puts all frontier model companies on a unified schedule of model training, evaluation, and approval, with regulatory compliance promoted through market access. This aims to combine (most of) the economic benefits of unrestricted competition but with more safety, (most of) the time-to-think benefits of AI pauses but with better compliance incentives, and (most of) the central oversight of a Manhattan Project but with more freedom and pluralism.

Background: The plan is based on the following worldview, though not all of these claims are cruxes:
- Rushing to ASI by default leads to deceptive and misaligned ASI, with catastrophic consequences for humanity.
- A lot of alignment work will be empirical and requires access to (near) cutting-edge models to work with.
- A lot of progress is driven by increased compute, and it is possible to measure compute in [...] (see the illustrative sketch after these notes)

---

Outline:
(00:39) Background
(02:23) The Plan
(04:51) Advantages
(07:12) Potential problems
(08:06) Do the frontier training runs have to be simultaneous?
(08:51) Quis custodiet ipsos custodes?
(10:13) How does this work internationally?
(12:19) To be determined

---

First published: January 12th, 2025

Source: https://forum.effectivealtruism.org/posts/aKvcNc2DbgAvXHtcw/rolling-thresholds-for-agi-scaling-regulation

---

Narrated by TYPE III AUDIO.
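As a concrete illustration of the "measure compute" premise in the Background above (a minimal sketch, not from the post): training compute is commonly estimated in FLOPs via the 6 * N * D rule of thumb, roughly six FLOPs per parameter per training token, which is one way a regulator could operationalize a compute threshold. The threshold value and model figures below are assumptions for illustration only.

    # Minimal sketch in Python: estimate training compute in FLOPs and
    # compare it against a hypothetical rolling regulatory threshold.

    def training_flops(n_params: float, n_tokens: float) -> float:
        """Approximate total training FLOPs via the common 6 * N * D rule of thumb."""
        return 6.0 * n_params * n_tokens

    # Hypothetical compute ceiling for the current approval round (assumed value).
    CURRENT_THRESHOLD_FLOPS = 1e26

    # Example run: a 70B-parameter model trained on 15T tokens (illustrative numbers).
    run = training_flops(n_params=7e10, n_tokens=1.5e13)
    print(f"Estimated training compute: {run:.2e} FLOPs")
    print("Within current threshold" if run <= CURRENT_THRESHOLD_FLOPS else "Exceeds current threshold")

Under such a scheme, a run like the one above (about 6.3e24 FLOPs) would fall below the assumed ceiling; the post's proposal is that this ceiling would ratchet upward on a unified schedule rather than being fixed.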