“AI Safety’s Talent Pipeline is Over-optimised for Researchers” by Christopher Clay
EA Forum Podcast (All audio) - A podcast by EA Forum Team

Thank you to all the wonderful people who've taken the time to share their thoughts with me: Will Aldred, Jonah Boucher, Deena Englander, Dewi Erwan, Bella Forristal, Patrick Gruban, William Gunn, Tobias Häberli, James Herbert, Adam Jones, Michael Kerrison, Schäfer Kleinert, Chris Leong, Cheryl Luo, Sobanan Narenthiran, Alicia Pollard, Will Saunter, Nate Simmons, Sam Smith, Chengcheng Tan, Simon Taylor, Ben West, Peter Wildeford, Jian Xin. All opinions are my own.

Executive Summary

There is broad consensus that research is not the most neglected career in AI Safety, yet almost all entry programs are targeted at researchers. This creates a number of problems:

- People who are tail-case at research are unlikely to be tail-case in other careers.
- Researchers have an advantage in demonstrating 'value alignment' in hiring rounds.
- Young people trying to choose careers are biased towards aiming for research.

Introduction

When I finished the Non-Trivial Fellowship, I was [...]

---

Outline:

(00:45) Executive Summary
(01:17) Introduction
(03:12) We Need Non-Research AI Safety Talent
(05:01) Most Talent Pipelines are for AI Safety Research
(05:53) This Creates the Wrong Filter for Non-Research Roles
(07:03) This Creates a Feedback Loop of Status
(07:48) Research Fellowships have a Bias in Hiring
(08:55) Conclusion
(09:32) FAQ
(10:29) Further Questions
(11:19) Bycatch; Addendum

---

First published: August 30th, 2025

Source: https://forum.effectivealtruism.org/posts/m5dDrMfHjLtMu293G/ai-safety-s-talent-pipeline-is-over-optimised-for

---

Narrated by TYPE III AUDIO.