EA - ML Safety Scholars Summer 2022 Retrospective by ThomasW


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: ML Safety Scholars Summer 2022 Retrospective, published by ThomasW on November 1, 2022 on The Effective Altruism Forum.

TLDR

This is a report on the Machine Learning Safety Scholars (MLSS) program, organized over the summer by the Center for AI Safety. 63 students graduated from MLSS; the list of graduates and final projects is here. The program was intense, time-consuming, and at times very difficult, so graduating from it is a significant accomplishment.

Overall, I think the program went quite well and that many students have noticeably accelerated their AI safety careers. There are certainly many areas for improvement in a future iteration, and many are detailed here. We plan to conduct follow-up surveys to determine the longer-run effects of the program.

This post contains three main sections:

This TLDR, which is meant for people who just want to know what this document is and see our graduates list.

The executive summary, which includes a high-level overview of MLSS. This might be of interest to students considering doing MLSS in the future or anyone else interested in MLSS.

The full report, which was mainly written for future MLSS organizers, but I'm publishing it here because it might be useful to others running similar programs.

The report was written by Thomas Woodside, the project manager for MLSS. "I" refers to Thomas and does not necessarily represent the opinion of the Center for AI Safety, its CEO Dan Hendrycks, or any of our funders.

Visual Executive Summary

MLSS Overview

MLSS was a summer program for mostly undergraduate students that aimed to teach the foundations of machine learning, deep learning, and ML safety. The program ended up being ten weeks long and included an optional final project. It incorporated office hours, discussion sections, speaker events, conceptual readings, paper readings, written assignments, and programming assignments. You can see our full curriculum here.

Survey Results

All MLSS graduates filled out an anonymous survey. What follows is a mostly visual depiction of the program through the lens of these survey results.

Overall Experience in MLSS

Of course, this sample is biased toward graduates of MLSS, since we required them to complete the survey to receive their final stipends (and non-graduates didn't get final stipends). However, it seems clear from the way people responded to this survey that the majority had quite positive opinions of our program.

We also asked students about their future plans:

The results suggest that many students are actively trying to work in AI safety and that their participation in MLSS helped them become more confident in that choice. MLSS did decrease some students' desire to research AI safety. We do not think this is necessarily a bad thing, as many students were using MLSS to test their fit; presumably, some are not great fits for AI safety research but might be able to contribute in some other way.

We asked students why they chose to do the program:

We conclude that the stipend was extremely useful to students and allowed many to complete the program who wouldn't otherwise have been able to.
Nearly all graduates said they were interested in learning about ML safety in particular.

We asked students about the quality of the support from their TAs:

We asked students a few questions about what they thought about AI x-risk:

Lastly, we asked how many hours students spent in the course:

Concluding Thoughts

To a large extent, we think these results speak for themselves. Students said they got a lot out of MLSS, and I believe many of them are very interested in pursuing AI safety careers that have been accelerated by the program. The real test of our program, of course, will come when we survey students in the future to see what they are doing and how the course helped...
