Stop "reinventing" everything to "solve" alignment
Interconnects - A podcast by Nathan Lambert
Integrating ideas from outside computer science into reinforcement learning from human feedback can give us the models we want.

This is AI-generated audio with Python and 11Labs.

Source code: https://github.com/natolambert/interconnects-tools
Original post: https://www.interconnects.ai/p/reinventing-llm-alignment

Chapters:
0:00 Stop "reinventing" everything to "solve" AI alignment
2:19 Social Choice for AI Alignment: Dealing with Diverse Human Feedback
7:03 OLMo 1.7 7B: A truly open model with actually good benchmarks

Figures:
Fig 1: https://huggingface.co/datasets/natolambert/interconnects-figures/resolve/main/reinvention/img_013.png
Fig 2: https://huggingface.co/datasets/natolambert/interconnects-figures/resolve/main/reinvention/img_015.png
Fig 3: https://huggingface.co/datasets/natolambert/interconnects-figures/resolve/main/reinvention/img_018.png
Fig 4: https://huggingface.co/datasets/natolambert/interconnects-figures/resolve/main/reinvention/img_024.png
Fig 5: https://huggingface.co/datasets/natolambert/interconnects-figures/resolve/main/reinvention/img_027.png

Get full access to Interconnects at www.interconnects.ai/subscribe