RLHF Roundup: Trying to get good at PPO, charting RLHF's impact, RewardBench retrospective, and a reward model competition

Interconnects - A podcast by Nathan Lambert

Things to be aware of if you work on language model fine-tuning.

This is AI-generated audio with Python and 11Labs.

Source code: https://github.com/natolambert/interconnects-tools
Original post: https://www.interconnects.ai/p/rlhf-roundup-2024

Timestamps:
00:00 RLHF Roundup: Trying to get good at PPO, charting RLHF's impact, RewardBench retrospective, and a reward model competition
04:32 How big is the impact of RLHF relative to pretraining?
05:54 RewardBench retrospective after 100 models and 90% peak accuracy
09:19 LMSYS's reward modeling competition

Figures:
Fig 1: https://huggingface.co/datasets/natolambert/interconnects-figures/resolve/main/rlhf-roundup/img_009.png
Fig 2: https://huggingface.co/datasets/natolambert/interconnects-figures/resolve/main/rlhf-roundup/img_012.png
Fig 3: https://huggingface.co/datasets/natolambert/interconnects-figures/resolve/main/rlhf-roundup/img_017.png
Fig 4: https://huggingface.co/datasets/natolambert/interconnects-figures/resolve/main/rlhf-roundup/img_026.png

Get full access to Interconnects at www.interconnects.ai/subscribe
