The end of the "best open LLM"

Interconnects - A podcast by Nathan Lambert

Modeling the compute versus performance tradeoff of many open LLMs.

This is AI generated audio with Python and 11Labs.

Source code: https://github.com/natolambert/interconnects-tools
Original post: https://www.interconnects.ai/p/compute-efficient-open-llms

0:00 The end of the "best open LLM"
3:05 Compute efficient open LLMs

Fig 1: https://huggingface.co/datasets/natolambert/interconnects-figures/resolve/main/scaling/img_004.jpeg
Fig 2: https://huggingface.co/datasets/natolambert/interconnects-figures/resolve/main/scaling/img_009.png
Fig 3: https://huggingface.co/datasets/natolambert/interconnects-figures/resolve/main/scaling/img_014.png
Fig 4: https://huggingface.co/datasets/natolambert/interconnects-figures/resolve/main/scaling/img_016.png
Fig 5: https://huggingface.co/datasets/natolambert/interconnects-figures/resolve/main/scaling/img_018.png
Fig 6: https://huggingface.co/datasets/natolambert/interconnects-figures/resolve/main/scaling/img_020.png
Fig 7: https://huggingface.co/datasets/natolambert/interconnects-figures/resolve/main/scaling/img_022.png
Fig 8: https://huggingface.co/datasets/natolambert/interconnects-figures/resolve/main/scaling/img_024.png
Fig 9: https://huggingface.co/datasets/natolambert/interconnects-figures/resolve/main/scaling/img_028.png

Get full access to Interconnects at www.interconnects.ai/subscribe
