528 Episodes

  1. Stratified Prediction-Powered Inference for Hybrid Language Model Evaluation

Published: 09.05.2025
  2. Accelerating Unbiased LLM Evaluation via Synthetic Feedback

Published: 09.05.2025
  3. Prediction-Powered Statistical Inference Framework

Published: 09.05.2025
  4. Optimizing Chain-of-Thought Reasoners via Gradient Variance Minimization in Rejection Sampling and RL

Published: 09.05.2025
  5. RM-R1: Reward Modeling as Reasoning

Published: 09.05.2025
  6. Reexamining the Aleatoric and Epistemic Uncertainty Dichotomy

Published: 08.05.2025
  7. Decoding Claude Code: Terminal Agent for Developers

Published: 07.05.2025
  8. Emergent Strategic AI Equilibrium from Pre-trained Reasoning

Published: 07.05.2025
  9. Benefiting from Proprietary Data with Siloed Training

Published: 06.05.2025
  10. Advantage Alignment Algorithms

Published: 06.05.2025
  11. Asymptotic Safety Guarantees Based On Scalable Oversight

Published: 06.05.2025
  12. What Makes a Reward Model a Good Teacher? An Optimization Perspective

Published: 06.05.2025
  13. Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems

Published: 06.05.2025
  14. Identifiable Steering via Sparse Autoencoding of Multi-Concept Shifts

Published: 06.05.2025
  15. You Are What You Eat - AI Alignment Requires Understanding How Data Shapes Structure and Generalisation

Published: 06.05.2025
  16. Interplay of LLMs in Information Retrieval Evaluation

Published: 03.05.2025
  17. Trade-Offs Between Tasks Induced by Capacity Constraints Bound the Scope of Intelligence

Published: 03.05.2025
  18. Toward Efficient Exploration by Large Language Model Agents

Published: 03.05.2025
  19. Getting More Juice Out of the SFT Data: Reward Learning from Human Demonstration Improves SFT

Published: 02.05.2025
  20. Self-Consuming Generative Models with Curated Data

Published: 02.05.2025
Cut through the noise. We curate and break down the most important AI papers so you don’t have to.