550 Episodes

  1. LLM-based Conversational Recommendation Agents with Collaborative Verbalized Experience

    Published: 23.08.2025
  2. Signal and Noise: Evaluating Language Model Benchmarks

    Published: 23.08.2025
  3. Breaking Feedback Loops in Recommender Systems with Causal Inference

    Published: 21.08.2025
  4. RAG is Dead, Context Engineering is King: Building Reliable AI Systems

    Published: 20.08.2025
  5. A Survey of Personalization: From RAG to Agent

    Published: 20.08.2025
  6. Facilitating the Adoption of Causal Inference Methods Through LLM-Empowered Co-Pilot

    Published: 19.08.2025
  7. Performance Prediction for Large Systems via Text-to-Text Regression

    Published: 16.08.2025
  8. Sample More to Think Less: Group Filtered Policy Optimization for Concise Reasoning

    Published: 15.08.2025
  9. DINOv3: Vision Models for Self-Supervised Learning

    Published: 15.08.2025
  10. Agent Lightning: Training Any AI Agents with Reinforcement Learning

    Published: 14.08.2025
  11. Computational-Statistical Tradeoffs at the Next-Token Prediction Barrier

    Published: 14.08.2025
  12. From Model Weights to Agent Workflows: Charting the New Frontier of Optimization in Large Language Models

    Published: 12.08.2025
  13. Is Chain-of-Thought Reasoning a Mirage?

    Published: 12.08.2025
  14. Agentic Web: Weaving the Next Web with AI Agents

    Published: 11.08.2025
  15. The Assimilation-Accommodation Gap in LLM Intelligence

    Published: 10.08.2025
  16. The Minimalist AI Kernel: A New Frontier in Reasoning

    Published: 06.08.2025
  17. Statistical Rigor for Interpretable AI

    Published: 06.08.2025
  18. Full-Stack Alignment: Co-Aligning AI and Institutions with Thick Models of Value

    Published: 04.08.2025
  19. A foundation model to predict and capture human cognition

    Published: 04.08.2025
  20. Generative Recommendation with Semantic IDs: A Practitioner’s Handbook

    Published: 04.08.2025

Cut through the noise. We curate and break down the most important AI papers so you don’t have to.
