Best AI papers explained
A podcast by Enoch H. Kang
550 Episodes
Instacart's Economics Team: A Hybrid Role in Tech
Published: 31.03.2025
Data Mixture Optimization: A Multi-fidelity Multi-scale Bayesian Framework
Published: 31.03.2025
Why MCP won
Published: 31.03.2025
SWEET-RL: Training LLM Agents for Collaborative Reasoning
Published: 31.03.2025
TheoryCoder: Bilevel Planning with Synthesized World Models
Published: 30.03.2025
Driving Forces in AI: Scaling to 2025 and Beyond (Jason Wei, OpenAI)
Published: 29.03.2025
Expert Demonstrations for Sequential Decision Making under Heterogeneity
Published: 28.03.2025
TextGrad: Backpropagating Language Model Feedback for Generative AI Optimization
Published: 27.03.2025
MemReasoner: Generalizing Language Models on Reasoning-in-a-Haystack Tasks
Published: 27.03.2025
RAFT: In-Domain Retrieval-Augmented Fine-Tuning for Language Models
Published: 27.03.2025
Inductive Biases for Exchangeable Sequence Modeling
Published: 26.03.2025
InverseRLignment: LLM Alignment via Inverse Reinforcement Learning
Published: 26.03.2025
Prompt-OIRL: Offline Inverse RL for Query-Dependent Prompting
Published: 26.03.2025
Alignment from Demonstrations for Large Language Models
Published: 25.03.2025
Q♯: Distributional RL for Optimal LLM Post-Training
Published: 18.03.2025
Scaling Test-Time Compute Without Verification or RL is Suboptimal
Published: 14.03.2025
Optimizing Test-Time Compute via Meta Reinforcement Fine-Tuning
Published: 14.03.2025
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Published: 14.03.2025
Revisiting Superficial Alignment Hypothesis
Published: 14.03.2025
Cut through the noise. We curate and break down the most important AI papers so you don’t have to.
