Best AI papers explained
A podcast by Enoch H. Kang
528 Episodes
Stratified Prediction-Powered Inference for Hybrid Language Model Evaluation
Published: 09.05.2025
Accelerating Unbiased LLM Evaluation via Synthetic Feedback
Published: 09.05.2025
Prediction-Powered Statistical Inference Framework
Published: 09.05.2025
Optimizing Chain-of-Thought Reasoners via Gradient Variance Minimization in Rejection Sampling and RL
Published: 09.05.2025
RM-R1: Reward Modeling as Reasoning
Published: 09.05.2025
Reexamining the Aleatoric and Epistemic Uncertainty Dichotomy
Published: 08.05.2025
Decoding Claude Code: Terminal Agent for Developers
Published: 07.05.2025
Emergent Strategic AI Equilibrium from Pre-trained Reasoning
Published: 07.05.2025
Benefiting from Proprietary Data with Siloed Training
Published: 06.05.2025
Advantage Alignment Algorithms
Published: 06.05.2025
Asymptotic Safety Guarantees Based On Scalable Oversight
Published: 06.05.2025
What Makes a Reward Model a Good Teacher? An Optimization Perspective
Published: 06.05.2025
Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems
Published: 06.05.2025
Identifiable Steering via Sparse Autoencoding of Multi-Concept Shifts
Published: 06.05.2025
You Are What You Eat - AI Alignment Requires Understanding How Data Shapes Structure and Generalisation
Published: 06.05.2025
Interplay of LLMs in Information Retrieval Evaluation
Published: 03.05.2025
Trade-Offs Between Tasks Induced by Capacity Constraints Bound the Scope of Intelligence
Published: 03.05.2025
Toward Efficient Exploration by Large Language Model Agents
Published: 03.05.2025
Getting More Juice Out of the SFT Data: Reward Learning from Human Demonstration Improves SFT
Published: 02.05.2025
Self-Consuming Generative Models with Curated Data
Published: 02.05.2025
Cut through the noise. We curate and break down the most important AI papers so you don’t have to.
