Best AI papers explained
A podcast by Enoch H. Kang
528 Episodes
NoWag: Unified Compression for Large Language Models
Published: 26.04.2025
Optimal Tool Calls in Language Model Reasoning
Published: 26.04.2025
Data Selection for Empirical Risk Minimization
Published: 26.04.2025
LoRe: Low-Rank Reward Modeling for Personalized LLMs
Published: 26.04.2025
ParaPO: Reducing Language Model Verbatim Reproduction
Published: 26.04.2025
Test-Time RL: Self-Evolving LLMs via Majority Voting Rewards
Published: 25.04.2025
Tina: Tiny LoRA Reasoning Models
Published: 25.04.2025
Evaluating large language models in theory of mind tasks
Published: 25.04.2025
QUEST: Quality Sampling for Machine Translation
Published: 24.04.2025
Offline Preference Learning via Simulated Trajectory Feedback
Published: 24.04.2025
Reasoning Elicitation in Language Models via Counterfactual Feedback
Published: 24.04.2025
Eliciting Human Preferences with Language Models
Published: 24.04.2025
Sub-Optimal Data for Human-in-the-Loop Reinforcement Learning
Published: 24.04.2025
γ-Bench: Evaluating LLMs in Multi-Agent Games
Published: 24.04.2025
DRAFT: Self-Driven LLM Tool Mastery via Documentation Refinement
Published: 24.04.2025
Optimal Prediction Sets for Enhanced Human-AI Accuracy
Published: 24.04.2025
Self-Correction via Reinforcement Learning for Language Models
Published: 24.04.2025
Tractable Multi-Agent Reinforcement Learning through Behavioral Economics
Published: 24.04.2025
Trust or Escalate: LLM Judges with Provable Guarantees for Human Agreement
Published: 24.04.2025
Iterative Nash Policy Optimization for Language Model Alignment
Published: 24.04.2025
Cut through the noise. We curate and break down the most important AI papers so you don’t have to.
