Best AI papers explained
A podcast by Enoch H. Kang
527 Episodes

Almost Surely Safe LLM Inference-Time Alignment
Published: 23.05.2025

Survey of In-Context Learning Interpretation and Analysis
Published: 23.05.2025

From Decoding to Meta-Generation: Inference-time Algorithms for Large Language Models
Published: 23.05.2025

LLM In-Context Learning as Kernel Regression
Published: 23.05.2025

Where does In-context Learning Happen in Large Language Models?
Published: 23.05.2025

Auto-Differentiating Any LLM Workflow: A Farewell to Manual Prompting
Published: 22.05.2025

metaTextGrad: Learning to learn with language models as optimizers
Published: 22.05.2025

Semantic Operators: A Declarative Model for Rich, AI-based Data Processing
Published: 22.05.2025

Isolated Causal Effects of Language
Published: 22.05.2025

Sleep-time Compute: Beyond Inference Scaling at Test-time
Published: 22.05.2025

J1: Incentivizing Thinking in LLM-as-a-Judge
Published: 22.05.2025

ShiQ: Bringing back Bellman to LLMs
Published: 22.05.2025

Policy Learning with a Natural Language Action Space: A Causal Approach
Published: 22.05.2025

Multi-Objective Preference Optimization: Improving Human Alignment of Generative Models
Published: 22.05.2025

End-to-End Learning for Stochastic Optimization: A Bayesian Perspective
Published: 21.05.2025

TEXTGRAD: Automatic Differentiation via Text
Published: 21.05.2025

Steering off Course: Reliability Challenges in Steering Language Models
Published: 20.05.2025

Past-Token Prediction for Long-Context Robot Policies
Published: 20.05.2025

Recovering Coherent Event Probabilities from LLM Embeddings
Published: 20.05.2025

Systematic Meta-Abilities Alignment in Large Reasoning Models
Published: 20.05.2025
Cut through the noise. We curate and break down the most important AI papers so you don’t have to.
