Apple: Embarrassingly Simple Self-Distillation Improves Code Generation
Can a large language model (LLM) improve at code generation using only its own raw outputs, without a verifier, a teacher model, or reinforcement learning? We answer in the affirmative with ...
https://arxiv.org/abs/2604.01193
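
The abstract above is truncated, so the paper's exact recipe isn't shown here. As a rough, hedged illustration of what verifier-free self-distillation on a model's own raw outputs could look like, here is a minimal Python sketch using Hugging Face transformers; the model name, prompts, sampling settings, and learning rate are all illustrative assumptions, not the paper's method.

```python
# Illustrative sketch only: self-distillation where a model is fine-tuned on its
# own sampled completions, with no verifier, teacher model, or RL. All names and
# hyperparameters are assumptions, not the recipe from the paper.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "some-code-llm"  # placeholder for any causal code LLM
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompts = ["def fibonacci(n):", "def is_prime(n):"]  # toy coding prompts

# 1) Sample raw completions from the model itself (no filtering or verification).
samples = []
model.eval()
with torch.no_grad():
    for p in prompts:
        ids = tok(p, return_tensors="pt").input_ids
        out = model.generate(
            ids,
            do_sample=True,
            temperature=0.8,
            max_new_tokens=128,
            num_return_sequences=4,
            pad_token_id=tok.eos_token_id,
        )
        samples += [tok.decode(o, skip_special_tokens=True) for o in out]

# 2) Fine-tune the same model on its own outputs with the standard causal-LM loss.
optim = torch.optim.AdamW(model.parameters(), lr=1e-5)
model.train()
for text in samples:
    batch = tok(text, return_tensors="pt", truncation=True, max_length=512)
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optim.step()
    optim.zero_grad()
```

In this generic form, the distillation signal comes purely from the model's own sampled text; whatever makes the paper's approach work beyond this baseline is in the full text at the link above.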