
The DeepSeek-Prover project aims to advance large language model capabilities in formal theorem proving by addressing the scarcity of training data. It uses autoformalization to convert informal high-school and undergraduate-level math competition problems into formal statements, yielding a dataset of 8 million synthetic proofs. Quality filtering and formal verification in Lean 4 ensure the reliability of the data. An iterative process of proof generation and model fine-tuning progressively enhances the prover, leading to state-of-the-art performance on the miniF2F and FIMO benchmarks, outperforming models such as GPT-4.
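
To make the pipeline concrete, here is a hypothetical illustration (not drawn from the actual DeepSeek-Prover dataset) of what an autoformalized statement might look like: the informal problem "show that n² + n is even for every natural number n" becomes a Lean 4 theorem, where the model's task is to replace the `sorry` placeholder with a proof that the Lean kernel then either accepts or rejects.

```lean
-- Hypothetical autoformalization of an informal competition-style problem.
-- Autoformalization produces the formal statement; the prover model then
-- searches for a proof term to replace `sorry`, and Lean's kernel serves
-- as the verifier that filters out incorrect candidate proofs.
theorem sq_add_self_even (n : Nat) : 2 ∣ n ^ 2 + n := by
  sorry
```

Only proofs that the kernel certifies are kept, which is what allows synthetic data generated at this scale to remain reliable enough for training.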