
The "Why We Think" from Lilian Weng, examines improving language models by allocating more computation at test time, drawing an analogy to human "slow thinking" or System 2. By treating computation as a resource, the aim is to design systems that can utilize this test-time effort effectively for better performance. Key approaches involve generating intermediate steps like Chain-of-Thought, employing decoding methods such as parallel sampling and sequential revision, using reinforcement learning to enhance reasoning, enabling external tool use, and implementing adaptive computation time. This allows models to spend more resources on analysis, similar to human deliberation, to achieve improved results.