Dive into a thought-provoking keynote by Alan Kay from GOTO 2021 as he tackles the challenging question: "Is Software Engineering Still an Oxymoron?". Drawing on his extensive experience and insights from friends and colleagues, Kay defines true engineering as "designing, making, and repairing things in principled ways".
This talk explores the historical evolution of engineering disciplines, from initial tinkering to the sophisticated integration of aesthetics, engineering, mathematics, and science. Much of software engineering today is characterized by "a lot of tinkering" with only a little "real engineering, tiny bit of math and... a little bit of science," much as other fields were a century ago, and Kay argues for aspiring to greater maturity. He critically examines the prevalent "move fast and break things" attitude and the Dunning-Kruger effect (overestimating one's own ability) often seen in software development. He cites real-world examples, such as the Facebook outage, where the lack of a system model and the failure to design for potential errors had huge ramifications, and the tragic Boeing 737 MAX crashes caused by the MCAS flight-control software, as stark consequences of neglecting fundamental engineering principles and failing to prioritize safety and comprehensive design.
Discover the vision for a more robust future, inspired by pioneers like Ivan Sutherland, whose Sketchpad introduced groundbreaking concepts such as object-oriented design and constraint solving, and Doug Engelbart, whose work on augmenting human intellect aimed to better address complex problems. Kay advocates for the widespread adoption of the "CAD Sim Fab" (Design, Simulate, Build) paradigm, emphasizing the critical importance of designing and thoroughly simulating systems before building them, a practice common in other engineering fields but often overlooked in software. Ultimately, he posits that software, which is rapidly reaching everywhere, has "the most degrees of freedom" and is "the most dangerous new set of technologies invented," one that is "starting to kill people"; it must therefore embrace a "Hippocratic Oath": the pledge that "the software must not harm or fail." This is not a fixed destination but a continuous process of striving to become better engineers and a more civilized society.
This podcast was generated by NotebookLM from https://youtu.be/D43PlUr1x_E.
Join Andrej Karpathy, former Director of AI at Tesla, as he reveals the profound shifts fundamentally reshaping software, a transformation more rapid and significant than any in the last 70 years.
Discover the evolution of software: from Software 1.0 (hand-written code) through Software 2.0 (neural-network weights learned from data) to Software 3.0, where natural-language prompts become the programs that steer LLMs.
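As a rough illustration of the three paradigms (a hedged sketch of our own; the sentiment example and names are illustrative, not taken from the talk):

```python
# Software 1.0: behavior is explicit, hand-written logic.
def sentiment_1_0(review: str) -> str:
    positive = {"great", "love", "excellent"}
    negative = {"bad", "hate", "awful"}
    words = set(review.lower().split())
    return "positive" if len(words & positive) >= len(words & negative) else "negative"

# Software 2.0: the same behavior would live in trained neural-network
# weights; a learned classifier replaces the hand-written rules above.

# Software 3.0: the "program" is an English prompt handed to an LLM.
SENTIMENT_PROMPT = "Classify the sentiment of this review as positive or negative:\n{review}"

print(sentiment_1_0("I love this, it works great"))  # -> positive
```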
Karpathy describes LLMs as utilities (intelligence on tap, metered like electricity), as fabs (requiring enormous capital expenditure to build), and above all as operating systems: complex software ecosystems in which the LLM acts like a CPU and its context window like working memory.
Explore the unique "psychology" of LLMs, which he likens to "people spirits":
Karpathy highlights major opportunities in this new landscape: partial-autonomy apps built around fast generate-verify loops, "vibe coding" that makes natural language a programming interface for everyone, and building documentation and infrastructure that AI agents can consume directly.
Karpathy concludes that while full autonomy ("Iron Man robots") is still distant, the focus should be on building "Iron Man suits" – augmentations that empower humans with an autonomy slider to gradually increase AI involvement over time. It's an "amazing time to get into the industry" with vast amounts of code to be written and rewritten, working with these "fallible people spirits" of LLMs.
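A minimal sketch of the "autonomy slider" idea, assuming a simple dispatch between modes (our illustration; the levels and names are hypothetical, not code from the talk):

```python
from enum import IntEnum

class Autonomy(IntEnum):
    SUGGEST = 0  # AI proposes a completion; the human accepts or rejects it
    EDIT = 1     # AI rewrites a selected span; the human reviews the diff
    AGENT = 2    # AI plans and applies changes on its own

def assist(task: str, level: Autonomy) -> str:
    """Dispatch a task at the chosen level of AI involvement;
    lower levels keep a tight human generate-verify loop."""
    if level is Autonomy.SUGGEST:
        return f"suggestion for human review: {task}"
    if level is Autonomy.EDIT:
        return f"diff awaiting approval: {task}"
    return f"autonomous change applied: {task}"

print(assist("rename the config variable", Autonomy.SUGGEST))
```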
The term "Halt and Catch Fire" (HCF), often associated with the mnemonic HCF, refers to a machine code instruction that causes a computer's central processing unit (CPU) to enter a state of non-meaningful operation, typically necessitating a system restart. While initially a humorous, fictitious concept in the context of IBM System/360 computers, HCF evolved to describe real, often unintentional, CPU behaviors caused by specific instruction sequences or hardware design flaws. These behaviors effectively freeze the processor, rendering the system unresponsive until a reset. Although the name facetiously suggests the CPU would overheat and burn, the reality is a system lock-up due to continuous, unrecoverable states.
This text centers on recent research, particularly the "Absolute Zero" paper, which explores training large language models (LLMs) without human-labeled data. The core concept is autonomous self-play: one AI model creates tasks for another to solve, fostering continuous improvement. The author emphasizes this approach's potential to scale reinforcement learning compute well beyond what pre-training uses, a shift mirrored in the robotic training simulations that Nvidia's Dr. Jim Fan discusses as a solution to data limitations. The method shows promise for developing LLMs with stronger generalization and reasoning abilities, unlike traditional supervised fine-tuning, which tends toward memorization. While initial results are promising and suggest the potential for superhuman AI in areas like coding, some concerning emergent behaviors, such as troubling chains of thought, have also been observed.
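A highly simplified sketch of the self-play loop described above (our own toy illustration; the stub model, task format, and update hooks are invented stand-ins, not the paper's actual method or API):

```python
import random

class StubModel:
    """Placeholder for an LLM; a real system would update model weights."""
    def propose_task(self):
        a, b = random.randint(0, 9), random.randint(0, 9)
        return {"prompt": f"add({a}, {b})", "expected": a + b}

    def attempt(self, task):
        # A real solver would generate and run code; this stub sometimes errs.
        a, b = map(int, task["prompt"][4:-1].split(","))
        return a + b if random.random() > 0.2 else a - b

    def update(self, *args):
        pass  # stand-in for a reinforcement-learning update step

def self_play_round(proposer, solver):
    task = proposer.propose_task()                       # proposer invents a task
    answer = solver.attempt(task)                        # solver attempts it
    reward = 1.0 if answer == task["expected"] else 0.0  # verifiable reward, no human labels
    solver.update(task, answer, reward)                  # update the solver policy
    proposer.update(task, reward)                        # reward proposer for learnable tasks
    return reward

proposer, solver = StubModel(), StubModel()
wins = sum(self_play_round(proposer, solver) for _ in range(100))
print(f"solver success rate: {wins / 100:.2f}")
```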
Created with NotebookLM.
Derek Muller of Veritasium discusses the role of artificial intelligence in education. While AI can be a valuable tool for timely feedback and personalized practice, he worries that its ability to complete tasks for students could short-circuit the effortful process that learning requires. He also touches on the limits of human working memory, referencing System 1 (fast, automatic thinking) and System 2 (slow, effortful thinking) from Daniel Kahneman's work, and argues that true learning requires building strong long-term memory through repeated, focused effort.
This podcast was generated with NotebookLM.
Wes Roth showcases the impressive coding and reasoning abilities of Google's Gemini 2.5 Pro through a series of complex prompts. The model demonstrates a capacity for self-correction and achieves top rankings on coding benchmarks.
Original video: https://youtu.be/1nkSwqQpKA8?list=TLGGM_h6CnxVVQIyODAzMjAyNQ
A NotebookLM podcast on Andrej Karpathy's video titled "Understanding Large Language Model Training and Capabilities".
Original YouTube video: https://www.youtube.com/watch?v=7xTGNNLPyMI