Brownstone Journal
Brownstone Institute
49 episodes
16 hours ago
Daily readings from Brownstone Institute authors, contributors, and researchers on public health, philosophy, science, and economics.
News Commentary
News
Who's Afraid of the AI Boogeyman?
Brownstone Journal
15 minutes 23 seconds
1 week ago
By Bert Olivier at Brownstone.org.
It is becoming ever more obvious that many people fear rapidly developing Artificial Intelligence (AI), for various reasons: its supposed superiority to humans at processing and manipulating information, and its adaptability and efficiency in the workplace, which many fear will drive most human beings out of the employment market. Amazon, for example, recently announced that it was replacing 14,000 individuals with AI robots. Alex Valdes writes:
The layoffs are reportedly the largest in Amazon history, and come just months after CEO Andy Jassy outlined his vision for how the company would rapidly ramp up its development of generative AI and AI agents. The cuts are the latest in a wave of layoffs this year as tech giants including Microsoft, Accenture, Salesforce and India's TCS have reduced their workforces by thousands in what has become a frenzied push to invest in AI.
Lest this be too disturbing to tolerate, contrast it with the reassuring statement, from an AI developer no less, that AI agents could not replace human beings. Brian Shilhavy points out:
Andrej Karpathy, one of the founding members of OpenAI, on Friday threw cold water on the idea that artificial general intelligence is around the corner. He also cast doubt on various assumptions about AI made by the industry's biggest boosters, such as Anthropic's Dario Amodei and OpenAI's Sam Altman.
The highly regarded Karpathy called reinforcement learning - arguably the most important area of research right now - 'terrible,' said AI-powered coding agents aren't as exciting as many people think, and said AI cannot reason about anything it hasn't already been trained on.
His comments, from a podcast interview with Dwarkesh Patel, struck a chord with some of the AI researchers we talk to, including those who have also worked at OpenAI and Anthropic. They also echoed comments we heard from researchers at the International Conference on Machine Learning earlier this year.
A lot of Karpathy's criticisms of his own field seem to boil down to a single point: As much as we like to anthropomorphize large language models, they're not comparable to humans or even animals in the way they learn.
For instance, zebras are up and walking around just a few minutes after they're born, suggesting that they're born with some level of innate intelligence, while LLMs have to go through immense trial and error to learn any new skill, Karpathy points out.
This is already comforting, but lest the fear of AI persist, it can be dispelled further by elaborating on the differences between AI and human beings; adequately understood, these differences drive home the realisation that such anxieties are mostly redundant (although others are not, as I shall argue below). The most obvious difference is that AI (for example, ChatGPT) depends on a vast database from which it draws to answer questions, formulating those answers predictively through pattern recognition. Then, as pointed out above, even the most sophisticated AI has to be 'trained' to yield the information one seeks.
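To make that point concrete, here is a deliberately crude sketch, my illustration rather than anything from the article or from how ChatGPT is actually built: a tiny word-pair predictor that can only continue sequences it has already seen in its training data. The training text and the predict helper are invented for the example; the point, as Karpathy suggests above, is that a pattern-matcher confronted with something outside its training data has nothing to draw on.

```python
# Toy illustration (assumption, not the author's or OpenAI's method):
# prediction by pattern recognition over a fixed training corpus.
from collections import Counter, defaultdict

training_text = (
    "freud wrote on civilisation arendt wrote on totalitarianism "
    "freud wrote on the unconscious"
).split()

# Count which word follows which in the training data.
next_word = defaultdict(Counter)
for current, following in zip(training_text, training_text[1:]):
    next_word[current][following] += 1

def predict(word):
    """Return the most frequent continuation seen in training, if any."""
    if word not in next_word:
        return None  # no stored pattern to match: the model is 'stumped'
    return next_word[word].most_common(1)[0][0]

print(predict("freud"))  # -> 'wrote' (a pattern it has seen)
print(predict("zebra"))  # -> None (never appeared in training)
```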
Moreover, unlike humans, it lacks 'direct' access to experiential reality in perceptual, spatiotemporal terms - something I have encountered frequently when confronted by people who draw on ChatGPT to question certain arguments. For example, when I recently gave a talk on how the work of Freud and Hannah Arendt - on civilisation and totalitarianism, respectively - enables one to grasp the character of the globalist onslaught against extant society, with a view to establishing a central, AI-controlled world government, someone in the audience produced a printout of ChatGPT's response to the question of whether these two thinkers could indeed deliver the goods, as it were.
Predictably, it summarised the relevant work of these two thinkers quite adequately, but was stumped...