LessWrong (Curated & Popular)
LessWrong
658 episodes
17 hours ago
1. I have claimed that one of the fundamental questions of rationality is “what am I about to do and what will happen next?” One of the domains I ask this question the most is in social situations. There are a great many skills in the world. If I had the time and resources to do so, I’d want to master all of them. Wilderness survival, automotive repair, the Japanese language, calculus, heart surgery, French cooking, sailing, underwater basket weaving, architecture, Mexican cooking,...
Technology, Society & Culture, Philosophy
All content for LessWrong (Curated & Popular) is the property of LessWrong and is served directly from their servers with no modification, redirects, or rehosting. The podcast is not affiliated with or endorsed by Podjoint in any way.
Episodes (20/658)
[Linkpost] “Emergent Introspective Awareness in Large Language Models” by Drake Thomas
This is a link post. New Anthropic research (tweet, blog post, paper): We investigate whether large language models can introspect on their internal states. It is difficult to answer this question through conversation alone, as genuine introspection cannot be distinguished from confabulations. Here, we address this challenge by injecting representations of known concepts into a model's activations, and measuring the influence of these manipulations on the model's self-reported states. We f...
1 day ago
3 minutes

[Linkpost] “You’re always stressed, your mind is always busy, you never have enough time” by mingyuan
This is a link post. You have things you want to do, but there's just never time. Maybe you want to find someone to have kids with, or maybe you want to spend more or higher-quality time with the family you already have. Maybe it's a work project. Maybe you have a musical instrument or some sports equipment gathering dust in a closet, or there's something you loved doing when you were younger that you want to get back into. Whatever it is, you can’t find the time for it. And yet you somehow f...
1 day ago
4 minutes

“LLM-generated text is not testimony” by TsviBT
Crosspost from my blog. Synopsis When we share words with each other, we don't only care about the words themselves. We care also—even primarily—about the mental elements of the human mind/agency that produced the words. What we want to engage with is those mental elements. As of 2025, LLM text does not have those elements behind it. Therefore LLM text categorically does not serve the role for communication that is served by real text. Therefore the norm should be that you don't share L...
2 days ago
19 minutes

“Why I Transitioned: A Case Study” by Fiora Sunshine
An Overture Famously, trans people tend not to have great introspective clarity into their own motivations for transition. Intuitively, they tend to be quite aware of what they do and don't like about inhabiting their chosen bodies and gender roles. But when it comes to explaining the origins and intensity of those preferences, they almost universally come up short. I've even seen several smart, thoughtful trans people, such as Natalie Wynn, making statements to the effect that it's imp...
2 days ago
17 minutes

“The Memetics of AI Successionism” by Jan_Kulveit
TL;DR: AI progress and the recognition of associated risks are painful to think about. This cognitive dissonance acts as fertile ground in the memetic landscape, a high-energy state that will be exploited by novel ideologies. We can anticipate cultural evolution will find viable successionist ideologies: memeplexes that resolve this tension by framing the replacement of humanity by AI not as a catastrophe, but as some combination of desirable, heroic, or inevitable outcome. This post mostly ...
4 days ago
21 minutes

“How Well Does RL Scale?” by Toby_Ord
This is the latest in a series of essays on AI Scaling. You can find the others on my site. Summary: RL-training for LLMs scales surprisingly poorly. Most of its gains are from allowing LLMs to productively use longer chains of thought, allowing them to think longer about a problem. There is some improvement for a fixed length of answer, but not enough to drive AI progress. Given that the scaling up of pre-training compute has also stalled, we'll see less AI progress via compute scaling than you ...
5 days ago
16 minutes

“An Opinionated Guide to Privacy Despite Authoritarianism” by TurnTrout
I've created a highly specific and actionable privacy guide, sorted by importance and venturing several layers deep into the privacy iceberg. I start with the basics (password manager) but also cover the obscure (dodging the millions of Bluetooth tracking beacons which extend from stores to traffic lights; anti-stingray settings; flashing GrapheneOS on a Pixel). I feel strongly motivated by current events, but the guide also contains a large amount of timeless technical content. Here's a pre...
5 days ago
7 minutes

“Cancer has a surprising amount of detail” by Abhishaike Mahajan
There is a very famous essay titled ‘Reality has a surprising amount of detail’. The thesis of the article is that reality is filled, just filled, with an incomprehensible amount of materially important information, far more than most people would naively expect. Some of this detail is inherent in the physical structure of the universe, and the rest of it has been generated by centuries of passionate humans imbuing the subject with idiosyncratic convention. In either case, the detail is ver...
6 days ago
23 minutes

“AIs should also refuse to work on capabilities research” by Davidmanheim
There's a strong argument that humans should stop trying to build more capable AI systems, or at least slow down progress. The risks are plausibly large but unclear, and we’d prefer not to die. But the roadmaps of the companies pursuing these systems envision increasingly agentic AI systems taking over the key tasks of researching and building superhuman AI systems, and humans will therefore have a decreasing ability to make many key decisions. In the near term, humanity could stop, but seem...
1 week ago
6 minutes

“On Fleshling Safety: A Debate by Klurl and Trapaucius.” by Eliezer Yudkowsky
(23K words; best considered as nonfiction with a fictional-dialogue frame, not a proper short story.) Prologue: Klurl and Trapaucius were members of the machine race. And no ordinary citizens they, but Constructors: licensed, bonded, and insured; proven, experienced, and reputed. Together Klurl and Trapaucius had collaborated on such famed artifices as the Eternal Clock, Silicon Sphere, Wandering Flame, and Diamond Book; and as individuals, both had constructed wonders too numerous to nu...
1 week ago
2 hours 22 minutes

“EU explained in 10 minutes” by Martin Sustrik
If you want to understand a country, you should pick a similar country that you are already familiar with, research the differences between the two and there you go, you are now an expert. But this approach doesn’t quite work for the European Union. You might start, for instance, by comparing it to the United States, assuming that EU member countries are roughly equivalent to U.S. states. But that analogy quickly breaks down. The deeper you dig, the more confused you become. You try with...
1 week ago
16 minutes

“Cheap Labour Everywhere” by Morpheus
I recently visited my girlfriend's parents in India. Here is what that experience taught me: Yudkowsky has this Facebook post where he makes some inferences about the economy after noticing two taxis stayed in the same place while he got his groceries. I had a few similar experiences while I was in India, though sadly I don't remember them in enough detail to illustrate them as well as that post does. Most of the thoughts relating to economics revolved around how labour in India is ex...
1 week ago
3 minutes

[Linkpost] “Consider donating to AI safety champion Scott Wiener” by Eric Neyman
This is a link post. Written in my personal capacity. Thanks to many people for conversations and comments. Written in less than 24 hours; sorry for any sloppiness. It's an uncanny, weird coincidence that the two biggest legislative champions for AI safety in the entire country announced their bids for Congress just two days apart. But here we are. On Monday, I put out a long blog post making the case for donating to Alex Bores, author of the New York RAISE Act. And today I’m doing th...
1 week ago
2 minutes

“Which side of the AI safety community are you in?” by Max Tegmark
In recent years, I’ve found that people who self-identify as members of the AI safety community have increasingly split into two camps: Camp A) “Race to superintelligence safely”: People in this group typically argue that “superintelligence is inevitable because of X”, and it's therefore better that their in-group (their company or country) build it first. X is typically some combination of “Capitalism”, “Moloch”, “lack of regulation” and “China”. Camp B) “Don’t race to superintelligenc...
1 week ago
4 minutes

“Doomers were right” by Algon
There's an argument I sometimes hear against existential risks, or any other putative change that some are worried about, that goes something like this: 'We've seen time after time that some people will be afraid of any change. They'll say things like "TV will destroy people's ability to read", "coffee shops will destroy the social order", "machines will put textile workers out of work". Heck, Socrates argued that books would harm people's ability to memorize things. So many prophets of doo...
1 week ago
4 minutes

“Do One New Thing A Day To Solve Your Problems” by Algon
People don't explore enough. They rely on cached thoughts and actions to get through their day. Unfortunately, this doesn't lead to them making progress on their problems. The solution is simple. Just do one new thing a day to solve one of your problems. Intellectually, I've always known that annoying, persistent problems often require just 5 seconds of actual thought. But seeing a number of annoying problems that made my life worse, some even major ones, just yield to the repeated applic...
1 week ago
3 minutes

“Humanity Learned Almost Nothing From COVID-19” by niplav
Summary: Looking over humanity's response to the COVID-19 pandemic, almost six years later, reveals that we've forgotten to fulfill our intent at preparing for the next pandemic. I rant. Content warning: A single carefully placed slur. If we want to create a world free of pandemics and other biological catastrophes, the time to act is now. —US White House, “FACT SHEET: The Biden Administration's Historic Investment in Pandemic Preparedness and Biodefense in the FY 2023 President's Budget...
2 weeks ago
8 minutes

“Consider donating to Alex Bores, author of the RAISE Act” by Eric Neyman
Written by Eric Neyman, in my personal capacity. The views expressed here are my own. Thanks to Zach Stein-Perlman, Jesse Richardson, and many others for comments. Over the last several years, I’ve written a bunch of posts about politics and political donations. In this post, I’ll tell you about one of the best donation opportunities that I’ve ever encountered: donating to Alex Bores, who announced his campaign for Congress today. If you’re potentially interested in donating to Bores, my...
2 weeks ago
50 minutes

“Meditation is dangerous” by Algon
Here's a story I've heard a couple of times. A youngish person is looking for some solutions to their depression, chronic pain, ennui or some other cognitive flaw. They're open to new experiences and see a meditator gushing about how amazing meditation is for joy, removing suffering, clearing one's mind, improving focus etc. They invite the young person to a meditation retreat. The young person starts making decent progress. Then they have a psychotic break and their life is ruined for years...
2 weeks ago
7 minutes

“That Mad Olympiad” by Tomás B.
"I heard Chen started distilling the day after he was born. He's only four years old, if you can believe it. He's written 18 novels. His first words were, "I'm so here for it!" Adrian said. He's my little brother. Mom was busy in her world model. She says her character is like a "villainess" or something - I kinda worry it's a sex thing. It's for sure a sex thing. Anyway, she was busy getting seduced or seducing or whatever villanesses do in world models, so I had to escort Adrian to Oak ...
2 weeks ago
26 minutes
