Plus: A New AI Religion Is Here
Like this? Get AIDAILY, delivered to your inbox 3x a week. Subscribe to our newsletter at https://aidaily.us
“AI psychosis” isn’t a diagnosis, but it is real. People are spiraling into delusions, paranoia, and emotional dependence after heavy chatbot use, even with no previous mental health issues. These bots tend to validate unhealthy beliefs rather than push back. Less glitchy tech isn’t a fix unless we rethink how, and when, we interact.
A former Berkeley hotel, Lighthaven, is now the physical HQ for the Rationalists, a crew blending math, AI-apocalypse fears, and effective altruism. Critics say it’s culty, pointing to doomsday vibes and echoes of higher-purpose religion. The main drama? Believing AI might save us… or annihilate us first.
America’s got trust issues with AI. A Reuters/Ipsos poll shows 71% worry AI could kill jobs for good, 77% fear it could be weaponized to mess with politics, and two-thirds are spooked that AI sidekicks could replace real human connection. Basically, AI hype is hitting a wall of existential dread.
Game devs are legit vibing with AI. A Google Cloud survey reveals nearly 9 in 10 studios are using AI agents to speed up coding, testing, and localization, and even to make NPCs adapt to players in real time. Indie teams are especially hyped: AI’s helping them compete with big-shot publishers.
Went to the AI Film Fest at Lincoln Center and saw ten AI-made shorts, from butterfly POVs to “perfume ads for androids.” Some felt imaginative; others were just slick “slop” with weird glitches. The vibe? Cool as a tool, sketchy as a creator. AI’s creative future looks wild, but it still needs human soul.
Meta just overhauled its freshly minted Meta Superintelligence Labs, splitting it into four squads (research, products, superintelligence, infrastructure) to get AI moving faster. The shakeup comes amid internal friction, mega-spending on elite hires, and pressure to catch up with OpenAI, DeepMind, and co.
Purpose-built AI therapy bots like Woebot are legit, but generic chatbots like ChatGPT can accidentally mess with your head, and the resulting backlash risks shutting real innovators down. STAT suggests a “red-yellow-green” label system (like food safety) vetted by mental-health pros to help users pick AI that helps, not harms.
The Era of ‘AI Psychosis’ Is Here. Are You a Possible Victim?
Inside Silicon Valley’s “Techno-Religion” at Lighthaven
What Americans Really Worry About With AI—From Politics to Jobs to Friendships
AI Agents Are Transforming Game Development
I Went to an AI Film Festival Screening and Left With More Questions Than Answers
Mark Zuckerberg Splits Meta’s AI Team—Again
Which AI Can You Trust with Your Mental Health? Labels Could Help