re:verb
Calvin Pollak and Alex Helberg
100 episodes
1 month ago
On today's show, Alex and Calvin continue their series on “AI” and public discourse, focusing this time on the proliferation of AI applications in government writing, policy, and social media. We characterize the second Trump administration as the "first totally post-AI presidency," one that has adopted the "dumbest, most unreflective, most uncritical approach" to AI's use in communication, research, and analysis. Throughout the show, we emphasize that AI technologies are themselves rhetorical artifacts even as they so often produce “bad” rhetoric: they reflect the intentions, values, and presuppositions of their creators, as well as the biases inherent in their training data and text-generation models. This often results in an entry-level, overly dense writing style - commonly called "slop" - that seems written not to be read, but to fill space. We explore several concerning examples of AI's uncritical adoption by the second Trump administration and its acolytes in the tech world. Early executive orders exhibited AI-generated formatting errors and formulaic, generic language, demonstrating a context-blind style that could create legal problems and erode public trust. Furthermore, the "MAHA Report" from the Department of Health and Human Services was found to fabricate studies and misrepresent findings, reflecting how large language models are "sycophantic" and can reinforce existing (often false) beliefs. Our discussion also covers Palantir's "Foundry" product, which aims to combine diverse government datasets, raising significant privacy and political concerns, especially given the political leanings of Palantir’s founders.
Finally, we examine xAI’s Grok chatbot (run by Elon Musk), which illustrates how tech elites can exert enormous political power through direct interventions in AI tools’ system prompts - interventions that in recent months have led Grok to parrot conspiracy theories and make explicitly antisemitic remarks on the public feeds of X/Twitter. Ultimately, our analysis emphasizes - once again - that these so-called “AI” technologies are not neutral; they are, in the words of Matteo Pasquinelli, "crystallization[s] of a productive social process" that "reinforce the power structure that underlies [them]," perpetuating existing inequalities. Understanding these mechanisms and engaging in what Pasquinelli terms "de-connectionism" - undoing the social and economic fabric constituting these systems - is essential for critiquing the structural factors and power dynamics that AI reproduces in public discourse. Have any questions or concerns about this episode? Reach out to our new custom-tuned chatbot, @Bakh_reverb, on X/Twitter!
Examples Analyzed in this Episode:
Trump Admin Accused of Using AI to Draft Executive Orders - https://www.yahoo.com/news/trump-admin-accused-using-ai-191117579.html
Eryk Salvaggio - “Musk, AI, and the Weaponization of ‘Administrative Error’” - https://www.techpolicy.press/musk-ai-and-the-weaponization-of-administrative-error/
Emily Kennard & Margaret Manto (NOTUS) - “The MAHA Report Cites Studies That Don’t Exist” - https://archive.ph/WVIrT
Sheera Frenkel & Aaron Krolik (NYT) - “Trump Taps Palantir to Compile Data on Americans” - https://www.nytimes.com/2025/05/30/technology/trump-palantir-data-americans.html
David Klepper - “Gabbard says AI is speeding up intel work, including the release of the JFK assassination files” - https://apnews.com/article/gabbard-trump-ai-amazon-intelligence-beca4c4e25581e52de5343244e995e78
Miles Klee - “Elon Musk’s Grok Chatbot Goes Full Nazi, Calls Itself ‘MechaHitler’” - https://archive.ph/SdoJn

Works & Concepts Cited in this Episode:
Bakhtin, M. M. (2010). The dialogic imagination: Four essays. University of Texas Press.
Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim Code (1st ed.). Polity.
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021, March). On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610-623).
Our previous episode with Dr. Bender about her work
Burke, K. (1984). Permanence and change: An anatomy of purpose. University of California Press.
Burke, K. (1965). Terministic screens. In Proceedings of the American Catholic Philosophical Association (Vol. 39, pp. 87-102).
DeLuca, L. S., Reinhart, A., Weinberg, G., Laudenbach, M., Miller, S., & Brown, D. W. (2025). Developing students’ statistical expertise through writing in the age of AI. Journal of Statistics and Data Science Education, 1-13.
Haggerty, K. D., & Ericson, R. V. (2017). The surveillant assemblage. In Surveillance, crime and social control (pp. 61-78).
Hill, K. (2025, June 13). They asked an A.I. chatbot questions. The answers sent them spiraling. The New York Times.
Markey, B., Brown, D. W., Laudenbach, M., & Kohler, A. (2024). Dense and disconnected: Analyzing the sedimented style of ChatGPT-generated text at scale. Written Communication, 41(4), 571-600.
Miller, C. R. (1984). Genre as social action. Quarterly Journal of Speech, 70(2), 151-167.
Murakami, H. (1994). Dance dance dance: A novel (1st ed.). Kodansha International.
Pasquinelli, M. (2023). The eye of the master: A social history of artificial intelligence. Verso Books.
Reinhart, A., Markey, B., Laudenbach, M., Pantusen, K., Yurko, R., Weinberg, G., & Brown, D. W. (2025). Do LLMs write like humans? Variation in grammatical and rhetorical styles. Proceedings of the National Academy of Sciences, 122(8), e2422455122.

An accessible transcript for this episode can be found here (via Descript)
News, Education, Society & Culture, Philosophy
RSS
All content for re:verb is the property of Calvin Pollak and Alex Helberg and is served directly from their servers with no modification, redirects, or rehosting. The podcast is not affiliated with or endorsed by Podjoint in any way.
E102: Escape from the University of the Cancelled
re:verb
1 hour 14 minutes 36 seconds
5 months ago
In this episode, Alex and Calvin return to a favorite hobbyhorse: the University of Austin (UATX). First discussed back in episode 62, this ultra-conservative "university concept" is still not accredited and has no undergraduate degrees planned until at least 2028-2031. In that previous episode, we described UATX variously as right-wing academia’s answer to the Fyre Festival and a pitch deck/PowerPoint scam masquerading as an education; this time, we call it a fast-casual university concept (Chipotle for higher ed). We catch up with the myriad ways that UATX continues to struggle under the weight of its own internal contradictions, while occasionally benefitting from being confused with UT Austin (home of some of our favorite previous guests, like Scott Graham and Karma Chávez). After taking stock of US free speech generally in the age of seemingly intractable US-led conflicts in the Middle East and the criminalization of student peace activism, we examine a Quillette article in which Ellie Avishai asks whether UATX is betraying its founding principles. As Avishai explains, her UATX research center was terminated in response to her posting a rather benign (and ideologically nuanced) LinkedIn post about DEI. We discuss how UATX's claims of championing academic freedom and viewpoint diversity necessarily conflict with its increasingly extreme anti-woke conservative agenda. Given that it is bankrolled by dark-money funders and figures connected to corporate interests and political power like Harlan Crow and Joe Lonsdale, the institution appears more dedicated to fortifying right-wing ideas and providing a filter bubble than to fostering genuine free inquiry. This makes it particularly ironic that its corporate doublespeak response to Avishai's termination was to use language like "wind up Mill" and "restructure."
In these ways, UATX seems to combine the worst of mainstream academia (neoliberal austerity measures justified through corporate doublespeak) with new heights of conservative radicalism. Drawing on Noah Rawlings' piece in The New Inquiry, we peek into the "Forbidden Courses" summer program held at Harlan Crow's Old Parkland office complex in Dallas, where figures like Peter Boghossian and Katie Roiphe hold court. What does it mean for a university to exist primarily as a "safe space" isolating students from opposition, or worse, a "money and influence laundering operation for some of the most abhorrent ideas" (as Alex calls it)? We conclude that despite the real structural flaws in mainstream academia, the pursuit of knowledge and evidence-based argumentation is still vital in higher ed - and it’s something that UATX seems fundamentally opposed to.

Articles Analyzed in this Episode:
“Is the University of Austin Betraying Its Founding Principles?” by Ellie Avishai (in Quillette)
“An American Education: Notes from UATX” by Noah Rawlings (in The New Inquiry)

Previous Episodes Referenced:
E62: re:joinder - The University of the Cancelled

Works and Concepts Cited:
Van Dijk, T. A. (1993). Principles of critical discourse analysis. Discourse & Society, 4(2), 249-283.