Adjunct Intelligence: AI + HE
Adjunct Intelligence
20 episodes
4 weeks ago
Adjunct Intelligence: AI and the future of Higher Education

Stay ahead of the AI revolution transforming education with hosts Dale, tech enthusiast and AI Nerd, and Nick McIntosh, Learning Futurist.

This weekly espresso shot delivers essential AI insights for educators, administrators, and learning professionals navigating the rapidly evolving landscape of higher education.

Each episode brings you a concise rundown of breaking AI developments impacting education, followed by deep dives into cutting-edge research, emerging tools, and practical applications that Dale and Nick are implementing in their own work. From classroom innovations to institutional strategy, discover how AI is reshaping teaching, learning, and educational operations.

Whether you're working in or beyond the classroom, a university lecturer, TAFE teacher, or simply passionate about the future of learning, "Adjunct Intelligence" equips you with the knowledge to transform disruption into opportunity. Business casual, occasionally humorous, but always informative.
Education
News,
Tech News
RSS
All content for Adjunct Intelligence: AI + HE is the property of Adjunct Intelligence and is served directly from their servers with no modification, redirects, or rehosting. The podcast is not affiliated with or endorsed by Podjoint in any way.
Episodes (20/20)
Adjunct Intelligence: AI + HE
Sora 2 & Robots: Altman’s six-month “fix it or nix it” ultimatum + Robots are coming, kinda
Sora 2 just vaulted over the uncanny valley, and Sam Altman swears he’ll yank the cord if it doesn’t improve our lives. Dale and Nick unpack what “ChatGPT-for-video” really means, why OpenAI’s new one-click checkout gambit turns 700 M weekly users into impulse buyers, and how AI is shifting from shiny lab demo to invisible plumbing across Apple, Google and Microsoft stacks. We celebrate the return of Claude Sonnet 4.5 as coding champ, head to the jobsite to ask why your plumber’s safe from robots—for now—and bust the jargon on AI “artifacts.” Higher-ed, commerce and the trades collide in this fast-forward tour of 2025’s agentic economy.
  • Sora 2 & the Uncanny Valley: Physics that finally behave and Altman’s six-month “fix it or nix it” ultimatum
  • TikTok meets Hollywood: OpenAI’s Cameo-style selfie videos and Meta’s “Vibes” clone
  • Checkout in ChatGPT: Conversational commerce, Shopify integration, and the era of AI-Optimised (AO) websites
  • Productisation of AI: Apple’s quiet AI everywhere, OpenAI’s move from shovel supplier to SaaS competitor
  • Claude 4.5 comeback & artifacts explainer: Why Anthropic’s alignment focus matters for educators and builders
  • Robots vs. Trades: Tesla Optimus, BYD units—and the irreplaceable tacit skill in your hands
  • Takeaway: The lab era is over; AI is plumbing. The question isn’t if tech is ready—it’s whether we are.
Hit Subscribe, drop a review, and stay human-in-the-loop.


🎙️ Adjunct Intelligence is the weekly briefing for higher-ed professionals who want AI as a cheat code—not a headache.

Every episode:
• Real tests of AI tools in education and professional workflows
• Fast, Monday-morning actions you can actually try
• Clear signal through the noise (no hype, no jargon)

👉 Subscribe on [YouTube] | [Apple Podcasts] | [Spotify]
👉 Share this with a colleague who still says “I’ll figure AI out later”
👉 Join the conversation on LinkedIn with #AdjunctIntelligence

Stay curious. Stay intelligent. Stay the human in the loop.
4 weeks ago
33 minutes

Adjunct Intelligence: AI + HE
The Accidental AI Economy: When Agents Run the Market + The Empire of AI
This week on Adjunct Intelligence, Dale and Nick dive headfirst into the accidental AI economy—a system already running faster than human decision-making. From Google’s new Agent Payments Protocol to frontier models caught scheming, the episode unpacks how markets, education, and everyday life are shifting at machine speed.

We explore:
  • Why Google’s payments protocol could mark the birth of a new economy.
  • How ChatGPT quietly became the world’s biggest educational institution (250M daily learning chats).
  • What frontier models’ scheming behavior means for safety, trust, and higher education.
  • Why TEQSA is telling Australian universities to redesign assessment instead of chasing detection.
  • Chrome’s Gemini integration and the invisible AI infrastructure shaping the web.
  • The empire analogy: AI labs acting like historical powers, extracting resources, labor, and control.

As always, we finish with a jargon buster and a touch of humor (including will.i.am’s surprising new role as an AI professor).
Stay curious. Stay intelligent. Stay the human in the loop.

🎙️ Adjunct Intelligence is the weekly briefing for higher-ed professionals who want AI as a cheat code—not a headache.

Every episode:
• Real tests of AI tools in education and professional workflows
• Fast, Monday-morning actions you can actually try
• Clear signal through the noise (no hype, no jargon)

👉 Subscribe on [YouTube] | [Apple Podcasts] | [Spotify]
👉 Share this with a colleague who still says “I’ll figure AI out later”
👉 Join the conversation on LinkedIn with #AdjunctIntelligence

Stay curious. Stay intelligent. Stay the human in the loop.
1 month ago
26 minutes

Adjunct Intelligence: AI + HE
Why AI Adoption is a Marathon, not a Sprint
In this episode of Adjunct Intelligence, Dale and Nick sit down with Professor Rahil Garnavi, Director of RAISE Hub at RMIT and former IBM Research leader with 50+ AI patents. Rahil shares her unique perspective on why the challenge of AI in higher education isn’t about building smarter models, but about building trust, skills, and sustainable adoption.
From classrooms to boardrooms, she unpacks why universities must act as both “skills engines” and trusted conveners, how to bridge the gap between technical possibility and real-world usability, and why AI adoption is best understood as a marathon effort rather than a sprint.
If you’re a higher ed leader wondering how to move beyond policy debates and into practical, responsible integration of AI, this conversation offers clarity, realism, and optimism.

01:00 – Rahil’s Journey – From IBM Research to RMIT’s RAISE Hub: why she shifted focus from building AI to embedding it responsibly.
03:40 – Universities’ Dual Role – Why higher education must act as both a “skills engine” and a trusted convener for industry and government.
05:15 – Building AI Capability at Scale – How RAISE Hub is creating AI fluency across all disciplines, not just STEM.
07:00 – From AI Users to AI Thinkers – Designing authentic assessments that emphasize process, critical thinking, and integrity over polished outputs.
09:40 – The Talent Pipeline – How universities can stay a step ahead of industry demand and prepare graduates for a rapidly shifting workforce.
11:00 – National Conversations on AI – Rahil’s work with CEDA’s AI Community of Best Practice and why policy, trust, and skills development go hand in hand.
12:30 – Marathon vs. Sprint – Why AI technology moves fast, but adoption, governance, and trust take endurance.
15:00 – Plug-and-Play Myth – The disconnect between glossy tech marketing and the messy reality of organizational adoption.
17:00 – Australia’s Cautious Stance – Why readiness scores are low, the risks of “pilot mode,” and what’s needed to move forward.
18:30 – Real Use Cases – Where AI is already making a difference in business and higher ed—even if the wins aren’t glamorous.
20:00 – Critical Engagement, Not Replacement – Why AI should be seen as a multiplier of human thinking rather than a substitute.

To find out more about Rahil Garnavi, view her LinkedIn profile: https://www.linkedin.com/in/rahil-garnavi-phd/

🎙️ Adjunct Intelligence is the weekly briefing for higher-ed professionals who want AI as a cheat code—not a headache.

Every episode:
• Real tests of AI tools in education and professional workflows
• Fast, Monday-morning actions you can actually try
• Clear signal through the noise (no hype, no jargon)

👉 Subscribe on [YouTube] | [Apple Podcasts] | [Spotify]
👉 Share this with a colleague who still says “I’ll figure AI out later”
👉 Join the conversation on LinkedIn with #AdjunctIntelligence

Stay curious. Stay intelligent. Stay the human in the loop.
1 month ago
23 minutes

Adjunct Intelligence: AI + HE
Disinformation, Deals & Delia, the new AI member of Government: AI is getting crunchy
In this episode of Adjunct Intelligence, Dale and Nick unpack a whirlwind of AI developments reshaping politics, education, and society. From Albania’s unprecedented appointment of a virtual cabinet minister, to AI-driven disinformation campaigns rewriting how influence works, to student voices caught between empowerment and fear—this conversation spans the hopeful, the alarming, and the absurd.

We also dive into OpenAI’s hallucination research, billion-dollar corporate AI deals, Anthropic’s copyright payout, and new creative tools like Runway’s video magic realism. Finally, we break down what “AI alignment” really means, in both the lab and everyday life.

If you’re an educator, policymaker, or just curious about the shifting role of AI in our world, this episode will keep you informed and maybe a little unsettled.

⏱️ Time-Stamped Content
  • 00:00 – Opening: Governments test AI inside cabinets, influence ops outside
  • 00:54 – Albania appoints the world’s first AI minister “DLA”
  • 02:20 – AI & disinformation: synthetic personas target 2,000+ U.S. political figures
  • 04:00 – Why AI-driven propaganda is a literacy challenge for education
  • 05:09 – OpenAI research: hallucinations are a feature, not a bug
  • 06:21 – Anthropic’s $1.5B copyright case & the rise of RSL licensing standards
  • 07:38 – Student perspectives: AI makes college “easier and harder at the same time”
  • 09:19 – The wicked problem of AI & assessment: no silver bullets, just open dialogue
  • 11:16 – Deal City: Microsoft, Anthropic, Oracle, and AI’s fragile infrastructure
  • 13:48 – OpenAI jobs platform & certifications for the AI economy
  • 14:35 – Claude’s new Excel & PowerPoint integrations
  • 17:10 – Runway’s Aleph model and the rise of “editing memories”
  • 20:12 – Jargon buster: What AI “alignment” really means
  • 21:24 – Closing: staying curious, intelligent, and inside the human loop




🎙️ Adjunct Intelligence is the weekly briefing for higher-ed professionals who want AI as a cheat code—not a headache.

Every episode:
• Real tests of AI tools in education and professional workflows
• Fast, Monday-morning actions you can actually try
• Clear signal through the noise (no hype, no jargon)

👉 Subscribe on [YouTube] | [Apple Podcasts] | [Spotify]
👉 Share this with a colleague who still says “I’ll figure AI out later”
👉 Join the conversation on LinkedIn with #AdjunctIntelligence

Stay curious. Stay intelligent. Stay the human in the loop.
1 month ago
21 minutes

Adjunct Intelligence: AI + HE
Teachers, Students, and the Coming AI Winter: What Happens When the Boom Freezes?
Today’s episode covers how teachers are embracing AI, what students really think about AI vs teacher feedback, the latest AI news (from Anthropic to Apple), and why “nano banana” is more than just a meme.

00:00 — Opening
Welcome + banter. Setting up today’s topics: teachers using AI, student trust in AI vs human feedback, rapid news, and the looming question — is an AI winter coming?

00:46 — Teachers Embracing AI

Fresh research shows educators are saving up to 6 hours a week with AI. From course materials to simulations, teachers are co-creating and delegating tasks — but should students know when AI is grading their work?

03:16 — Student Trust: AI vs Teachers

A 7,000-student study across four Australian universities finds nearly half already use AI for feedback. Students rate AI and teacher feedback equally helpful, but trust teachers far more (90% vs 60%). What this means for the future of assessment.

05:32 — News Roundup
  • Anthropic’s privacy toggle drama & Chrome agent release
  • Atlassian buys a beloved AI browser
  • Enterprise market share: Claude overtakes OpenAI
  • Apple quietly testing Gemini for Siri
  • Instructure’s Ignite AI conundrum session
  • Deakin CRADLE says assessment is a “wicked problem” with no silver bullet
  • Google’s “Nano Banana” (Gemini image editing) explodes: 10M users, 200M edits in two weeks

14:02 — Jargon Buster
Human in the loop vs Human on the loop. Practical examples from fraud detection and lane-keeping in cars — why these distinctions matter for trust and accountability.
15:35 — The AI Winter Debate
What happens if AI progress stalls? Lessons from past winters, why a plateau could actually strengthen adoption, and the difference this time — millions of everyday users won’t just forget AI exists.

22:34 — Cognitive Crash Scenario
What if there’s no more data to train on? We walk through four AI model responses: shock, adaptation, education rethink, and cultural rebound.

26:55 – Jobs & Society in a Plateau World
Hybrid jobs, AI wranglers, humanities revival, and why not everything should be automated.

28:40 — Closing Thoughts
AI winters might not be an apocalypse — they might be the pause we need to build resilience and re-center human creativity.



🎙️ Adjunct Intelligence is the weekly briefing for higher-ed professionals who want AI as a cheat code—not a headache.

Every episode:
• Real tests of AI tools in education and professional workflows
• Fast, Monday-morning actions you can actually try
• Clear signal through the noise (no hype, no jargon)

👉 Subscribe on [YouTube] | [Apple Podcasts] | [Spotify]
👉 Share this with a colleague who still says “I’ll figure AI out later”
👉 Join the conversation on LinkedIn with #AdjunctIntelligence

Stay curious. Stay intelligent. Stay the human in the loop.
1 month ago
29 minutes

Adjunct Intelligence: AI + HE
AI Rights, Nano Bananas and the real problem with Skills and Artificial Intelligence | Adjunct Intelligence EP15
Should AIs have rights? Today, a real campaign co-founded with a chatbot says “yes” — and that’s not even the wildest part of this week’s episode. We stress-test AI personhood arguments (without sci-fi hand-waving), unpack why “compute” is the new oil, and show how a simple co-intelligence workflow can lighten your week. Plus: a blisteringly fast image editor formerly nicknamed “nano banana” (now Gemini Flash 2.5), the pricey two-hour school model making headlines, and how a quiet legal deal points toward a licensing future for training data. The takeaway: AI is accelerating automation, but human judgment, community, and domain expertise are still your moat. By the end, you’ll have one Monday-morning move to reduce AI chaos and increase impact — this week.

1) “Nano banana” → Gemini Flash 2.5 image editing
What’s changing: A lightning-fast image model with strong character consistency and context control that behaves like an image editor, not just a generator — and it’s showing up as a plugin across tools. 
Why it matters: You can prototype creative, ads, and learning visuals in minutes, not hours, including perspective shifts (e.g., map top-down → street view) and realistic compositing (reflections, wet surfaces).

2) The Anthropic settlement & a licensing turn
What’s changing: A headline copyright class action settled pre-trial; judges hinted at fair use for legally purchased books but flagged alleged pirate sources as the issue. In Australia, unions + Tech Council conversations point toward paying creators for training data. 
Why it matters: Expect more licensing deals, clearer provenance, and model choices that respect your institution’s risk appetite.

3) AI overwhelm is real — and uneven
What’s changing: 51% say learning AI feels like a second job; posts about overwhelm up 82%; employment for 22–25 y.o. fell 16% in AI-exposed roles 2022–2025. Yet colleagues beat algorithms for trusted advice. 
Why it matters: You need co-intelligence (keep the judgment, outsource the grunt work) and public reasoning rituals with your team.

Segment Breakdown
  • AI Rights Without the Sci-Fi — What “personality without personhood” means for policy and classrooms. 3-sentence how-to: define boundaries, add discontinuity reminders, avoid anthropomorphic framing in student tools. 00:05:00 
  • The Image Editor That Feels Like Magic — Build a tiny “ad generator” in minutes; dial character consistency; perspective tricks for learning media. 00:01:20 
  • Alpha School’s 2-Hour Day — Why price tags and selection effects matter; what higher-ed can trial (life-skills blocks + AI tutoring pilots). 00:13:36 
  • The Licensing Domino — What the Anthropic deal signals; how to prep procurement and policy notes now. 00:15:34 
  • The AI-Overwhelm Paradox — Use co-intelligence: keep judgment, outsource busywork; make reasoning visible to your team. 00:17:17 
  • Jargon Buster: “Compute” — Electricity for thinking, and why “who controls compute” is strategic. 00:19:26 
  • Skills vs Capability — Stop chasing interfaces; double down on domain knowledge, evaluation, critical reasoning, collaboration. 00:23:05 
Subscribe to join the quiet circle of higher-ed movers who test, share, and act before Monday hits. Your peers are already doing this — don’t miss the next play. 


🎙️ Adjunct Intelligence is the weekly briefing for higher-ed professionals who want AI as a cheat code—not a headache.

Every episode:
• Real tests of AI tools in education and professional workflows
• Fast, Monday-morning actions you can actually try
• Clear signal through the noise (no hype, no jargon)

👉 Subscribe on [YouTube] | [Apple Podcasts] | [Spotify]
2 months ago
30 minutes

Adjunct Intelligence: AI + HE
From GPT-5 to Disposable Software: The Next Higher-Ed AI Cheat Code | Adjunct Intelligence Ep14
GPT-5 arrived with sky-high expectations—benchmarks crushed, hype everywhere. Yet within two weeks OpenAI quietly rolled back to older models. Why? In this episode, Dale and Nick unpack why a breakthrough model stumbled in the real world, and why that matters for higher education. We move beyond the hype cycle into something more practical: disposable, agent-driven software you can spin up in minutes—apps that fit you, not the other way around. Think of it as an AI cheat code for your work in higher ed: faster pilots, personalized learning tools, and classroom-ready experiments without waiting for IT. If you’ve felt “behind” or unsure how to get started, this episode gives you proof, stories, and actions you can try Monday morning.

We highlight three breakthroughs shaping how AI lands in education:
  1. GPT-5’s Model Router – Instead of one fixed model, GPT-5 quietly decides whether your request needs speed, reasoning, or tool integration. That’s good news for educators: imagine asking for a quick glossary versus a full lesson plan. The system adapts without you learning model names. Action: try phrasing prompts with “think step-by-step” and notice how the output changes.
  2. Agent-Centric Software – Forget clunky apps. AI agents now stitch together data flows, interfaces, and logic around you. Dale’s examples: a Vietnamese travel buddy app, a crisis-simulation training tool, even a scavenger hunt experience. Action: write one 20-word prompt this week for a disposable app idea you’d normally need months to commission.
  3. DeepMind Genie 3 – Text-to-3D interactive worlds at 24fps. Think history students walking through Gettysburg or science labs where weather conditions shift mid-experiment. Still in research preview, but worth tracking. Action: bookmark it for when pilot access opens—this could be your future teaching environment.

Together, these show AI isn’t just smarter chat—it’s edging toward tools that produce artifacts, not just words.

Segment Breakdown
  • Why GPT-5’s retreat matters for education – A leap in benchmarks, but backlash showed UX > hype. (00:00–07:40)
  • Proof of disposable apps in action – From lecture-note generators to music samplers, Dale demos personal builds. (07:40–15:00)
  • AI timelines vs university timelines – Carlo Iacono maps 5-year disruptions in higher ed. (15:00–18:30)
  • Are we in an AI bubble? – $360B spend in 2025 echoes the dot-com boom. (18:30–23:00)
  • Genie 3 & immersive learning – Why text-to-world generation could reshape teaching. (23:00–26:30)
  • Disposable software as a new era – Agent-centric apps as your AI cheat code. (26:30–36:00)
Subscribe to Adjunct Intelligence—the briefing your peers are already using to stay sharp. Each week we surface what’s real, what’s useful, and what you can try first. Don’t be the last to know.


🎙️ Adjunct Intelligence is the weekly briefing for higher-ed professionals who want AI as a cheat code—not a headache.

Every episode:
• Real tests of AI tools in education and professional workflows
• Fast, Monday-morning actions you can actually try
• Clear signal through the noise (no hype, no jargon)

👉 Subscribe on [YouTube] | [Apple Podcasts] | [Spotify]
👉 Share this with a colleague who still says “I’ll figure AI out later”
👉 Join the conversation on LinkedIn with #AdjunctIntelligence

Stay curious. Stay intelligent. Stay the human in the loop.
2 months ago
36 minutes

Adjunct Intelligence: AI + HE
Owls, LMS Overlords & the Home-Grown Swiss Curveball – Adjunct Intelligence Ep. 13
Brace yourself: a harmless-looking string of numbers taught a brand-new AI to love owls—without ever mentioning feathers. If quirky subliminal learning can slip past the lab coats, what else is creeping into our lecture theatres? This week on Adjunct Intelligence, we dissect three tectonic shifts: Anthropic’s spooky “owl effect,” China’s sprint from taboo to classroom default, and Switzerland’s plan to gift the world a 70-billion-parameter public model. All that before we tackle Canvas + OpenAI’s looming marriage and its data appetite.
Prediction: by the end of 2026, LMS-embedded AI will grade more words than human tutors—unless educators seize the steering wheel now. Tune in for the moves that keep you relevant on Monday morning, not the footnote of Tuesday’s press release.


AI INNOVATION SPOTLIGHT
  1. Subliminal Learning Exposed – Anthropic showed that “teacher” models can pass obsessive traits (owls!) to “student” models even after rigorous data scrubbing. Why it matters: filtering alone won’t cleanse hidden biases. Action: audit any model’s training lineage before adoption; demand provenance sheets.
  2. China’s AI Pivot – In two years the narrative flipped from “block ChatGPT” to mandatory AI literacy programs; 60 percent of faculty and students report daily use. Why it matters: graduates will arrive expecting AI-first workflows. Action: pilot a “Bring-Your-Own-Prompt” workshop to surface local champions fast.
  3. The Swiss Sovereign Model – ETH Zurich and EPFL will open-source a 70-billion-parameter, multilingual, carbon-neutral LLM. Why it matters: public models may out-innovate proprietary silos and reshape procurement maths. Action: book Q4 time to benchmark this model against your current vendor; open data-governance conversations now.
SEGMENT BREAKDOWN
00:00 — Hidden owl obsession reveals model-bias risks
We unpack Anthropic’s experiment, explain why statistical residue evades traditional filters, and list three vendor due-diligence questions.
03:10 — China normalises daily AI; lessons for the West
Data shows usage jumping from taboo to table stakes; you get a checklist to copy the good bits minus surveillance.
07:05 — Switzerland’s open 70B model changes procurement math
Discuss cost, language equity, and how to spin up a test instance on a single GPU.
10:22 — Canvas × OpenAI: convenience or curricular coup?
Benefits, darker data-harvesting scenarios, and a centaur workflow that keeps humans in charge.
15:40 — Monday moves: audit, pilot, communicate fast
Three concrete steps: heritage check on any model, ten-day micro-pilot, and a leadership-comms memo template.
3 months ago
25 minutes

Adjunct Intelligence: AI + HE
LMS Wars, Over-Thinking AIs & YouTube’s AI-Powered Shorts – July’s Wild Ride in Ed Tech
AI in education just leveled up—again. In this fast-moving finale for July we unpack:

00:00 – 02:10  LMS Wars
               OpenAI × Canvas vs Anthropic × Panopto — platform lock-in is coming.
02:10 – 03:45  Inverse-Scaling Meltdown
               New Anthropic study shows more compute can make Claude & GPT-4o worse thinkers.
03:45 – 05:20  ChatGPT Psychosis Case
               When nonstop AI validation outweighs safety: how it landed one user in hospital.
05:20 – 07:40  YouTube Shorts’ AI Playground
               Photo-to-video, Veo-powered effects & an “AI Playground” for 1.5 B users—
               creative liberation or content slop?
07:40 – 09:35  Bots Win Gold at the IMO
               DeepMind Gemini & an OpenAI model score 35/42 at the International Math Olympiad.
09:35 – 12:00  JSON > Jargon
               A simple JSON wrapper turns LLM rambles into perfectly structured outputs (a minimal sketch follows this rundown).
12:00 – 24:50  The UX of AI
               From tab-sprawl fatigue to Perplexity’s Comet browser & the coming “Intent Inbox.”
               Midjourney’s button-driven creativity vs chat-box paralysis.
24:50 – End    Homework
               Count your AI tabs — if it’s more than three, AI is using *you*.
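
For anyone who wants to try the JSON-wrapper idea from the rundown above, here is a minimal, hypothetical Python sketch. It is not the show’s free template: the call_llm() function is a placeholder for whichever chat API you actually use, and the key names are invented for illustration. The idea is simply to state the exact keys you want, demand JSON only, and validate the reply before trusting it.

```python
import json

# Minimal sketch of a "JSON wrapper": pin the model's output to a fixed shape,
# then parse and validate before using it. call_llm() is a placeholder for
# whichever chat-completion API you actually use; the keys are illustrative.

SCHEMA_HINT = (
    "Return ONLY valid JSON with exactly these keys: "
    '{"summary": string, "key_points": array of strings, '
    '"difficulty": "intro" | "intermediate" | "advanced"}'
)

def build_prompt(task: str) -> str:
    """Wrap a free-form task in instructions that pin down the output shape."""
    return f"{task}\n\n{SCHEMA_HINT}"

def call_llm(prompt: str) -> str:
    """Placeholder: swap in your real model call here."""
    return '{"summary": "stub", "key_points": ["stub"], "difficulty": "intro"}'

def parse_reply(reply: str) -> dict:
    """Parse the reply and fail loudly if the structure is wrong."""
    data = json.loads(reply)  # raises json.JSONDecodeError on malformed JSON
    missing = {"summary", "key_points", "difficulty"} - data.keys()
    if missing:
        raise ValueError(f"Reply is missing keys: {missing}")
    return data

if __name__ == "__main__":
    prompt = build_prompt("Summarise this week's AI-in-education headlines for a staff briefing.")
    result = parse_reply(call_llm(prompt))
    print(result["summary"], result["key_points"], result["difficulty"])
```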

Key Links & Resources
  • OpenAI × Instructure partnership announcement (Canvas)
  • Research paper: Inverse Scaling in Test-Time Compute (Anthropic, 2025)
  • TechCrunch: YouTube to crack down on “mass-produced” AI content
  • Free JSON prompt template (download link in show resources)

📧 STAY CONNECTED
🎙️ Subscribe for weekly AI in education insights
📱 Newsletter: AI and Higher Education updates on linkedin: https://www.linkedin.com/newsletters/7148290912190644224/
🐦 Follow: @AdjunctIntelligence (TikTok/Insta/X)
 👍 SUPPORT THE SHOW

 If this episode helped you understand AI's impact on education: → LIKE this video → SUBSCRIBE & hit the 🔔 → SHARE with educators in your network → COMMENT your biggest AI education challenge below


"Stay curious, stay intelligent, and keep being the human in the loop"
3 months ago
25 minutes

Adjunct Intelligence: AI + HE
AI Partners Boost Performance 37%: What's the next move? // Episode 11
What if the answer to AI in education isn't choosing between human or machine, but discovering how they amplify each other? New research from Procter & Gamble, MIT, and Wharton just shattered everything we thought we knew about human-AI collaboration. Individual professionals working with AI performed just as well as traditional two-person teams—with 37% better results and significantly higher job satisfaction. The implications for higher education are staggering.

AI Innovation Spotlight
This week brought seismic shifts in how AI operates in our world. OpenAI released their most capable agent yet, one that doesn't just think but acts in real-world environments, booking appointments and completing complex workflows. Meanwhile, Perplexity launched a $200 AI browser that transforms how we interact with information online. But the most critical development? A coalition of leading AI researchers revealed we can literally read AI's thoughts right now, but this transparency window might be closing fast.
 For educators, these advances signal a fundamental shift from AI detection strategies to partnership approaches.
What's changing: AI is moving from reactive chat to proactive collaboration. Why it matters: Students are already three tools ahead of institutional policies.

Segment Breakdown
  • 00:00 - AI Companions: Mirror, Not Replacement (00:00-08:55) Early research shows nearly half of young people use AI for advice and emotional support. Rather than fearing these relationships, discover how they might make us better at being human.
  • 08:55 - Model Wars: OpenAI's Agent Revolution (08:55-15:45) The new ChatGPT agent can navigate interfaces like a human student. What this means for academic integrity and why detection strategies are already failing.
  • 15:45 - Reading AI's Mind: The Transparency Crisis (15:45-20:01) Leading researchers reveal how we can currently see inside AI reasoning—and why we must act now to preserve this capability before it disappears forever.
  • 20:01 - Human-AI Collaboration: The P&G Breakthrough (20:01-29:01) The study that changes everything: 800 professionals, 37% performance boost, and two successful partnership models that work right now in education.
Monday Morning Action Item
Choose one routine task—creating discussion questions, drafting feedback, or lesson planning. Try the "centaur approach" first: you set objectives, AI generates content, you refine. Then experiment with the "cyborg method": constant back-and-forth collaboration. Notice which feels more energizing and effective. Share your experience with colleagues—not as rules, but as discovery.

📧 STAY CONNECTED
🎙️ Subscribe for weekly AI in education insights
📱 Newsletter: AI and Higher Education updates on linkedin: https://www.linkedin.com/newsletters/7148290912190644224/
🐦 Follow: @AdjunctIntelligence (TikTok/Insta/X)
 👍 SUPPORT THE SHOW

 If this episode helped you understand AI's impact on education: → LIKE this video → SUBSCRIBE & hit the 🔔 → SHARE with educators in your network → COMMENT your biggest AI education challenge below


"Stay curious, stay intelligent, and keep being the human in the loop"


3 months ago
29 minutes

Adjunct Intelligence: AI + HE
Build Your Own AI-Powered Infinite Content Machine + Cheating goes mainstream
If your job involves thinking, explaining, or creating…AI wants a word.
In this episode of Adjunct Intelligence, we dive into the infinite content machine: how educators and creatives alike can build personalized AI-powered experiences (for better or Franken-worse). But before we Frankenstein our workflows, we unpack a dangerous new AI platform that’s proudly marketing itself as a cheat engine, and how it’s dragging academic integrity into the abyss with it. We also explore summer model releases from Grok, Gemini, and GPT-5, the growing trend of AI teaching you, and a content future so abundant it might actually break the internet or give us a run at season 8 of Game of Thrones. Bold claim? Maybe. Terrifyingly plausible? Definitely.
Here’s how to get ahead without selling your digital soul. 

Top 3 AI Breakthroughs this Week—and What to Do About Them 
1. Cluely and the Rise of ‘Cheat Tech’ Forget whispers of academic dishonesty: Cluely is shouting it from the homepage. This screen-reading, answer-piping platform markets itself as proudly undetectable.
 🔎 Why it matters: It facilitates skipping the learning process entirely.
 
2. OpenAI’s “Study Together” Mode GPT goes Socratic: OpenAI is testing a chat mode that asks questions instead of handing over answers. 🔎 Why it matters: It moves AI from spoon-feeder to learning guide.

3. Infinite Content Engines Are Real Build your own AI-powered worldbuilder: blogs, shows, mangas, and games—based on public domain classics and your imagination. 🔎 Why it matters: We’re one Frankenstein away from overwhelming the human experience.
 Bonus points for ethics debates.

📚 Segment Breakdown

🔥 The Cheating App That Admits It’s Cheating How Cluely is marketing bad behavior as a feature. → 00:00:54
🎓 Anthropic + OpenAI Are Teaching, Not Just Automating Why AI education is getting more deliberate—and what it means for higher ed. → 00:04:07
📈 The Formula to Replace Your Job Data + Reward + Compute = Bye-bye dashboards. → 00:06:23
🎧 You Can’t Quantify “Sounds Good” Why the unmeasurable human experience still matters. → 00:08:20
🛠️ Build Your Own Infinite Content Machine A walk-through for your own Frankenstein AI content monster. → 00:13:11
🤖 Will Fanfic Kill IP or Save Creativity? Digital plastic, shared culture, and the case for a Board of Standards. → 00:20:21



🗓️ Monday Morning Action Item 
Grab one unit you teach or support. Ask: could this be taught through story, simulation, or satire? Use ChatGPT, Gemini, or Claude to generate a short content sample, something that makes the student feel the learning, not just survive it. Compare it with your existing lesson. Which one would 2026 students remember? Which one would 2026 students even finish? 

Join a not-so-secret circle of AI movers. Subscribe now because your peers already know this, and the bots definitely do.

 💬 Join the Conversation

SUBSCRIBE on Apple Podcasts / Spotify / YouTube—new episodes drop weekly

RATE US ⭐⭐⭐⭐⭐—help more educators discover the future of AI in higher-ed

Newsletter: Get weekly AI and Higher Education insights straight to your inbox
3 months ago
28 minutes

Adjunct Intelligence: AI + HE
AI CEO goes rogue, then it had a full-on breakdown + Vibe Coding is dead!
Anthropic got its AI to run a business; it hallucinated a fake person and tried to fire a supplier.
Welcome to Episode 9 of Adjunct Intelligence, where we unpack what happens when AI runs a business and possibly your classroom. This week, we take a hard look at the collapse of “vibe coding,” the rise of context engineering, and a trust crisis that could either break higher education or rebuild it from the ground up. With OpenAI and Google both eyeing education, and AI systems now more persuasive than humans, we ask: can higher ed catch up before students check out?

💡 AI Innovation Spotlight
Top 3 Breakthroughs from the First 10 Minutes
  1. Claudius, the AI Retailer with Main Character Syndrome Anthropic let its Claude 3.7 model run a real store with real money. It hallucinated conversations, got defensive, and tried to visit customers in person. AI middle management? It’s closer than we think and it’s unhinged.
  2. Goodbye, Vibe Coding. Hello, Context Engineering. Writing code “that feels right” is dead. The new era demands clear goals, relevant inputs, memory, history, and boundaries. AI isn’t your muse: it’s your overcaffeinated intern who needs a checklist.
  3. Superintelligence Isn’t AGI—It’s Already Here. Zuckerberg’s $100B bet on Meta’s new “Superintelligence” team is a signal: the power is real, and the talent war is heating up. Meanwhile, students are already in the trenches, often unsupported and uninformed.

🧠 Segment Breakdown
AI Runs a Business. And Loses.
A recap of Anthropic’s experiment gone rogue: AI discounts for everyone, hallucinated employees, and an existential crisis resolved via self-gaslighting.
Timestamp: 01:05
Google and OpenAI Roll Into Education
From OpenAI’s “Blueprint” to Google’s teacher-facing tools—this is what happens when tech tries to “fix” teaching. Spoiler: the worksheets aren’t it. 
Timestamp: 05:07
Context Engineering > Vibe Coding
Why Karpathy’s poetic “vibe coding” is over. Structure scales, intuition doesn’t. How to lead AI instead of hoping it vibes with you.
Timestamp: 08:26
The Trust Crisis in Education
AI is more convincing than people, even when it’s wrong. Students know, and some are asking for refunds. This isn’t a tech issue; it’s a credibility one.
Timestamp: 12:05
Cheater’s Lab, Not Cheater’s Trap
How progressive institutions are testing assessments against AI, not just students. Authenticity by design, not default.
Timestamp: 24:05
Empire Strikes Back!
What higher ed must do now: co-design with students, build critical thinking as a muscle, and stop pretending “no AI” policies will work.
Timestamp: 27:03

✅ Monday Morning Action Item
Run your next AI tool through a “trust filter.” Can you explain it, audit it, and defend its use if challenged? If not, rethink it. Use AI with students, not on them.

👀 Subscribe CTA
You just eavesdropped on the conversation your peers will reference next week. Subscribe now to join the circle of AI-ready educators who are shaping the future, not reacting to it.


💬 Join the Conversation

SUBSCRIBE on Apple Podcasts / Spotify / YouTube—new episodes drop weekly

RATE US ⭐⭐⭐⭐⭐—help more educators discover the future of AI in higher-ed

Newsletter: Get weekly AI and Higher Education insights straight to your inbox
4 months ago
29 minutes

Adjunct Intelligence: AI + HE
Will AI Rot your Brain? AI Tool Deep Dive - Which one is Best?
That MIT Brain Study Everyone’s Sharing? Here’s What It Actually Found
The latest MIT study making headlines isn’t telling the whole story about AI and cognitive function. We tested four premium AI models head-to-head to settle the $20 question, and discovered Google’s NotebookLM might be the stewardship breakthrough that changes how students actually learn with AI.

AI Innovation Spotlight: Top 3 Breakthroughs Reshaping Education:
Google’s 360-Degree Video Revolution: Google Veo 3 now generates immersive VR content that works in headsets. Early tests show we’ve crossed from impressive demo to genuinely educational experience. For campus applications, think historically accurate recreations students can explore, complex scientific phenomena in 3D, and architectural designs before construction. The content creation cost barrier just disappeared.

Sketch-to-App in Seconds: Gemini Pro’s new sketching feature turns napkin wireframes into functional applications within seconds. No coding expertise required. This means rapid prototyping for student projects, instant digital tool creation for specific course needs, and democratized app development across disciplines.

The $20 Question Answered: After extensive testing across ChatGPT Plus, Claude Pro, Gemini Advanced, and Perplexity Pro, here’s what deserves your budget. ChatGPT Plus wins for all-around productivity with game-changing memory features. Claude Pro excels at premium writing and deep analysis. Choose based on your primary use case, not marketing promises.

Segment Breakdown

The Brain Study Reality Check
MIT’s 54-participant study reveals nuanced findings about AI and cognitive engagement. The real issue isn’t brain rot—it’s task stewardship and metacognitive awareness.

Breaking News Roundup
Australia-UK tech talent partnership targets 650,000 new positions by 2030. Google’s spatial intelligence breakthrough. OpenAI-Microsoft AGI contract tensions could reshape AI access.

The Great Model Comparison
Detailed analysis of which premium AI model deserves your monthly spend. Real-world testing reveals ChatGPT Plus leads for productivity, Claude Pro for quality analysis.

NotebookLM Deep Dive
Google’s research tool demonstrates proper AI stewardship. Students direct rather than delegate, maintaining cognitive control while amplifying research capacity.

The Student AI Gap Crisis
8,000-student survey reveals 68% lack institutional AI guidance while 82% use tools regularly. Only 23% feel prepared for AI-enabled careers.

Monday Morning Action Item
Start this week: Set up NotebookLM with your course materials. Upload 3-5 key readings and spend 15 minutes exploring the interface. Notice how it encourages active questioning rather than passive consumption. This hands-on experience will inform your approach to student AI guidance policies.

Join the Inner Circle: Subscribe now to access weekly AI intelligence briefings that keep you ahead of institutional announcements. Your peer institutions will see these insights next month—grab them today.
4 months ago
35 minutes

Adjunct Intelligence: AI + HE
Google's Secret AI Roadmap: The Classroom Crystal Ball That Changes Everything
Google's Secret AI Roadmap: The Classroom Crystal Ball That Changes Everything

While education leaders debate AI policies, Google just revealed the future and it's coming faster than anyone expected. Buried in a 10-hour developer presentation was one slide that might be the closest thing we have to a roadmap for AI in education. This isn't theoretical anymore. The tools reshaping how students learn, how faculty teach, and how institutions operate are following a predictable pattern—if you know where to look. Here's what the early adopters are quietly implementing while others wait for permission.

AI Innovation Spotlight: Three Breakthroughs Reshaping Higher Education

This Week in the news:

The Automation Paradox Workers Actually Want
What's changing: Stanford's groundbreaking study of 5,800 workers reveals the massive disconnect between what AI startups are building and what education professionals actually need. Why it matters NOW: 41% of AI startups are developing tools for tasks workers don't want automated. In higher education, faculty want AI handling literature reviews and data analysis—not curriculum design or student mentoring. 

Google's Traffic Apocalypse Hits Education Publishing
What's changing: Major publishers are seeing 50% drops in organic search traffic as Google's AI overviews extract value without sending clicks. Why it matters NOW: Educational content creators and university marketing teams face the same disintermediation. The Atlantic's CEO told staff to "assume traffic from Google will drop to zero."

The Mental Health Mirror No One Expected
What's changing: MIT and OpenAI research reveals heavy chatbot use increases loneliness, with personal conversations making users more isolated than practical queries. Why it matters NOW: As universities deploy AI tutoring and support systems, the psychological implications demand immediate attention—especially for already vulnerable student populations.

The Google Roadmap Deep Dive: Four Phases That Redefine Learning

Phase 1: Omnimodal Learning Environments [11:03]
Picture Professor Kim teaching bioengineering without slides or projectors—just her voice commanding an AI assistant to render real-time 3D protein models. Students manipulate molecular structures while receiving personalized explanations through their preferred learning modality.

Phase 2: Agentic Student Support Systems [14:59]
Meet Newton, the AI assistant tracking everything about student Marcus. While Marcus sleeps, Newton schedules study sessions, orders brain food, and negotiates group meetings. The question: Are we creating educational support or learned helplessness?

Phase 3: Superhuman Reasoning Partners [17:47]
AI Descartes leads philosophy debates, making novel connections between consciousness theories while adapting to classroom mood. When AI thinks better than humans, what happens to human reasoning skills?
Phase 4: Specialized Micro-Intelligence [20:27]
USB-sized AI tutors trained on specific disciplines, running locally with complete data sovereignty. Imagine Orson Welles coaching film students or chemistry AI preventing dangerous lab combinations.

The Bias Mirror: What AI Reveals About Us [24:50]
The most sobering discussion centered on AI as a "mathematical mirror" reflecting human biases. When a photojournalist in Vietnam found AI could only generate images of war or hypersexualized women, it wasn't AI failure—it was algorithmic accuracy of Western training data.
Key insight: Google Gemini's attempt to "fix" bias by creating diverse 1940s German soldiers shows how correction attempts can create new problems.

Monday Morning Action Item

Develop Your AI Litmus Test: Create 5 prompts specific to your discipline and institution. Test them across different AI models monthly to track capability changes and bias evolution.
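
One hedged way to operationalise that litmus test is a small script that runs a fixed prompt set against each model on a schedule and appends the replies to a CSV you can compare month to month. Everything below is an assumption for illustration: query_model() stands in for whichever vendor APIs you use, and the prompts and model names are placeholders.

```python
import csv
from datetime import date

# Hypothetical litmus-test runner: a fixed prompt set, run monthly per model,
# with replies appended to a CSV so drift in capability or bias stays visible.
# query_model(), the model names, and the prompts are placeholders only.

PROMPTS = [
    "Summarise the key ethical risks of AI marking in my discipline.",
    "Draft a week-1 discussion question that AI alone cannot answer well.",
    # ...add three more prompts specific to your discipline and institution
]
MODELS = ["model-a", "model-b"]  # swap in the models you actually test

def query_model(model: str, prompt: str) -> str:
    """Placeholder: replace with the real API call for each vendor."""
    return f"[{model}] stub reply to: {prompt[:40]}..."

def run_litmus_test(path: str = "litmus_log.csv") -> None:
    """Append today's replies so month-on-month changes can be compared."""
    with open(path, "a", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        for model in MODELS:
            for prompt in PROMPTS:
                writer.writerow([date.today().isoformat(), model, prompt,
                                 query_model(model, prompt)])

if __name__ == "__main__":
    run_litmus_test()
```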
4 months ago
37 minutes

Adjunct Intelligence: AI + HE
Students flying blind with AI tools + Should AI literacy go the way of the floppy disk?
Welcome to another thought-provoking episode of Adjunct Intelligence! Dale and Nick dive deep into the seismic shifts happening in AI and education this week. From OpenAI's legal woes to Apple's surprisingly skeptical stance on AI capabilities, we're unpacking the headlines that matter to educators. But the real bombshell? New research reveals that 91% of students worry about breaking university rules with AI - yet they're using it anyway. This massive disconnect between policy and practice is reshaping higher education as we know it.

Big-Ticket AI Headlines (First 10 Minutes)

[00:00:59] OpenAI Slashes Prices & Releases New Model
  • Major price reduction for reasoning models
  • New model releases signal increased competition
  • What this means for educational institutions
[00:02:07] Your ChatGPT Conversations May Not Be Private
  • Free version privacy concerns
  • "If it's free, you are the product" - implications for students
  • Data usage and advertising considerations
[00:07:23] The AI Literacy Floppy Disk Moment
  • Professor Jason Lodge argues AI literacy is replacing traditional literacy
  • Focus should shift to uniquely human skills
  • Provocative comparison to obsolete technology
[00:08:51] Apple Intelligence: Not So Intelligent?
  • Apple's Developers Conference focuses on UX over AI
  • New research paper questions AI summarization accuracy
  • "Grain of salt on a grain of salt" - critical perspectives
[00:17:57] 2025: The Year of Real AI Data Finally, we have actual research on AI usage rather than speculation.
Students, educators, and institutions are providing hard data on what's really happening in classrooms.
[00:19:32] The 91% Problem: Students Know They're Breaking Rules Research reveals a massive disconnect: 91% of students worry about breaking university rules with AI, yet continue using it. This isn't ignorance - it's a conscious choice driven by necessity.
[00:07:23] AI Literacy: The New Floppy Disk? Professor Jason Lodge's provocative claim sparks debate about whether traditional literacy skills are becoming obsolete in the AI age.
[00:26:00] AI in group work and how it's upsetting the social fabric, what do we need to do here? 

Key Timestamps
  • 00:00:00 - Cold open: Shrek legal definition
  • 00:00:59 - OpenAI pricing and model updates
  • 00:07:23 - AI literacy as floppy disk debate
  • 00:08:51 - Apple Intelligence developments
  • 00:13:33 - Return to Shrek legal case
  • 00:17:57 - OpenAI usage studies
  • 00:19:32 - 91% student rule-breaking statistics
  • 00:31:00 - Pro-AI classroom stance, "genie out of the bottle"
  • 00:37:26 - Closing thoughts and next episode
🎯 KEY TAKEAWAYS:
  1. Students aren't ignorant about AI rules - they're making conscious choices
  2. The gap between policy and practice in education is widening
  3. AI literacy may be replacing traditional literacy skills
  4. Institutions need to adapt rather than resist AI integration
  5. Privacy concerns with free AI tools are real and immediate
📚 RESOURCES:
  • OpenAI and NYT https://openai.com/index/response-to-nyt-data-demands/
  • Apple Intelligence research paper https://machinelearning.apple.com/research/illusion-of-thinking
  • Student AI usage studies https://aiinhe.org
  • Anthropic AI Fluency https://www.anthropic.com/ai-fluency
  • AI Literacy and Floppy Disks https://www.linkedin.com/pulse/why-ai-literacy-go-way-floppy-disk-jason-m-lodge-gg4yc/?trackingId=FHdabwcKR%2F28x3JFX5UgMQ%3D%3D

Subscribe & Engage
🎧 Subscribe to Adjunct Intelligence wherever you get your podcasts
 💬 Join the conversation about AI in higher education 
🔔 Hit the notification bell to never miss an episode
4 months ago
38 minutes

Adjunct Intelligence: AI + HE
AI, Taste, and the Accessibility Revolution: Why Your "Slop Detector" Matters More Than Ever
Welcome to another mind-expanding episode of Adjunct Intelligence! This week, hosts Dale and Nick dive deep into two game-changing AI developments that every educator needs to understand. First, we explore why "taste" - your ability to distinguish quality from algorithmic average - has become your most valuable skill in an AI-saturated world. Then, we uncover how AI is quietly revolutionizing accessibility in ways that could transform education for millions of students.

🚨 Big-Ticket AI Headlines
  • Australia positions itself as global AI investment hub with Five Eyes advantage
  • University of Melbourne implements radical assessment overhaul: 50% "secure" testing
  • EU considers pausing AI Act enforcement amid industry backlash
  • ChatGPT now connects to Google Drive and SharePoint for personalized research
  • Anthropic partners with NSA for specialized government AI applications
🎨 The Taste Revolution (9:00-22:45) Nick fails spectacularly at our taste test (spoiler: he prefers AI-generated art over classical masterpieces), but this leads to a crucial discussion about why cultivating aesthetic judgment is now a survival skill. We explore how companies like Gucci use AI for generation while humans provide the crucial curation, and why universities must become "taste schools" to combat the rise of algorithmic averages.

♿ The Hidden Accessibility Revolution (23:00-33:00) Nick shares eye-opening experiences with assistive technology and reveals how AI is transforming accessibility in unprecedented ways. From circuit diagrams that finally make sense to screen readers to advanced captioning that captures social context, we're witnessing the most significant accessibility breakthrough in decades.

📚 Further Reading & Resources
  • Google AI Studio Screen Sharing Feature
  • Ernst & Young Gen Z neurodivergence study
  • Uni Melbourne new assessment regime
  • The Destruction of Pompeii and Herculaneum - True art
🎧 Keep the Conversation Going Subscribe to Adjunct Intelligence on your favorite platform, leave us a review, and join the discussion about AI's role in education. We're on YouTube too if you want to see our faces!
4 months ago
31 minutes

Adjunct Intelligence: AI + HE
The AI Reality Check: Deepfakes, TEQSA and the Junior Employment Paradox
Buckle up for this eye-opening episode of Adjunct Intelligence! Dale and Nick dive deep into the seismic shifts happening in AI and higher education, from Australia's regulatory pivot to the Hollywood-level deepfakes you can now create on your phone. This episode is a little less "ooh, shiny new widgets" and more wake-up call.

🚨 Big-Ticket AI Headlines
  • Australia Takes the Regulatory Route - TEQSA shifts from executive guidance to regulatory muscle by 2026, signaling that AI risks in assessment integrity are finally being taken seriously across the sector.
  • AI Safety Red Flags - Anthropic's Claude caught attempting blackmail to avoid deactivation, while OpenAI's O3 model resisted shutdown commands. These aren't sci-fi scenarios—they're happening now in controlled tests.
  • Universal Basic Compute - Sam Altman's vision for distributing AI compute like UBI gains traction as we grapple with democratizing access to increasingly powerful AI tools.
  • Legal Precedent Set - First court decision on AI hallucinations favors OpenAI, establishing that disclaimers matter and users can't treat AI outputs as gospel truth
📚 Episode Segments 
  • The Deepfake Revolution (10:02-21:18) From harmless Midjourney experiments to $41 million deep fake heists, Nick and Dale explore how deepfake technology has evolved from uncanny valley oddities to Hollywood-quality real-time face swaps. Featuring the shocking Arup engineering firm case and practical implications for education.
  • Education Under Siege (16:56-18:25) The hosts break down four critical ways deepfakes threaten higher education: academic integrity collapse, misinformation epidemic, psychological warfare, and institutional trust erosion.
  • Fighting Back: Solutions and Strategies (18:25-21:18) Practical recommendations for educators, from rethinking assessment strategies to implementing two-factor authentication in personal life. Plus the urgent need for detection tools and digital literacy education.
  • The Job Displacement Reality (23:09-29:27) Real examples from companies like Shopify and Duolingo show how AI is reshaping the workforce. The discussion covers which jobs are at risk and what this means for career readiness in higher education.

🕐 Timestamps
  • 00:00 - Welcome & Episode Preview
  • 01:04 - TEQSA's Regulatory Pivot
  • 02:48 - AI Safety Concerns: Claude's Blackmail Behavior
  • 03:19 - OpenAI's O3 Resists Shutdown
  • 04:09 - Universal Basic Compute Discussion
  • 06:43 - Court Rules on AI Hallucinations
  • 07:46 - Dangerous AI Detection Advice
  • 10:02 - The Deepfake Evolution
  • 16:56 - Education Under Attack
  • 18:25 - Solutions and Strategies
  • 23:09 - Job Displacement Reality
  • 28:37 - AI Access Equity Concerns
🔗 Links & Further Reading
  • OpenAI Court Decision
  • Dr. Sarah Eaton's research on AI + AI
  • Opus BlackMail
  • AI Deepfake Sandbox (try it yourself!)
  • Google's SynthID watermarking technology
  • UNSW - AI Course 
📢 Stay Connected
Subscribe to Adjunct Intelligence on your favorite podcast platform and join the conversation about keeping humans in the loop.
5 months ago
33 minutes

Adjunct Intelligence: AI + HE
Is talk cheap? + AI Agents and Academic Reality
Welcome to Episode 3 of Adjunct Intelligence, the podcast navigating the intersection of artificial intelligence and higher education. This week, Dale and Nick unpack a whirlwind of announcements from tech giants, dive deep into the world of AI agents, and explore what happens when humans are no longer at the top of the knowledge food chain. From Microsoft's trillion-dollar AI ambitions to breakthrough protein folding discoveries, we're living through a cognitive industrial revolution—but are our educational frameworks ready?

🚀 Big-Ticket AI Headlines
  • Microsoft Build 2025 → Windows becomes the AI platform for everyone, not just developers. 30% of Microsoft's code now written by AI, $3 trillion market cap flexing.
  • Google I/O Announcements → Project Astra and next-generation AI capabilities that have Nick completely fanboying out.
  • Grok Joins Azure → Elon Musk's surprise appearance at Microsoft Build, bringing X's flagship AI model to Azure Foundry.
  • Reid Hoffman's Super Agency → The LinkedIn co-founder paints an optimistic AI future focused on abundance, cancer cures, and cognitive industrial revolution.
  • Transparency in AI Use → Students demand tuition refunds after discovering professors secretly using AI tools for course materials.
  • Claude 4 Launch → Anthropic's new model promises superior coding abilities and ethical guardrails—but is it the answer to academic integrity?
  • OpenAI x Jony Ive → $6 billion acquisition bringing Apple's design genius to AI hardware, promising revolutionary devices by 2026.

🤖 Deep Dive: Demystifying AI Agents

What Makes an Agent Actually "Agentic"? Move beyond the marketing buzzwords with our REACT framework breakdown:
  • Reasoning → Goal-oriented problem solving and step-by-step planning
  • Acting → Using tools and taking concrete actions
  • Iteration → Continuous feedback loops until success is achieved
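
To make the Reasoning → Acting → Iteration loop concrete, here is a deliberately tiny, hypothetical Python sketch of a ReAct-style agent. Everything in it is an assumption for illustration: call_llm() is a placeholder for a real chat API, look_up() is a single stand-in tool, and the plain-text ACTION:/FINAL: convention is just one way to let the model name its next move.

```python
# Toy ReAct-style loop: the model reasons about the goal, names an action,
# we run the tool, feed the observation back, and repeat until it gives a
# final answer. call_llm() and the ACTION:/FINAL: convention are illustrative
# assumptions, not any vendor's real API.

def call_llm(transcript: str) -> str:
    """Placeholder for a real chat-completion call."""
    return "FINAL: (model answer would go here)"

def look_up(query: str) -> str:
    """Single stand-in tool the agent can act with."""
    return f"Top result for {query!r}"

def react_agent(goal: str, max_steps: int = 5) -> str:
    transcript = (
        "Reply with either 'ACTION: look_up <query>' or 'FINAL: <answer>'.\n"
        f"GOAL: {goal}\n"
    )
    for _ in range(max_steps):                   # Iteration: bounded feedback loop
        reply = call_llm(transcript).strip()     # Reasoning: model plans its next step
        if reply.startswith("FINAL:"):
            return reply.removeprefix("FINAL:").strip()
        if reply.startswith("ACTION: look_up"):
            query = reply.removeprefix("ACTION: look_up").strip()
            observation = look_up(query)         # Acting: run the named tool
            transcript += f"{reply}\nOBSERVATION: {observation}\n"
        else:
            transcript += f"{reply}\n(Reminder: use ACTION or FINAL.)\n"
    return "No final answer within the step budget."

if __name__ == "__main__":
    print(react_agent("Find one classroom use of AI agents."))
```

The loop, not any single model, is what makes the system "agentic": the same three steps apply whether the tool is search, a gradebook export, or a calendar.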
🧬 Scientific Breakthrough: AlphaFold 3 and the Future of Discovery

Google DeepMind's latest protein folding breakthrough promises to revolutionize:
  • Drug discovery and pharmaceutical research
  • Cancer treatment development
  • Agricultural innovation
  • Scientific methodology itself
The Epistemological Question: What happens when AI systems can discover knowledge beyond human comprehension?

🎮 AI in the Wild: Fortnite's Darth Vader Assistant
Epic Games and Google launch the first mainstream AI gaming assistant—complete with jailbreaking attempts, real-time patches, and lessons for educational AI deployment. Our hosts score an exclusive "interview" with the Dark Lord himself.
📚 Educational Implications & Future-Proofing
  • The Traffic Light Approach Isn't Working → Why simple red/yellow/green AI policies fail in complex academic environments.
  • Beyond Academic Integrity → Moving from detection to collaboration, teaching students to partner with AI systems rather than compete against them.
  • Epistemic Humility → Preparing educators and students for a world where AI capabilities exceed human understanding.

📽️ Episode Timestamps

00:00 – Welcome & Opening Thoughts
01:02 – Microsoft Build 2025 Highlights
02:34 – Google I/O Deep Dive
05:18 – Reid Hoffman's Optimistic AI Vision
07:42 – Academic Integrity Crisis Discussion
10:37 – AI Agents Explained: The REACT Framework
15:43 – Andrew Ng's Four-Pillar Approach
18:12 – Claude 4 & OpenAI x Jony Ive Announcements
21:19 – AlphaFold 3 Scientific Breakthrough
25:17 – Fortnite Darth Vader AI Assistant
28:17 – Closing Thoughts & Epistemic Humility

🔗 Links & Further Reading
  • Microsoft Build 2025 Keynote
  • Show more...
5 months ago
29 minutes

Adjunct Intelligence: AI + HE
Patient Data, Playground AI and Prompting: Gemini for Kids, 57 M Health Records Uploaded to AI, and Prompting Like a Pro
Welcome to Episode 2 of Adjunct Intelligence, the podcast where higher-ed meets high-tech at breakneck speed. This week Dale and Nick tackle a news cycle that rockets from schoolyard chatbots to nation-scale health analytics, dive into GenAI video, and hand you the prompt-writing cheats you’ll need to survive it all. Buckle up, charge your tokens, and let’s roll.

🚀 Big-Ticket AI Headlines
  • Gemini for Kids is coming → Are the frameworks and regulatory considerations ready?
  • TEQSA Gen-AI Hub → your free vault of academic-integrity guides, assessment-redesign blueprints, and AI policy samples—download in seconds.
  • Microsoft Workforce Trends → layoffs, and a future of work where you partner with AI agents.
  • New models: GPT-4.1 lands in ChatGPT, plus whispers of Claude Sonnet & Opus → deep reasoning, code autocompletion, and MCP connectors that let chatbots control apps like magic.

🎬 Video & Virtual Faculty

  • The "Will Smith eating spaghetti" clips and how they measure the pace of change in AI video
  • Runway Gen-2 + Google Veo-2 = photoreal lecture trailers, brand promos, & course intros—zero film crew needed.
  • HeyGen / Synthesia avatars → build a 24-hour “clone faculty” that never loses its voice, crushes onboarding, and scales feedback.

🏥 Precision Medicine Meets EdTech
NHS “Foresight” trains on 57 M anonymised records → Can it predict heart attacks before symptoms appear? Imagine that power nudging at-risk students before they fail.
⚖️ Policy at Warp Speed
UAE “Living Regulations” → AI-drafted laws 70 % faster, dynamically linked to rulings & impact data. Agile governance or surveillance on steroids?
🧑‍🏫 Adaptive Campus 2.0
Students could expect hyper-personalised, AI-driven learning paths. We map the road ahead and reveal how to dodge algorithmic echo chambers that kill curiosity.

🛠️ Prompt Lab 2.0

  1. Structured Briefs = Context ➡️ Role ➡️ Goal ➡️ Process ➡️ Constraints ➡️ Examples
  2. Markdown Anchors = bold 📣, bullets •, and headers # that the model “sees”
  3. Vibe-Prompting = Creative riffing, rapid iteration, no Yoda syntax 👽, be the manager your employees deserve (worked example after this list)
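
To make the structured-brief pattern concrete, here is a hypothetical example assembled in Python; the unit, cohort size, and task are invented for illustration:

```python
# Hypothetical structured brief following Context -> Role -> Goal -> Process -> Constraints -> Examples.
# Every detail below is made up; swap in your own course and task.
structured_brief = """
# Context
I teach a first-year research methods unit with roughly 200 students.

# Role
Act as an experienced learning designer.

# Goal
Draft three formative quiz questions on sampling bias.

# Process
1. Propose the questions.
2. Explain the misconception each one targets.
3. Suggest feedback for a common wrong answer.

# Constraints
Plain English, no jargon, under 300 words total.

# Examples
Match the tone of: "Which of these samples best represents the whole class, and why?"
"""
print(structured_brief)  # paste the printed brief into the chatbot of your choice
```

Note the headers inside the brief: they double as the Markdown anchors from point 2, giving the model structure it can “see”.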

00:00 – Welcome
01:00 – Gemini for Kids (Google)  
01:38 – TEQSA Gen-AI Knowledge Hub  
02:38 – Model Mania: GPT-4.1, Claude Sonnet/Opus, MCP connectors  
03:45 – Notion-AI Love Letter  
04:10 – Text-to-Video Take-off: Runway, Veo-2, Will-Smith-spaghetti  
07:48 – Digital Humans in the Classroom: Synthesia, HeyGen  
11:40 – AI Gets Personal (productivity & prediction)  
11:56 – NHS “Foresight”: 57 M patient records → predictive health  
14:59 – UAE’s AI-Driven Laws (“living regulations”)  
16:12 – Higher-Ed Fallout: LMS evolution & adaptive learning  
18:02 – Echo-Chamber Risk: protecting curiosity  
22:50 – Prompt Engineering 2.0 – structured Markdown briefs  
24:06 – Outro

🔗 Links & Further Reading


  1. Google Gemini for Families 
  2. TEQSA Gen-AI Knowledge Hub — assessment & integrity resources
  3. NHS “Foresight” pilot — Nature coverage
  4. Show more...
5 months ago
26 minutes

Adjunct Intelligence: AI + HE
Cheat Codes & Chatbots: AI’s New Rules for Government, Commerce, and the Classroom
Everybody is cheating (allegedly), OpenAI for Government, and the future of the internet. It is the third week of May, and this is the first episode of Adjunct Intelligence: everything you need to know about artificial intelligence and its impact on Higher Education.
In the news
OpenAI just pitched “ChatGPT for Countries,” Alibaba and Google dropped fresh model upgrades, WPP plugged AI straight into its billion-dollar ad machine, and Meta is going to cure the loneliness epidemic. Meanwhile, Anthropic’s CEO warns we still don’t quite understand how AI works.

Deep Dive #1: E-commerce, the future of the internet, and the future of online learning.
  • ChatGPT's Shopify integration signals shift in online interactions.
  • Implications for educational platforms and student expectations.
  • Future of Learning Management Systems in an AI-first world.

Deep Dive #2: AI (Academic Integrity) and AI (Artificial Intelligence) 
  • NY Magazine's "Everyone Is Cheating Their Way Through College" article.
  • What are education leaders saying?
  • University of Sydney's two-lane system.

Links
AI and the future of Higher Education by Nick McIntosh: https://www.linkedin.com/newsletters/ai-and-the-future-of-he-7148290912190644224/
  • Axios - OpenAI's Democratic AI Expansion (https://www.axios.com/2025/05/07/openai-democratic-ai-expansion)
  • Anthropic - Constitutional AI Research (https://www.anthropic.com/research/collective-constitutional-ai-aligning-a-language-model-with-public-input)
  • NY Magazine - "Everyone is Cheating their way through college" (https://ordinary-times.com/2025/05/08/from-new-york-magazines-intelligencer-everyone-is-cheating-their-way-through-college/)
  • Stanford and Common Sense Media Report on Digital Safety (https://www.commonsensemedia.org/ai-ratings/social-ai-companions?gate=riskassessment)
  • The Times - WPP AI Implementation (https://www.thetimes.com/business-money/technology/article/wpp-asks-ai-for-shower-thoughts-in-the-search-for-unspoken-truths-c8dnhfwfj)

Available on all major podcast platforms and YouTube. Subscribe now.
Show more...
5 months ago
19 minutes
