People are often described as the largest asset in most organisations. They are also the biggest single cause of risk. This podcast explores the topic of 'human risk', or "the risk of people doing things they shouldn't or not doing things they should", and examines how behavioural science can help us mitigate it. It also looks at 'human reward', or "how to get the most out of people". When we manage human risk, we often stifle human reward. Equally, when we unleash human reward, we often inadvertently increase human risk.
To pitch guests, please email guest@humanriskpodcast.com
What if the biggest AI risk isn't bias or data, but human behaviour itself? How might AI affect the people using it, and what does that mean for how we design solutions and deploy the technology?

Episode Summary
On this episode, I'm joined by returning guest Richard Chataway, a behavioural science expert and strategist, to explore how we can design AI systems that truly work for humans. Richard brings a unique lens to the conversation, combining insights from advertising, government policy, and behavioural science to unpack the human drivers that shape how we build and interact with AI. We discuss everything from cognitive biases and persuasive tech to the ethics of design, and how these hidden forces influence our relationship with intelligent machines.
During our conversation, Richard explains the importance of context and behavioural frameworks in making AI more ethical, effective, and human-centric. We explore real-world examples of effective and ineffective design, examining where intentions diverge from outcomes and what can be done to address these discrepancies.
Richard shares fascinating insights from his book "The Behaviour Business" and his experience in both the public and private sectors, offering a practical yet thought-provoking look at what it really means to design for behaviour in the age of AI. Whether you’re an AI sceptic, enthusiast, or simply curious about how technology intersects with human behaviour, this episode offers a compelling exploration of the invisible levers shaping our digital lives. From nudging with intent to avoiding manipulation, Richard helps us understand how behavioural science can make the future of AI more aligned with our values and less prone to unintended consequences.
Guest Biography
Richard is a Behavioural Scientist, Author and Podcaster who heads up the Behaviour Change Team at Concentrix, a Fortune 500 global technology and transformation company working with around 2,000 brands in over 70 countries. He is also the founder of Communication Science Group and a former board member of the Association for Business Psychology.
His book The Behaviour Business is a bestselling guide to deploying Behavioural Science within organisations to solve a wide range of problems.
AI-Generated Timestamped Summary
00:00 – Intro: Designing AI for humans
01:25 – Welcome back Richard Chataway
03:15 – Behavioural science meets AI
05:20 – Why we lie more to bots
07:05 – Judgement, distance & dishonesty
09:10 – When design invites bad behaviour
11:30 – Fraud as a design problem
13:40 – The "Computer says no" effect
15:25 – When neutrality helps disclosure
17:15 – The empathy paradox
19:05 – Data bias & unequal outcomes
21:30 – When to keep humans in the loop
23:40 – Behavioural science as AI insurance
26:00 – When efficiency erodes trust
28:20 – Friction, fairness & feedback
30:05 – AI and the frontline worker
33:00 – Redefining jobs, not removing them
36:10 – New skills for an AI world
39:00 – Beyond efficiency: meaningful work
41:45 – Leadership: ask "should we automate?"
44:10 – Practical design principles
47:30 – The myth of full automation
50:20 – Augment, don't replace
53:00 – Case studies from Concentrix
56:40 – Making AI ethics actionable
59:20 – The next five years of human-centred AI
1:02:00 – Closing reflections
1:04:30 – Where to find Richard
1:06:00 – Outro & related episodes