The EPAM Continuum Podcast Network
EPAM Continuum
174 episodes
3 weeks ago
When it comes to drug discovery and development, scientists are busy furrowing their lab-goggled brows trying to understand what’s real and what’s hype about the power and potential of AI. This *Resonance Test* conversation perfectly dramatizes the situation. In this episode, Emma Eng, VP of Global Data & AI, Development at Novo Nordisk, and scientist and strategist Chris Waller provide a candid view of drug development in the AI era. “We're standing on a revolution,” says Eng, reminding us that “we've done it so many other times” with the birth of the computer and the birth of the internet. It’s prudent, she cautions, not to rush to judgment guided by either zealots or skeptics. Waller says, of the articles about AI and leadership in *Harvard Business Review,* one could do “a search and replace ‘AI’ with any other technological change that's happened in the last 30 years. It's the same kind of trend and processes and characteristics that you need in your leadership to implement the technology appropriately to get the outcomes that you're looking for.” Which means, for pharma, much uncertainty and much experimentation. “I think experimentation is good,” says Eng, who adds that we always need to keep track of what we’re experimenting on. She says that the word “experimentation” can “sound very fluid” but in fact, “It's a very structured process. You set up some very clear objectives and you either prove or don't prove those objectives.” Waller references the various revolutions (throughput screening, combinatorial chemistry, data, and analytics) that pharma has seen and says: “We've all held out hope for each and every one of these revolutions that the drug discovery process is going to be shrunk by 50% and cost half as much. And every time we turn around, it's still 12 to 15 years, $1.5 to $2 billion.” Will AI make the big difference, finally? “Maybe we need to be revolutionized as an industry,” she says.
“It can be hard to make much of a difference as long as there are few big players.” Just a few big players, she says, is “the nature of pharma.” Of course, our scientists are measured in their assessments about industry change. After all, as Waller says, the systems involved—the human body, the regulatory environment, the commercial ecosystems—are all “super-complicated.” Eng notes that an important side effect of the AI hype is corporate interest in data. “Now it's much easier to put that topic on the table saying, ‘If you want to do AI, you need to take care of your data and you need to treat it like an asset.’” Listen on as they test topics such as regional and regulatory challenges in AI adoption, change management, and future tech and long-term impact (watch out for quantum, everyone!). In the end, Eng returns to the idea of revolutions. “You think you want so much change in the beginning which you don't get because it takes time,” says Eng. This makes us underestimate what will happen later. Such a far-seeing mindset matters, she says, because “these technology shifts will have a large impact on the long term.” Host: Alison Kotin Engineer: Kyp Pilalas Producer: Ken Gordon
Business
All content for The EPAM Continuum Podcast Network is the property of EPAM Continuum and is served directly from their servers with no modification, redirects, or rehosting. The podcast is not affiliated with or endorsed by Podjoint in any way.
The Resonance Test 90: Responsible AI with David Goodis and Martin Lopatka
The EPAM Continuum Podcast Network
32 minutes 48 seconds
1 year ago
Responsible AI isn’t about laying down the law. Creating responsible AI systems and policies is necessarily an iterative, longitudinal endeavor. Doing it right requires constant conversation among people with diverse kinds of expertise, experience, and attitudes. Which is exactly what today’s episode of *The Resonance Test* embodies. We bring to the virtual table David Goodis, Partner at INQ Law, and Martin Lopatka, Managing Principal of AI Consulting at EPAM, and ask them to lay down their cards. Turns out, they are holding insights as sharp as diamonds. This well-balanced pair begins by talking about definitions. Goodis mentions the recent Canadian draft legislation to regulate AI, which asks “What is harm?” because, he says, “What we're trying to do is minimize harm or avoid harm.” The legislation casts harm as physical or psychological harm, damage to a person's property (“Suppose that could include intellectual property,” Goodis says), and any economic loss to a person. This leads Lopatka to wonder whether there should be “a differentiation in the way that we legislate fully autonomous systems that are just part of automated pipelines.” What happens, he wonders, when there is an inherently symbiotic system between AI and humans, where “the design is intended to augment human reasoning or activities in any way”? Goodis is comforted when a human is looped in and isn’t merely saying: “Hey, AI system, go ahead and make that decision about David, can he get the bank loan, yes or no?” This nudges Lopatka to respond: “The inverse is, I would say, true for myself. I feel like putting a human in the loop can often be a way to shunt off responsibility for inherent choices that are made in the way that AI systems are designed.” He wonders if more scrutiny is needed in designing the systems that present results to human decision-makers. 
We also need to examine how those systems operate, says Goodis, pointing out that while an AI system might not be “really making the decision,” it might be “*steering* that decision or influencing that decision in a way that maybe we're not comfortable with.” This episode will prepare you to think about informed consent (“It's impossible to expect that people have actually even read, let alone *comprehended,* the terms of services that they are supposedly accepting,” says Lopatka), the role of corporate oversight, the need to educate users about risk, and the shared obligation involved in building responsible AI. One fascinating exchange centers on the topic of autonomy, about which Lopatka suggests a user might have mixed feelings. “Maybe I will object to one use [of personal data] but not another and subscribe to the value proposition that by allowing an organization to process my data in a particular way, there is an upside for me in terms of things like personalized services or efficiency gains for myself. But I may have a conscientious objection to [other] things.” To which Goodis reasonably asks: “I like your idea, but how do you implement that?” There is no final answer, obviously, but at one point, Goodis suggests a reasonable starting point: “Maybe it is a combination of consent versus ensuring organizations act in an ethical manner.” This is a conversation for everyone to hear. So listen, and join Goodis and Lopatka in this important dialogue. Host: Alison Kotin Engineer: Kyp Pilalas Producer: Ken Gordon