Pioneer Park Podcast
Bryan
8 episodes
1 day ago
Interviews, conversations, and commentary on the frontier of science and technology

gilbertgravis.substack.com
Technology
Episodes (8/8)
Pioneer Park Podcast
How to Spot a Fundable Founder: The Leadership Metrics That Matter | Ft. Logan Yonavjak

In this episode of Pioneer Park, Logan dives deep into what makes for true startup success — beyond pitch decks and funding rounds.

After years of experience as an angel investor, she created the Founder Readiness Institute to unearth the hidden factors of leadership capacity, resilience, and coachability — the human side of building a company. From balancing bold risk-taking with self-awareness to learning how to handle stress, feedback, and growth, we dive deep into the psychology behind impactful leadership. If you're an investor or entrepreneur trying to uncover the leadership signals that truly predict founder performance, this episode is for you.

Learn more about the Founder Readiness Institute: founderready.io 

Follow for more conversations on leadership, innovation, and founder growth.

#Entrepreneurship #Leadership #StartupFounders #FounderReadiness #Innovation

1 week ago
33 minutes 37 seconds

Pioneer Park Podcast
How SureSteps Empowers Field Teams with Unified Knowledge | Ft. Ziwen Deng

In this episode, we dive into the story behind SureSteps, a platform designed to help field teams work smarter, not harder. What started as a step-by-step troubleshooting tool has grown into a powerful system that makes it easier to capture, share, and transfer knowledge across industries.

Ziwen Deng joins us to talk about how SureSteps is transforming the way skilled workers access information in real time, making complex processes simpler and training the next generation of field professionals faster.

Whether you're in the trades, industrial work, or any hands-on profession, SureSteps shows how unified knowledge can reduce errors, boost efficiency, and empower teams on the ground.

🔗 Learn more: https://suresteps.io

📌 Don't forget to like, comment, and subscribe for more conversations on the future of work, technology, and innovation.

#FieldTeams #UnifiedKnowledge #SureSteps #FutureOfWork #Trades #IndustrialTech #knowledgemanagement

1 month ago
42 minutes 24 seconds

Pioneer Park Podcast
The limits of human-derived mathematics with Jesse Michael Han

Pioneer Park interviews Jesse Han, co-founder of Multi AI. Jesse discusses his background, experience at OpenAI, and his philosophy towards research. He draws inspiration from Alexander Grothendieck's philosophy of listening to the universe and arranging theories accordingly. He also talks about the differences between research and startup thinking, the potential for machines to inspire new algorithms, theories, and results in mathematics, and the use of language models and compute to reduce the risk of misaligned outputs. He believes that language models will become as cheap and accessible as microprocessors, and that the value will go to those who build the software and infrastructure to make them accessible to end users. He recommends that those looking to shift their career in the direction of deep learning and generative AI should work hard, find good mentors, and aim for something that will endure.

Transcript

Jesse Han

===

[00:00:00]

Bryan Davis: Welcome to Pioneer Park. My name is Bryan Davis and this is John. Today we're interviewing Jesse Han. Jesse Han is the co-founder of Multi AI, an AI startup based in San Francisco. He holds a PhD in mathematics from the University of Pittsburgh, and previously worked as a research scientist at OpenAI.

I know Jesse through working with Multi for a few weeks last year, when I was helping out with some of their product launches. And I was thrilled to invite Jesse to talk in a little more depth about his background, his experience at OpenAI, and Multi as it goes forward. Welcome, Jesse.

Jesse Han: Thanks for having me on the podcast, guys. Thrilled to be here.

Alexander Grothendieck

---

John McDonnell: Yeah. So Jesse, I wanted to start this off by asking: on your personal webpage you have a picture of you looking up very thoughtfully at this picture of Alexander Grothendieck. So I was curious what that picture means to you.

Jesse Han: What that picture means to me. So I just thought that picture of him was really funny, [00:01:00] because it's like this shrine. So for context, that hangs in Thang Long University in Vietnam, which was founded by one of the students that he mentored when he visited Vietnam during his career. And this was during a time that Vietnam was being bombed, and he wrote some very moving recollections about how he would teach at the university.

And then they would all have to dive into an air raid shelter, and they would come out and one of the mathematicians had been hit by the bombs. And that student became a very prominent mathematician in Vietnam, and he had such a lasting influence that they made this shrine in honor of him. There's this nine-foot-tall portrait of him there. And I just thought it would be funny to do it kind of like Adam and God.

But the other thing is that I take a lot of inspiration from his philosophy towards research. There's a saying that he was famous for, which I think is still relevant for people working in startups today or trying to run a company. So I'm paraphrasing, but the mark of a good researcher is someone who listens very carefully [00:02:00] to the voices of things.

They try to listen to what the universe is trying to tell them about the structure of the universe. And they arrange their understanding and their theories and what they're doing accordingly. And I think similarly when you are trying to build something, when you're trying to do something new, you have to listen to what the world is telling you.

You have to listen to what the market is telling you and build accordingly.

I hope that was philosophical enough for you.

John McDonnell: Yeah, I love it. There's this guy David Whyte, who's a poet, and he has this kind of concept that he likes to incorporate into poetry: that life should be a conversation between you and the world.

And that a really meaningful life or a great life is one where that conversation is really effective and goes both ways. And so that really reminds me of that.

Jesse Han: Yeah, totally, it's a reminder to be open to what the world is telling [00:03:00] you. And I think that's really important to remember as you go heads down and you try to make something happen in a startup. You have to be on the lookout for signals that maybe you should be doing something different.

Maybe you should be pressing something harder. It's a careful balance that you have to strike.

Research vs startup thinking

---

Bryan Davis: Do you find that the signals you're listening to or the incentives present are different in a research context versus startups? And if so, how so?

Jesse Han: To be honest, I don't really think they're that different. In research, especially if you're in a high-pressure environment or if you're working in a field that's moving really quickly, like AI, what research looks like is taking a bunch of bets, choosing how to allocate your resources, and figuring out what kinds of unfair advantages you have that might make you unusually capable of capitalizing on the outcomes of some of those bets.

And so I think that a lot of the thinking [00:04:00] around what kinds of bets one should take in their career applies equally well to startups, and similarly, thinking around what kinds of activities are useful for startups to think about applies equally well to research. So an example is pursuing very high-impact research bets.

Like you could spend a large majority of your career just pursuing incremental advances, which carry less risk and are more likely to be published, but which don't have an enduring legacy in terms of the research activity of others in the field. Or on the other hand, you can work on something that fundamentally changes the way that people think about some problem inside of the field. And that has a far more scalable... so I think a lot of the same thinking applies.

Jesse's path from research to startups

---

Bryan Davis: And how did you navigate with that perspective? You were previously a researcher, you were a PhD student, you recently finished your PhD, and you've obviously worked in technology prior to launching a startup. [00:05:00] But what was your own journey going from the research context into deciding to work in industry?

Did you ever aspire to be a professor?

Jesse Han: I did at some point. So at some point I was very deeply enmeshed in the pure mathematics world. I was trained as a logician for most of my undergraduate years. And then I spent my master's just studying mathematical logic and model theory. But I think that gradually shifted towards a more ambitious vision, which formed the basis for the research program which I pursued in my PhD.

Partially due to my realization that I probably didn't have what it was gonna take to become a top mathematics researcher. I simply didn't have the, let's say, intellectual horsepower. Because there are a lot of very talented people working in math, and it's a super small field.

So to really get up there, it's like being a star athlete. You have to train every day, you have to study the work of the masters.[00:06:00] You have to be in the right place at the right time with the right advisor, working on the exact right field, for me to make that kind of impact.

And towards the beginning of my PhD I came to the realization that the more impactful thing for me to do would be to try to just automate all of mathematics instead. And so I had this grand vision of eventually building some kind of planetary-scale system for automatically searching for mathematical theorem proofs.

So that one day human mathematicians would just be the operators of such a machine, whose details and intricacies would be hidden from them, like an operating system hides most of its complexities from the end user. And so that was what sort of drew me towards AI and got me into more industry-adjacent things, because building a system like that requires a lot of engineering skill and some pretty compute-heavy resources.

And that kind of brought me into the orbit of people trying to apply the [00:07:00] latest techniques in deep learning to automate theorem proving.

Automating mathematics

---

Bryan Davis: To anchor a little bit more in the math world, do you ever think we'll reach a point in mathematics, or perhaps are we already there, where we're at the limits of the capacity for human brains to comprehend? And do you think that there's a zone in mathematics, in pure math, where machines will begin to inspire, to be the chief creators of new algorithms, new theories, new results?

Jesse Han: Yeah, I think that's a really interesting question. I think the fields where computers have a large advantage are really concrete kinds of combinatorics. That's one thing that stands out. Subfields of discrete mathematics, places where computation is really the main way to see how phenomena occur. For example, if you are studying the dynamics of Conway's Game of Life, [00:08:00] or say you're studying cellular automata, then running computer simulations is probably the best way to gain a good understanding of what's going on with any of the phenomena happening there.
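
To make that concrete, here is a minimal sketch, in Python with NumPy, of the simulation-first approach Jesse describes: one update rule for Conway's Game of Life that you run forward and watch. The grid size, wraparound edges, and step count are arbitrary choices for illustration.

```python
import numpy as np

def life_step(grid: np.ndarray) -> np.ndarray:
    """One Game of Life update on a 2D 0/1 array with wraparound edges."""
    # Count the eight neighbors of every cell by summing shifted copies.
    neighbors = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
    # A cell is alive next step if it has exactly 3 neighbors,
    # or 2 neighbors and is currently alive.
    return ((neighbors == 3) | ((neighbors == 2) & (grid == 1))).astype(int)

rng = np.random.default_rng(0)
grid = rng.integers(0, 2, size=(32, 32))  # random initial configuration
for _ in range(100):                      # step the automaton and watch
    grid = life_step(grid)
```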

But on the other hand, if you're working in more abstract fields that require a large tower of definitions, say algebraic geometry, then the computer-based foundations get a bit more shaky, because there are many ways that you can represent various things. And there hasn't been a lot of work on shoring up commonly accepted foundations.

Does that answer the question?

Bryan Davis: Yeah, I think it does. I remember reading, or listening to, an interview with Richard Feynman several years ago where he was talking about understanding the universe as peeling layers off an onion, and his hypothesis was that there may never be an end to the layers.

We could just keep peeling and keep peeling, and eventually we might reach a boundary at which our capacity to [00:09:00] abstract, our capacity to represent what is actually beyond the next layer, is just somehow limited or contained by the limits of biologically based IQ or biologically based intelligence. I thought that was an interesting concept, and I'm curious whether you think the same thing might apply to mathematics.

Jesse Han: Oh, yeah. There are totally trivial cases, right? Like there are prime numbers that require more bits to represent than are representable inside of the human brain.

Like that would be like a really trivial example. But you can already see this happening with the social fabric of mathematics. So what happens now is that a professional mathematician will go really deep and it's like harder and harder to become a true polymath. Like someone who's achieved mastery of like many fields of mathematics.

And so what happens is that the social fabric of mathematics is made up of these experts who only see a [00:10:00] very narrow slice of the entire picture. For example, there's a vanishingly small number of people who have a complete understanding of the classification of finite groups. And that's simply one piece of mathematical lore, which has been written down in, maybe not low fidelity, but questionable fidelity, in a constellation of papers and preprints and surveys. But the true understanding of the proof, the thing that is communicated from a master to a student in mathematical practice, is very hard to grasp.

And it's only owned by a very small number of people today. So that's definitely happening, and there are more examples in other fields of mathematics as well. This kind of phenomenon, where it's becoming increasingly unclear what parts of mathematics stand on firm foundations and what parts don't, has spurred a lot of research activity over the past few years in formalizing mathematics in a computer-understandable way. This [00:11:00] research program was championed by Kevin Buzzard at Imperial College, where he drove a lot of people to organize a lot of mathematics in a computer-understandable format, in a theorem-proving language called Lean.

And he gave a very good talk at Microsoft Research, titled The End of Mathematics, where he talks about things like this: there are so many parts of math where the highest standard of proof is just social understanding between mathematicians. And when you think about it, these things are on shakier foundations than you might first believe.
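
As a toy illustration of what a "computer-understandable format" means here, the following is a short sketch in Lean (the language mentioned above): a machine-checked proof that addition on the natural numbers is commutative. The theorem name is ours, and the snippet assumes Lean 4's core Nat lemmas; the point is only that the proof is verified by the machine rather than by social consensus.

```lean
-- A machine-checked proof: the proof assistant, not a referee,
-- certifies every step.
theorem my_add_comm (m n : Nat) : m + n = n + m := by
  induction n with
  | zero => simp                                      -- m + 0 = 0 + m
  | succ n ih => rw [Nat.add_succ, ih, Nat.succ_add]  -- use the hypothesis
```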

John McDonnell: So where does that put us? You were saying that your hope was to build some kind of AI or automated system that could move the field forward, essentially. And I think you mentioned the foundations are firmer in a place like combinatorics. Do you feel like this is already having an impact? Or what are the milestones to having impact?

Jesse Han: Do you mean the milestones to having impact in terms of [00:12:00] fully automating a part of mathematics, or just in verifying the existing knowledge?

John McDonnell: Maybe your perspective on both of those. How far is your vision from being realized even in a small way and what would it take to get there?

Jesse Han: I don't think a system like this has really been constructed for any particular field of mathematics. Of course mathematics is vast and there are many talented people working in it, many of whom have been schooled in the ways of formal proof. And so a system like this might have been built, but as far as I know, nobody's built this automated proof search thing. So the thing that I would like to see automated is how mathematics research is conducted by a very senior researcher, right?

They have this deep, deep understanding of the field: what things are provable, what things should be proved, what kind of [00:13:00] research programs should be carried out. It's kind of like the building of a giant building, right? Like you say, oh, you can add an arch there if you use these tools from over here, and because you have five years of experience already, it should only take you three months.

And so that's like something which requires really intense focus, incredible amounts of persistence, superhuman willpower at times. And if computers were able to do that, and we

2 years ago
45 minutes 8 seconds

Pioneer Park Podcast
AI avatars and creator alignment, with Avi Fein

Avi Fein, founder of Meebo, discusses how AI can be used to extend people's capabilities rather than replace them. He explains the differences between Meebo and ChatGPT, and how YouTube's success is due to its product definition and monetization engine. He also talks about the importance of trusting individuals rather than brands when it comes to moderating the internet, and the road to monetization. A great and wide-reaching conversation.

Transcript

John McDonnell: Okay, so we have with us today, Avi Fein. Avi is the founder of Meebo, which is a platform for building personalized chatbots. Prior to that, he was a member at South Park Commons, and previously worked at Neeva and YouTube and Google. Avi, welcome to Pioneer Park.

Avi Fein: Thank you. Great to be here.

Bryan Davis: Yeah. Welcome. Good to see you. So we've been having some conversations prior to this, which I think at some point we all realized, oh, we should probably turn on the microphones just so we can begin to capture some of this. And I think we were just on the topic of talking about how to master chat and really [00:01:00] some of the challenges of chat.

So first, can you just tell us a little bit about Meebo?

Avi Fein: Sure. So Meebo is a platform where we build chatbots out of creators of various topics. We look for people who are usually experts in a certain thing and really have proactively shared their knowledge. And then on the other side, there's people who trust them and want to connect with them to get almost one-on-one advice, recommendations for questions that they may have.

So much of where we're going nowadays, I would say, is Instagram and TikTok and YouTube as the places that we want to get knowledge out of and we want to get information from. But those are still static and distant in many ways, where they're not relatable to you. They can't really answer, really connect with things that you're interested in, and we want to break down those barriers and really use chat as an interface to make it interactable, such that you can have a conversation and go into the depths of both you and how it connects to that person and their knowledge and their content as well.

Bryan Davis: Cool. So I guess something that's really top of mind for a lot of people right now is ChatGPT. Differentiate for us: tell us [00:02:00] how Meebo is different from just run-of-the-mill vanilla ChatGPT.

Avi Fein: Yeah. It's interesting cuz we started working on this before ChatGPT even came out, but I...

Very hipster of you.

Yeah. But I would say the foundational ideas and principles actually cut across even the post-ChatGPT world, in that what we wanted to do was break apart knowledge so it's not a monolith anymore. And if you look at what a lot of people experience with the web and the internet today through products like Google and now ChatGPT, it's relatively generic.

You get the same answer independent of who you are. If you do a Google search, if you do a ChatGPT Q&A, we're all gonna get the same thing back. And our belief is that it's a much more delightful, and not only that, but trustful experience when you can blow that up and go into the distribution of different perspectives and different niches of knowledge, where someone's gonna have a slightly different take, person A versus person B, on a whole slew of things.

And so for us it's: how do you take [00:03:00] some of the technology that ChatGPT is good at, but apply it to the diversity of human perspectives and knowledge? I think the second part that we build on, that's beyond ChatGPT, is playing with the idea of how do you use the technology to extend people versus replace them.

And a lot of what people talk about in AI now is these virtual assistants, which are just synths of humans, where it's, oh yeah, we've trained on a million of you and now this can do what all million of you can do. Like you should just use this one AI bot. And that's true for art.

It's true now for ChatGPT and knowledge: why would you talk to anyone else when ChatGPT knows the entire internet? And I think what goes unsaid in those things is that when you do that, you lose the integrity and the nuance of all those individual people and all the individual relationships and the trust even that you may have in that.

And it becomes, not to go back to the same [00:04:00] idea, but this monolith of just the average across everything. And what we wanna lean into is the individual. It is the personal, and it is the idea that we are all unique in our way, and how can AI extend us to give us superpowers versus just act as a replacement of us all?

John McDonnell: So when you talk about that uniqueness, in the first comment you were saying, oh, unlike ChatGPT, we wanna be really personalized. How are you able to achieve that?

Avi Fein: I think it starts with people. Like we said before of what Meebo is, our building block, our atomic unit, was an individual creator, someone on YouTube. And really it actually cuts across: the person is represented on YouTube, TikTok, Instagram, and even their website; that is their identity. And so we started with the identity as being the atomic unit and then built up from that. And the philosophy behind that was that you can capture their unique [00:05:00] perspective and their unique point of view, and then make that accessible and shareable with the world.

And by doing that, you can maintain this boundary so that it's no longer the aggregation of them plus 10 others who are like them, where you actually lose texture and you lose the nuances of their experience of the world. And you also then, from the other side as a user, know who you're talking with and can have a trusted relationship, versus having to take this leap of faith with ChatGPT that what it's saying to you is the authoritative truth of the internet. And you're like, we're in a post-truth world.

What is the truth of the internet?

Bryan Davis: Yeah. It brings to attention some of the interesting issues. A lot of the complaints about ChatGPT and related products have been that they hallucinate, that the things that they spout so confidently aren't facts. Which I think has been a warning sign for a lot of people.

But it is also true that the perspective of an individual creator is also not necessarily a fact. So I'm curious to hear your perspective on two angles. One is the ability to take a creator's perspective and actually [00:06:00] represent that faithfully: how do you ground your technology in the actual perspective of creators? And how do you feel about creators being obligated to be truthful?

For instance, fake news. What are the risks down the line of Meebo amplifying the voice of people whose voice you don't necessarily want to expand?

Avi Fein: Yeah.

I'll go in reverse order, because I think the first question is almost a harder question than the second one, at least for us. On the second one, and this connects with the idea of how you don't think of the world as being a generic monolith of information that we all trust: we're not trying to give you an opinion around what is fact and what is not fact in the world.

By virtue of talking with an individual, you are establishing that you trust them, or at least that they're the source of knowledge and they're the source of information, not us. And having worked in this space before, at least at Neeva, and seeing some of these [00:07:00] dynamics, one of the drawbacks of those types of products is that trust is inferred into the brand, such that I trust the first result on Google because Google said it's the first result.

And the actual sourcing of the individual things that go into it starts to fade away and not matter anymore. And then Google becomes responsible for moderating the internet. And then Twitter becomes responsible for moderating Twitter. And then Instagram becomes responsible for moderating those things, because trust flows up into the brand versus staying down with the individual.

And they all say, oh, we don't want to have this responsibility, but they design the products and they build the products that way, because they become the aggregation point and the center point. And for us I think it is about not meddling too much in those worlds and letting the individual points of view, the individual facts, still sit where they lay. We're not gonna strip something out of someone's chatbot just because we may disagree with it. Because you on the other side are an [00:08:00] adult, and we trust that you will be able to form your own point of view on whether you can have that trust with that person or not. And that's the complexity of life and I think the reality of it.

On the first part of how do you actually do a good job of this? That's like the long arc of technology and I don't want to claim that like after three or four months we've solved some massive problem and be like, ah, guys, like this is done.

Like, we've done it here. What I can say that we lean into, and we think gives us tailwinds to be able to tackle this, number one, is that we come from a background in search. And what that means is that we spend a lot more time and energy and effort on retrieval as an important problem: understanding what are the facts, or what are the opinions, or what are the things that this person has said, and how do we make sure we're relying on that.

And what that does is it gives you a boundary in terms of the AI: when you are generating a response, [00:09:00] you can know a priori how far you may be deviating from what that person has said before. So imagine you have GPT-3, where you give it a prompt, and when you're assembling that prompt, you could say this person has never said anything about this topic before. Or they've said something: 20% about this topic, 80% about this topic, 70% about this topic.

You can at least now have thresholds to say they have not talked enough about this to really give a response, and just say, I'm sorry, I can't answer your question. That's a very simplistic approach to it, I think. But in concept, once you have a source domain that has boundaries on it, you can apply some of these techniques to not let the model hallucinate, or at least know when it's hallucinating more.
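
A minimal sketch of the coverage-threshold idea described above, assuming a retrieval step has already scored the creator's passages against the query. The names, the scoring, and the threshold values are illustrative assumptions, not Meebo's actual API.

```python
from dataclasses import dataclass

@dataclass
class Passage:
    text: str
    score: float  # assumed precomputed query-passage similarity in [0, 1]

def complete(prompt: str) -> str:
    # Stand-in for a real LLM completion call (e.g. a GPT-3-style API).
    raise NotImplementedError

def answer_or_refuse(query: str, passages: list[Passage],
                     min_score: float = 0.35, min_hits: int = 2) -> str:
    """Gate generation on how well the creator's own material covers the query."""
    relevant = [p for p in passages if p.score >= min_score]
    if len(relevant) < min_hits:
        # The creator has "never said anything about this topic":
        # refusing is safer than letting the model improvise.
        return "I'm sorry, I can't answer that from this creator's content."
    context = "\n".join(p.text for p in relevant)
    return complete(f"Answer using only this creator's statements:\n"
                    f"{context}\n\nQ: {query}\nA:")
```

The design choice is simply that refusal is cheap and hallucination is expensive, so the gate errs toward refusing.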

John McDonnell: I do wanna double-click on something here around recommendations, because you worked previously at Google, YouTube, and Neeva. These platforms do end up becoming editorial, and I've actually always found that YouTube seems to have the [00:10:00] most wholesome recommendations relative to other platforms.

I find that like when I get recommended things on YouTube, they're often educational or interesting and relevant to my interests, but not in a perverse way. Whereas, say TikTok is clearly just trying to addict me to their platform which is fun, but doesn't necessarily feel as wholesome.

Why is that?

Bryan Davis: Or perhaps how could that differentiated experience be created?

Avi Fein: Yeah, no, I love this question, and I love YouTube, and I feel the same way. I think that's definitely a very true observation with insight into it. If I had to speculate on the potential reasons for it, I would guess it relates to both the product definition of long-form video itself.

And then also monetization, and how those relate to this manifestation of it. One is that I think more nutritious, educational content is hard to make bite-size. And it is better in a long-form format. And I would also guess, intuitively, that the people who want that type of [00:11:00] content don't want it to be overly reductive, this hot-take TikTok type of thing. No, they're interested in going deep and actually learning about this thing, which is not well suited to those short-form, bite-size types of platforms.

And so I think there's just a natural product definition that causes more of those things to flow into a YouTube; it's a better fit for both the audience who want to engage with it and for the content itself in many ways. I think the second reason, monetization, is that outside of YouTube it's actually very hard to make a living generating content on TikTok or Instagram. They're just not the same type of monetization engine for creators as YouTube is. That's largely related to both: YouTube's an amazing monetization engine, they can make billions and billions of dollars of advertising, but they also share all of that with these creators.

And if you take the power of Google's advertising machine and then you [00:12:00] give 50% of that, not exactly, but roughly 50% of that, out to creators, you're sharing a lot of wealth, and they share more wealth with creators than any other platform by far. What that also means is that some of these things which are less popular and less mainstream, like educational, informational things, can survive and make a living on YouTube where they really can't on TikTok or Instagram. If you are an info-entertainer, a nutritious content creator, you really aren't gonna make a living on TikTok or Instagram.

You'll probably do it for a few months and then realize how hard it is, how much it's just like running up Mount Everest effectively, and probably burn out and fade away. Whereas on YouTube, you can find your niche and you can find that audience because of the platform, and then be able to make money that comes back to you to reinvest, and actually have a content business that comes out of it.

Good.

Bryan Davis: What allows YouTube to be that platform in a way that Instagram is not? Both of them have enormously high user counts. I would imagine that the number of crevices and interests that one can [00:13:00] fall into are just as deep on all of these platforms. What do you think makes YouTube distinct?

Avi Fein: In terms of its monetization, you mean?

Bryan Davis: Or, I guess, its ability. Yeah. It sounds like one of the biggest differentiators is the ability to make

2 years ago
47 minutes 31 seconds

Pioneer Park Podcast
Getting kicked out of the SJSU Food Court with Peter and Chris

Peter Lowe and Chris Hockenbrocht discuss their startup Fresh Bot, a food automation platform that uses robotics and machine learning to reduce labor costs and make food more affordable. They discuss the importance of "jedi mind tricks" when launching a business, the trend of unhealthy food in America, the potential of automation in the food service industry, the challenges of automation, the difficulty of hardware startups in Silicon Valley, the potential of automated delivery, the idea of a burrito cannon, the technical risks of building a restaurant automation platform, the importance of owning the experience, their own diets, the idea of eating what our ancestors ate, the Amish and their cautious approach to new technology, the limitations of reductionism when it comes to food and nutrition, and their shared values and goals.

-

Chris and Peter

===

[00:00:00] hi, I'm Bryan and I'm John. And we are hosting the Pioneer Park Podcast where we bring you in-depth conversations with some of the most innovative and forward-thinking creators, technologists, and intellectuals. We're here to share our passion for exploring the cutting edge of creativity and technology. And we're excited to bring you along on the journey. Tune in for thought-provoking conversations with some of the brightest minds of Silicon Valley and beyond.

John: Welcome to Pioneer Park. Today we're shooting live from South Park Commons. Our guest today are Peter Lowe and Chris Hockenbrocht. And Peter is an expert in hardware and product. Chris is an expert in machine learning and cryptography.

They're both members here at South Park Commons, and they're building a new startup called Fresh Bot. Peter and Chris, welcome to the show.

Chris: Hey, thanks John.

Peter: Welcome. Thank you. Glad to have you. How are y'all doing today?

Bryan: Good with a little bit of setup for our first live feed. You know, both of you were here for some of that, so we're working out the kinks of getting [00:01:00] on microphones and getting videos set up.

So, uh, you know, first time's a charm or maybe the third time's a charm. We'll find out.

Chris: Yeah.

John: Yep. All right. So we're super excited about the work you guys are doing and it entails both robotics and food. So, do you wanna tell us a little bit about what you're working on?

Chris: Yeah, one of the things that we really see as a trend is food costs rising.

And so one of the questions is how can you even reduce that? And the way we see tackling that is through an automated front end in food service. So I wouldn't call it robotics, but a lot of different automation techniques that can be applied to different sorts of food preparations that hopefully can reduce the cost of labor going into the food.

And hey if we can solve that, then we can start to think about bringing down food prices.

Bryan: Interesting. Yeah, I just read recently that there's a suspicion that there's some collusion in the egg industry that is causing the massive rise of egg prices that we've experienced the past couple years, [00:02:00] but obviously that's further up the production pipeline than what y'all are doing.

So concretely, what is Fresh Bot?

Chris: Right now we're looking into a variety of different products for food automation. We have an MVP on smoothie automation and other drinks. There's a lot of different components that we put into a machine, and it allows us to dispense liquids and solids and do blending.

And so we could conceivably put in a lot of different things. One of the things I really like about this is that it's customizable. So you take an individual machine, we can stock it with different things, and we can tailor it to the particular market. We can do liquid, solid, and powder dispensing, and we can recombine these into any sort of drink that you might want.
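
As a toy sketch of that per-market customization: one machine design, stocked differently per venue. All ingredient names, amounts, and the dispense interface below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Slot:
    ingredient: str
    kind: str      # "liquid", "solid", or "powder"
    grams: float   # amount dispensed per serving

# The same machine, stocked two different ways for two markets.
campus_stock = [Slot("strawberry", "solid", 80),
                Slot("oat milk", "liquid", 200),
                Slot("protein", "powder", 25)]
mall_stock = [Slot("mango", "solid", 90),
              Slot("coconut water", "liquid", 220)]

def dispense(recipe: list[Slot]) -> None:
    """Dispense each slot's amount, then blend; stands in for hardware calls."""
    for slot in recipe:
        print(f"dispensing {slot.grams} g of {slot.ingredient} ({slot.kind})")
    print("blending")

dispense(campus_stock)
```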

Peter: Yeah, one reason starting with this kind of drinks platform, and starting with smoothies, which are one of the hardest drinks to make, is interesting: for reference, at Starbucks and Dutch Bros, about 75% of their drink sales are cold beverages at this point.

They're cold brew coffee, [00:03:00] frappuccinos, you know, juice drinks and stuff. So all of that is gonna be very easy to automate with the platform that we're making. Just for a little bit of market orientation there. Gotcha.

Bryan: And I recall, so I think several of us have had the pleasure of being part of some test exercises with Fresh Bot, and it wasn't exactly Fresh Bot, but it was Peter testing your smoothie recipes here at South Park Commons. And at the time, I believe you just sort of brought in raw ingredients and you were just mixing on the spot and sort of having a few different offerings.

And I guess that was just sort of a menu tasting, a menu testing. Is that right?

Peter: Yeah, yeah. I mean I think this kind of comes from having gone to the Stanford d.school and taking on this product mindset, which has been a sort of useful mindset and toolset, because it's very difficult: hardware is so complicated and so difficult to make that your engineering instinct is that you want to start building something immediately. But that's not necessarily the fastest way to get the answer to the questions that you have about a startup, addressing whatever your key risks are.

And a lot of the prototyping that we've done hasn't actually necessarily involved a soldering [00:04:00] iron at first blush. Right. You know, one of the key questions was: are people interested in food in the venues that we're interested in? Do they want food? Which items resonate more with people? Did they want the sugary thing or the healthier thing? Getting some of this broad, thick data from users about how they think about food, what they like.

Bryan: I love that you've shared with me, over the past month or so, some of the stories from the front lines of your testing. I think some of them are really fascinating. How many places have y'all been kicked out of so far?

Chris: Well, I mean, as far as I recall there's been two. We went to a mall, it was a security guard. He came up and said, you just can't be doing this here. Right.

Bryan: I guess we should, uh, we should give people the setup. Mm-hmm. So what are you doing when you go to test these on site?

Chris: So yeah, the machine was taken to a mall. It wasn't actually a fully functioning prototype. What we were trying to do is gauge interaction: would people simply walk up to the machine, interact, attempt an order?

Peter: Mm-hmm. And this was sort of not a machine, it was really sort of a fridge with a sticker on it. Yeah. It looked like pre-engineering. Yes.[00:05:00]

Chris: Yeah. And security wasn't very happy about that. But you know, the only regret I think we have is not walking out in handcuffs; hey, it would've made for a great PR stunt there. The other time, we were more recently at San Jose State University. We went right into their food court and we successfully got about two hours of sales done. Students were coming up, people were enjoying it, and then over time, one person would come up, they would go talk to their manager, go talk to this person.

And eventually the building manager came, who was in charge of the whole food court. And he said, you just can't be here doing this. Like, you know, essentially people pay to come in; the restaurants that are there are paying, you can't just come in. And Peter here was doing a really great job of deflecting them. It's really great: if you change somebody's focus, they start thinking about things in a whole different light. If they're like, what are you doing here? Well, we're making healthy smoothies for people, and then, you know, we really [00:06:00] care about people's health. You end up in this place where now they're pitting two goods against each other.

Either I'm doing my job or I'm supporting healthy smoothies. It's this cognitive dissonance that they have to resolve. And so it wasn't until we got to a really serious manager, who just came and told us that we had to leave.

Peter: Yeah. Just to be clear too, I have the food safety handler ServSafe certification. We're not breaking any food safety rules with any of this stuff. We do take health and proper process seriously. So yeah, we just can't pay the rent, right. Early testing phase.

John: Jedi mind tricks are crucial to launching this kind of business.

Peter: Yeah. I mean, I suppose really any startup. I guess there's a good reason why, you know, YC asks, essentially, what's the biggest sort of non-code hack you've ever pulled off. Right? So there's a lot of hacks necessary sometimes. Yeah. Absolutely.

Bryan: Yeah. So I'm curious to connect this back to the larger theme of health and access to [00:07:00] healthy food in America, and whether or not your efforts in this area are based in some sort of critique or analysis of what's happening in that space.

Chris: Well, there's certainly a long-running trend of food towards less healthy things, and there's probably a few different components playing into this. One is just taste preference, right? Less healthy food tastes better. People like sugar. Sugar, when it sits on the tongue, it is just, hmm, that's good.

And it's hard to avoid. And so the products that you end up seeing at the supermarket, CPG that is, or the products that you're getting from any sort of restaurant, might be laced with additional sugars or additional fats, things that just really make it taste good. And so it's hard to satisfy the desire for healthy and balance that with taste.

Another factor is this industrial farming situation, where we have a bunch of subsidies that go towards different sorts of crops, and with them being subsidized now and being produced en masse, [00:08:00] well, why don't we just shove them into everything? Soy is a really key component. Corn is a really key component, and corn doesn't just go into food, with high-fructose corn syrup.

There's a lot of non-food uses of corn that are really subsidized; ethanol production is a big thing that goes into gasoline. And so I would say there's a little bit of an incentives misalignment for actual food to be healthy. And the question I think still remains: will people really accept healthy food en masse?

Which we're trying to get to. One of the things that we're working on is perfecting recipes that actually taste good, which still have a good balance of health. Just not putting additional sugars into things. You know, no high-fructose corn syrup, no other weird additives, preservatives.

We want to focus on just really presenting the base foods in the healthiest manner that we can.

Peter: Yeah. One [00:09:00] thing that's interesting to think about is, you know, Chris mentioning that our palates are naturally warped to want salt, fat, and sugar ratios that are not healthy for us. And there's the perverse incentive, because those are very cheap ingredients, for third parties that are preparing our food to just load up on that. Cuz that's a cheap way to bulk up the product, and it tastes good. But we also know that it's not healthy, which is why we have this kind of mental mindset that we go home to eat something healthy, right?

You make something that has the amounts of that that would be good for you. Maybe it doesn't taste quite as rich, but that is what we know that we're supposed to do, generally, at least, depending on what group you're part of. There's a gradually growing awareness of that. I believe it was the National Institutes of Health that recently published a report saying basically we need to eat more fruits and vegetables; that is essentially their largest prescription to move the standard American diet in a healthier direction and reduce the amount of diabetes and obesity and heart disease.

But one thing that's interesting, that we've thought about relative to this a bit, is that working with lower-quality ingredients is a way that you can save cost. And having fewer operations happen in a restaurant, particularly something like a large fast [00:10:00] food chain, which is kind of relying on the very lowest-paid tier of labor, right, is also a way you save cost, but with doing less of the preparation fresh, because you honestly can't run a chain that has tens of thousands of locations with people who aren't paid enough to care.

Right? You need to take the control away from them, move things upstream. Things are now less fresh. Maybe you're working with lower quality ingredients to save costs too. And so the food doesn't taste good naturally, right? And so it needs to have all kinds of alchemy applied to it. Particularly adding a lot of salt, fat and sugar to make it taste palatable.

Which is kind of how we got the fast food situation that we have now. But one thing that is sort of interesting about how maybe technology could be applied here to fix this is: if you can engineer preparation techniques using automation that will prepare the food the right way every time, you can trust them.

I mean, they will work as well as they're engineered to. You can then push more of the food preparation out to the point of consumption. Things are now being prepared more fresh; they taste better. And because of the very significant cost savings this can offer on labor and other parts of the overhead costs, [00:11:00] right?

You can still be extremely profitable but use a higher grade of ingredients, so that it tastes better. And so that can be a fairly large lever just to essentially make good food taste better. I was recently running a test in central Pennsylvania at, essentially, a cafe, making different smoothie recipes and things for the owner and trying them out on customers.

And, you know, the owner just couldn't believe that these things were not made with added sugar. And so he'd asked for the recipes, cuz he essentially didn't believe it could be done: that you could make things that taste this good that don't have the fillers and junk that a number of things are reliant upon nowadays.

Yeah.

John: Yeah. Just playing it back: so what you're saying is, if you use low-quality ingredients and you have low-skill staff, you essentially make up for those things by filling your food with a bunch of salt, sugar, and oil. And it doesn't have to be that way. Yes. Okay.

Chris: Yes.

Bryan: So I guess one of the things that's relatively alive for me ri

2 years ago
48 minutes 19 seconds

Pioneer Park Podcast
Unstructured play and personal tutors with Cinjon Resnick
2 years ago
53 minutes 16 seconds

Pioneer Park Podcast
The frontiers of clinical AI with Vivek Natarajan
2 years ago
47 minutes 34 seconds

Pioneer Park Podcast
Alignment, risks, and ethics in AI communities with Sonia Joseph

Check out our interview with Sonia Joseph, a member of South Park Commons and researcher at Mila, Quebec's preeminent AI research community.

Topics:

- India's Joan of Arc, Rani of Jhansi [[wiki](https://en.wikipedia.org/wiki/Rani_of...)]
- Toxic Culture in AI
- The Bay Area cultural bubble
- Why Montreal is a great place for AI research
- Why we need more AI research institutes
- How doomerism and ethics come into conflict
- The use and abuse of rationality
- Neural foundations of ML

Links:

Mila: https://mila.quebec/en/
Follow Sonia on Twitter here: https://twitter.com/soniajoseph_
Follow your hosts:
John: https://twitter.com/johnvmcdonnell
Bryan: https://twitter.com/GilbertGravis
And read their work:

Interview Transcript

Hi, I'm Bryan and I'm John. And we are hosting the Pioneer Park Podcast where we bring you in-depth conversations with some of the most innovative and forward-thinking creators, technologists, and intellectuals. We're here to share our passion for exploring the cutting edge of creativity and technology. And we're excited to bring you along on the journey. Tune in for thought-provoking conversations with some of the brightest minds of Silicon Valley and beyond.

John: Okay, so today I'm super excited to invite Sonia onto the podcast. Sonia is an AI researcher at the Mila Quebec AI Institute and co-founder of Alexandria, a frontier tech publishing house. She's also a member of South Park Commons, where she co-chaired a forum on AGI, which just wrapped up in December.

We're looking forward to the public release of the curriculum later this year. So keep an eye out for that. Sonia, welcome to the podcast.

Sonia: Hi John. Thanks so much for having me. [00:01:00] It's a pleasure to be here.

Bryan: Yeah, welcome.

Sonia: Hi, Bryan.

Bryan: Yeah, so I guess for full transparency, John and I were both attendees of this AGI forum.

And I was waiting for each week's session with bated breath. I thought that the discussions in the forum were super interesting. There was a bunch of really prominent, interesting guests that we had come through. And yeah, it was really interesting: some intersection of practical questions with sci-fi, and a lot of things that used to be sci-fi that are getting far more practical than perhaps we ever anticipated.

John: All right. So Sonia, I feel like the question that's on everyone's mind is: who is Rani of Jhansi?

Sonia: Oh my gosh. Yeah. Yeah. So basically like I grew up on a lot of like Indian literature and Indian myth.

And she's considered to be India's Joan of Arc. So a female leader who has a place in feminist scholarship, if you look at the literature. And I [00:02:00] believe she led a rebellion of India against the British. I actually wanna fact-check that.

John: Yeah, no, that's really cool. We loved the recent kind of blog post that you worked on with S, and you pointed out how these kinds of influences really enabled you to succeed at your current endeavors.

So we're just curious about maybe how your background made you who you are.

Sonia: Yeah. Yeah. No, I appreciate that question a lot. I would say I had a kind of culturally schizophrenic background in some ways, where I spent a lot of time when I was a child in India, but then the other half of my life was in Massachusetts.

Which was very much a lot of Protestantism and growing up on a lot of American history. I saw things in a collision of various cultures and religions, and that has very much impacted my entry into AI and how I'm conceiving of AI.

John: Yeah. Something that we loved about the AGI forum is that you have this [00:03:00] kind of really critical eye towards the culture of the way that AI is practiced and the way that research is going forward.

And I think you really brought this kind of unique perspective that was super valuable.

Bryan: Yeah, I'm curious: are there any points at which you think there are current problems, either in the way that research is being done or in, I guess, the moral framework in which that research is being done?

Sonia: It's a really interesting question. I would say the AI world is very big, first of all, so it's hard to critique the entire thing. But parts of it have some of the problems that physics had in the 1990s, or still has, in being male-dominated or focused on certain cultures.

And the culture will generate a certain type of research. So your scientific conclusions and the community or culture you're in have this reciprocal relationship. For example, in the 1990s, there's this amazing book called The Trouble with Physics, by Lee [00:04:00] Smolin, that goes into sort of the anthropology of the physics community.

In the 1990s, the physics community was deeply obsessed with string theory. If you weren't working on string theory, you just weren't cool at all, and you probably weren't gonna get tenure track. The book goes into how string theory wasn't empirically proven. It was mathematically, internally consistent, but it was by no means a theory of everything.

And how the monoculture of physics and the intellectual conclusion of string theory would feed off each other in this cycle. Lee Smolin basically created his own institute to deal with this problem, cuz he got just very frustrated.

I don't think AI is quite so bad. But there are pockets of AI where I do notice similar dynamics. In particular, the parts of AI that were previously more influenced by effective altruism and LessWrong, the AI safety and alignment camp. I don't think these fields have as bad a problem anymore.

There have been recent attempts, [00:05:00] called reform AI alignment; Scott Aaronson had a very great blog post on how AI safety is being reformed. There's an attempt to make AI safety a legitimate science that's empirically grounded and has mathematical theory. But I did notice that more classical AI safety definitely had these 1990s-style string theory problems, both in the science being not empirically verified but dogmatic, and also in the community that was generating it not being fairly healthy. And I guess with the caveat, I'll say I have been either adjacent to or in these communities since I was basically 12.

So I have seen a very long history. And I also don't mean to unilaterally critique these communities. I think they have done a lot of good work and given a lot of contributions to the field, both in terms of frameworks, talent, funding, but I am looking at these communities with a critical eye as we move forward.

Cause it's like: what is coming, both as a scientific paradigm and as the research community that generates that paradigm?

Bryan: I'm curious. To me there seem like kind of two issues; I don't know if they're orthogonal. There's the scientific integrity of a community and the ability for that community to [00:07:00] generate and falsify hypotheses, and there's the culture of that community and whether or not that culture is a healthy culture to be in, whether it's a nice place to work and all that sort of stuff. And I guess my hypothesis is that none of us wanna work in a shitty culture, and none of us wanna be part of communities where insults or abusive behavior are tolerated at all.

But I think that a lot of scientific communities can be interpreted as quite dogmatic, because there's an insistence on a specific sort of intellectual lens that you need to adopt to participate in the discussion. And for me, it always seems like there's a balance there.

Because for instance, if you wanna be a biologist, you'd better accept evolution; you have to meet that criterion. And I'm curious, do you think there is some sort of almost intellectual kowtowing, or basically a tip of the hat, that one needs to do when you're studying artificial intelligence to make it into the room, to be taken seriously?

Sonia: That's a great question. Yeah, and evolution is an interesting example, cause that's one that has been empirically [00:08:00] verified in various places, and maybe the exact structure of evolution is open to debate; we don't know if it's more gradual or happens in leaps and bursts.

But the example in some AI communities is of accepting that oncoming AI is gonna be bad, or a more apocalyptic culture. And this is prevalent in a lot of AI safety communities, where in order to get your research taken seriously, or to even be viewed as an ethical person, it becomes about character: you have to view AI as inevitable, it's coming fast, and it's more likely than not to be incredibly disastrous. And to be clear, I think we should be thinking about the safety behind incoming technologies. That's obvious and good.

If AI ends the world, that would be terrible. And even if there's a very small chance that could happen, we should make sure it doesn't happen. But I do think that some of these communities overweight that and make it almost part of the dogma, when it's not empirically proven that this is gonna happen.

We have no evidence this is going to happen. It's an a priori argument [00:09:00] that's actually mimicking a lot of doomsday cults and also death cults that have been seen throughout history. And it's absolutely fascinating, though much less so now than it was before.

A lot of AI safety has possibly become modern alignment, or is practiced in more professional spheres, where I think views are a lot more nuanced and balanced. But there is still a shadow of Bostrom and Yudkowsky and these original thinkers who were influential, even more influential like 10 to 15 years ago.

John: Sonia sometimes when I talk to people who are really into the alignment problem there's a kind of view that like the philosophical argument that is made is just like very strong.

And so They just, people just view it as actually just like a very strong argument that this, these systems are very dangerous. If you think about when I think about Holden Karnofsky's pasta, like I, I of imagine okay, I think if the system was that powerful, it seems like it would be dangerous.

I don't know exactly how likely I think that exact version of it is to be [00:10:00] created. When you think about the content of that alignment argument, do you think the argument is strong, or do you feel like it's actually overrated?

I guess what's your view on that?

Sonia: Yeah, remind me... so my memory of PASTA is that there's some math and AI that starts executing on experiments, or using the results of the experiments as feedback.

John: That's right. Yeah.

Sonia: Yeah. Yeah. This is fascinating. I love PASTA.

I think it's absolutely fascinating as a thought experiment. My pushback here would be that all of these scenarios strike me as being slow takeoff, as opposed to someone develops an agent, like a single lab develops an agent, and the agent starts recursively self-improving and it takes over the world, which is often the classic scenario presented.

John: Yeah.

Sonia: The reason this doesn't make sense to me is that there are so many limitations in the physical world. For example, just the speed of molecules in biology: we're gonna be limited by that. The speed of a robot [00:11:00] traveling across the country: we're going to be limited by that.

There's one argument that computers think so fast that they're going to be able to outthink us. I think this is true, but ultimately, for the computer to interface with the physical world, it is going to be dealing with the slowness of the physical world.

And that is not something the computer can artificially speed up. There are also various other constraints, like the government has a lot of red tape and bureaucracy. In order to actually run any study you have to go through a certain approval process. Maybe the AI figures out how to bypass that.

That's possible. Maybe the AI has a physical army and it doesn't care; that's also possible. But I do think that the real world has enough red tape and constraints where we're not gonna wake up one day and see drones everywhere and some AI has taken over. I think it'll be slower and more subtle than that.

This is also not to say not to worry; having some sort of superhuman scientist that gets out of our control sounds objectively bad, but I don't actually think pas

2 years ago
49 minutes 44 seconds
