Artificially Unintelligent
52 episodes
6 days ago
Eavesdrop on chats between Nicolay, William, and their savvy friends about the latest in AI: new architectures, developments, and tools. It's like chilling with your techie friends at the bar, downing a few beers.
Technology
Episodes (20/52)
E49 Arxiv Dives: Retrieval Augmented Generation for Knowledge-Intensive NLP Tasks

In this episode of the Artificially Unintelligent podcast, hosts William and Nicolay dig into the paper "Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks." They discuss the significance of RAG (Retrieval-Augmented Generation) in enhancing large language models (LLMs), highlighting its innovative approach to combining linguistic and factual knowledge. The episode covers the shift from parametric to non-parametric memory in AI, examining the challenges and potential solutions for separating these components. They also explore the paper's practical implications, particularly in the context of Meta's research direction and the broader NLP field. Key takeaways include the importance of RAG in reducing hallucinations in LLMs and the future direction of AI research, especially in multimodality. Tune in for an insightful deep dive into the evolving world of AI and NLP.
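
To make the paper's core idea concrete, here is a minimal sketch of retrieval augmented generation: a non-parametric memory (a toy TF-IDF retriever over three sentences) supplies evidence that the parametric generator is conditioned on. The corpus, query, and prompt format are our own illustrative assumptions, not the paper's code.

```python
# Minimal sketch of the RAG idea: non-parametric retrieval feeding a generator.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "The Eiffel Tower is located in Paris, France.",
    "RAG combines a retriever with a seq2seq generator.",
    "Whisper is a speech recognition model from OpenAI.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(corpus)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k passages most similar to the query (the non-parametric memory)."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_vectors)[0]
    top = scores.argsort()[::-1][:k]
    return [corpus[i] for i in top]

def build_prompt(query: str) -> str:
    """Condition the generator on retrieved evidence to reduce hallucination."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("Where is the Eiffel Tower?"))
```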

Do you still want to hear more from us? Follow us on the Socials:

  • Nicolay: LinkedIn | X (formerly known as Twitter)
  • William: LinkedIn
1 year ago
20 minutes 26 seconds

E48 Making LLM Training Easier with Hugging Face's TRL

In this episode of the Artificially Unintelligent Podcast, hosts William and Nicolay delve into Hugging Face's Transformer Reinforcement Learning (TRL) library, discussing its impact and potential in AI training. They begin by explaining what Hugging Face is and its significance in the AI community, emphasizing its repository of models and datasets, particularly for large language models. The conversation then shifts to the TRL library, highlighting its ease of use and its integration with the rest of the Hugging Face ecosystem as well as frameworks such as PyTorch and TensorFlow. They explore the simplicity of the TRL trainer classes, which make complex training processes more accessible, especially for reinforcement learning from human feedback. The hosts also discuss TRL's applications in natural language processing and ponder its potential in other modalities like images and audio. Wrapping up, they reflect on Hugging Face's business model and its contribution to the open-source AI community. This episode offers valuable insights into the TRL library, making it a must-listen for AI enthusiasts and professionals interested in efficient and effective AI model training.
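
Here is a minimal supervised fine-tuning sketch with TRL's SFTTrainer, to show how little code the trainer classes require. Argument names follow the TRL releases current around recording time; newer versions moved some of them into SFTConfig, so treat this as a sketch rather than pinned-version code.

```python
from datasets import load_dataset
from trl import SFTTrainer

# A tiny slice of a public dataset, just to demonstrate the API surface.
dataset = load_dataset("imdb", split="train[:1%]")

trainer = SFTTrainer(
    model="facebook/opt-350m",     # any causal LM from the Hugging Face Hub
    train_dataset=dataset,
    dataset_text_field="text",     # column holding the raw training text
    max_seq_length=512,
)
trainer.train()
```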

Do you still want to hear more from us? Follow us on the Socials:

  • Nicolay: LinkedIn | X (formerly known as Twitter)
  • William: LinkedIn
1 year ago
21 minutes 48 seconds

E47 A Retrospective on OpenAI's Turbulent Weekend: Leadership, Breakthroughs, and Speculations

In this episode of Artificially Unintelligent, hosts William and Nicolay dive into the dramatic events unfolding at OpenAI. They discuss the whirlwind of changes in leadership, starting with the unexpected firing of Sam Altman and the subsequent shakeup within the company's board of directors. The conversation touches on the speculated reasons behind these moves, including a reported AI breakthrough known as Q* (Q-Star) and its safety implications. They explore the nuances of this development, discussing how it might enhance AI's mathematical capabilities and the potential risks involved. The hosts also speculate on how Microsoft, as a major stakeholder in OpenAI, might benefit from these changes. The episode provides an in-depth analysis of the recent turmoil at OpenAI, offering insights into the complex dynamics of leading AI companies and the implications these changes could have on the future of AI development.

Do you still want to hear more from us? Follow us on the Socials:

  • Nicolay: LinkedIn | X (formerly known as Twitter)
  • William: LinkedIn
1 year ago
21 minutes 10 seconds

E46 Unpacking FastAPI: Simplifying API Development in Python

In this episode of the Artificially Unintelligent Podcast, William and Nicolay explore the world of FastAPI, a dynamic web framework designed for building APIs with Python. They delve into the essentials of what an API is and discuss FastAPI's rise as a popular tool for backend development, especially for machine learning model deployment. FastAPI's key features, such as its Pythonic nature, ease of use, and automatic documentation generation, are highlighted, along with its integration with other Python libraries like Pydantic and Celery. The hosts also discuss the security measures FastAPI offers, like API key authentication and OAuth, providing insights into how FastAPI ensures secure data transmission. They wrap up by sharing personal experiences of deploying models and databases using FastAPI, making this episode a valuable resource for both beginners and seasoned developers interested in modern web framework capabilities.
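
To make this concrete, here is a minimal FastAPI service in the spirit of the episode: Pydantic validates the request body, FastAPI auto-generates interactive docs at /docs, and the toy "model" is a placeholder for whatever you actually deploy.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class PredictRequest(BaseModel):
    text: str

class PredictResponse(BaseModel):
    label: str
    score: float

@app.post("/predict", response_model=PredictResponse)
def predict(req: PredictRequest) -> PredictResponse:
    # Stand-in for a real model call.
    score = min(len(req.text) / 100, 1.0)
    return PredictResponse(label="positive" if score > 0.5 else "negative", score=score)

# Run with: uvicorn main:app --reload
```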

Do you still want to hear more from us? Follow us on the Socials:

  • Nicolay: LinkedIn | X (formerly known as Twitter)
  • William: LinkedIn
1 year ago
19 minutes 20 seconds

E45 GraphCast - How Google DeepMind is Changing Weather Forecasting

In this episode of the Artificially Unintelligent Podcast, William and Nicolay discuss the groundbreaking work of Google DeepMind on weather forecasting through their recent publication of the GraphCast paper. The paper, published in the esteemed journal Science, showcases an advanced weather prediction tool capable of forecasting with remarkable accuracy up to 10 days in advance. Our hosts delve into the technicalities of GraphCast, exploring its autoregressive modeling, innovative mesh network construction, and the use of JAX for efficient processing. They also ponder the potential societal and economic implications of such accurate long-term weather predictions, touching upon its impact on industries like agriculture, shipping, and logistics. The episode provides a thorough understanding of the complexities and advancements in weather forecasting AI, making it a must-listen for anyone interested in the intersection of technology and natural sciences.
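
The autoregressive rollout is easy to picture in code. The toy below is our own illustration, not DeepMind's implementation (GraphCast itself is a JAX graph neural network over an icosahedral mesh): the model maps the current state to the state six hours ahead, and a 10-day forecast is 40 applications of that same step.

```python
import numpy as np

def step(state: np.ndarray) -> np.ndarray:
    """Stand-in for the learned 6-hour transition model."""
    return 0.99 * state + 0.01 * np.roll(state, 1)

state = np.random.rand(64)   # toy "weather state"
trajectory = [state]
for _ in range(40):          # 40 x 6h = 10 days
    state = step(state)
    trajectory.append(state)

print(len(trajectory) - 1, "autoregressive steps rolled out")
```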

Do you still want to hear more from us? Follow us on the Socials:

  • Nicolay: LinkedIn | X (formerly known as Twitter)
  • William: LinkedIn
1 year ago
20 minutes 15 seconds

Extra Gemini Unveiled: Google's Answer to GPT-4

In this extended episode of the Artificially Unintelligent Podcast, William and Nicolay delve into the exciting announcement of Google's new AI model, Gemini. They discuss Gemini's positioning as Google's response to OpenAI's GPT models, touching on its architecture, training data, and performance. The hosts highlight Gemini's unique feature of being natively multimodal, trained with interleaved data across different modalities like text, images, and audio. They speculate on the potential datasets used for training, including YouTube's vast content. The conversation shifts to Google DeepMind's probable contribution to Gemini, focusing on data mixture and training curriculum. William and Nicolay critically analyze Gemini's performance benchmarks, comparing them with existing models like GPT-4. They also ponder over Gemini's integration into Google's vast array of products and its potential impact on the search engine market. The episode provides a comprehensive look at Google's latest AI endeavor, making it a must-listen for AI enthusiasts and professionals keen to understand the evolving landscape of AI models.

Do you still want to hear more from us? Follow us on the Socials:

  • Nicolay: LinkedIn | X (formerly known as Twitter)
  • William: LinkedIn
1 year ago
29 minutes 57 seconds

E44 AI’s New Building Blocks: What are Foundation Models?

In this episode of Artificially Unintelligent, hosts William and Nicolay delve into the intricate world of foundation models in AI. They discuss what these models are – large-scale machine learning models trained on vast, diverse, and often unlabeled data – and their growing importance in various AI fields. From the complexities of handling unlabeled data to the evolution of foundation models across text, image, and audio domains, the conversation covers a range of topics. They examine models like GPT, DALL·E, and Whisper, providing insights into their functionalities and impact. Looking ahead, they speculate on the future of AI, imagining a world with domain-specific AGIs and advanced multimodal models. This episode offers a comprehensive look at foundation models, making it a valuable listen for AI professionals and enthusiasts eager to stay abreast of key developments in the field.

Do you still want to hear more from us? Follow us on the Socials:

  • Nicolay: LinkedIn | X (formerly known as Twitter)
  • William: LinkedIn
1 year ago
20 minutes 27 seconds

E43 Diving Deep into Code Llama: The AI Coding Specialist

In this episode of the Artificially Unintelligent Podcast, we delve into the fascinating world of Code Llama, a state-of-the-art large language model (LLM) that's reshaping the landscape of coding and development. We start by examining Code Llama's capabilities, which allow it to generate and interpret code from both code and natural language prompts. This versatility makes it an indispensable tool for programmers and developers.

We discuss how Code Llama, based on Llama 2 and further trained on coding-specific datasets, has become an advanced tool for coding tasks. The episode explores the variety of models under Code Llama, including foundational models, Python-specific specializations, and instruction-following models. Each model, with its own parameter count, caters to different needs and complexities in coding applications.

Our conversation also covers the wide application of Code Llama in programming tasks, emphasizing its state-of-the-art performance in infilling, handling large input contexts, and executing zero-shot instructions. The accessibility of Code Llama for both research and commercial use opens up new possibilities for developers and researchers alike.
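
As a hedged illustration of the infilling capability, here is roughly how the Hugging Face example invokes it: the tokenizer expands a <FILL_ME> marker into the model's prefix/suffix/middle special tokens. Exact conventions may vary by release, and running this needs a GPU and a sizable model download.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "codellama/CodeLlama-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Ask the model to fill in the missing docstring between prefix and suffix.
prompt = '''def remove_non_ascii(s: str) -> str:
    """<FILL_ME>"""
    return "".join(c for c in s if ord(c) < 128)
'''
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(inputs["input_ids"], max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```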

Additionally, we provide a comprehensive guide on getting started with Llama 2, discussing prerequisites, model acquisition, hosting options, and a range of how-to guides from fine-tuning to validation.

This episode is a treasure trove for anyone interested in the intersection of AI and coding, offering both a deep dive into the technical aspects of Code Llama and practical advice on leveraging this powerful tool in your own projects. Whether you're an AI engineer, developer, or tech enthusiast, join us for an enlightening journey into the world of AI-driven coding with Code Llama.

Do you still want to hear more from us? Follow us on the Socials:

  • Nicolay: LinkedIn | X (formerly known as Twitter)
  • William: LinkedIn
1 year ago
22 minutes 7 seconds

E42 A Deep Dive into PyTorch Lightning: Powering Your Next AI Projects

In this insightful episode of the Artificially Unintelligent Podcast, we dive deep into the world of machine learning frameworks, focusing on PyTorch and its high-level wrapper, PyTorch Lightning. We start by exploring the fundamentals of PyTorch, developed by Facebook's AI Research lab, renowned for its flexibility, dynamic computational graph, and strong GPU support. Understanding these features is crucial for any AI engineer looking to leverage PyTorch's powerful capabilities in deep learning projects.

We then shift our attention to PyTorch Lightning, a tool designed to streamline PyTorch code and enhance its readability and maintainability. We discuss key features like reduced boilerplate, scalability, and built-in best practices, which are especially beneficial for researchers and developers focused on model experimentation and algorithm refinement.
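
Here is a minimal LightningModule showing the boilerplate reduction we talk about: you write the model, the training_step, and the optimizer, and Trainer owns the loop, device placement, and checkpointing. The data is random and purely illustrative.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl

class LitRegressor(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(10, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = nn.functional.mse_loss(self.net(x), y)
        self.log("train_loss", loss)  # built-in logging, no manual bookkeeping
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

dataset = TensorDataset(torch.randn(256, 10), torch.randn(256, 1))
trainer = pl.Trainer(max_epochs=2, accelerator="auto")
trainer.fit(LitRegressor(), DataLoader(dataset, batch_size=32))
```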

Our discussion navigates the contrasts between these two frameworks, examining their design philosophies, handling of complexity and boilerplate code, scalability and performance tuning, community support, and use cases. We shed light on how PyTorch offers in-depth control and customization for complex AI projects, while PyTorch Lightning appeals to those seeking to minimize routine coding tasks without losing the essence of PyTorch's power.

Join us as we dissect these two prominent tools in the AI toolkit, offering valuable insights for both seasoned and aspiring AI engineers. Whether you're considering which framework to use for your next project or just curious about the latest in AI development tools, this episode is a treasure trove of information.

Do you still want to hear more from us? Follow us on the Socials:

  • Nicolay: LinkedIn | X (formerly known as Twitter)
  • William: LinkedIn
1 year ago
20 minutes 36 seconds

E41 Meta's AI Ambitions: Unpacking Llama 2, Generative Ads, and Legal Landscapes

In this episode of the Artificially Unintelligent Podcast, we delve into Meta's latest advancements and challenges in the realm of AI. We begin by exploring how Meta is leveraging third-party development to enhance Llama 2 and other AI software for more efficient metaverse technologies. Despite facing a tarnished reputation in Washington, D.C., Meta continues to innovate, notably through the rollout of generative AI tools designed to revolutionize content creation for advertisers. These tools, including the Background Generation tool, Image Expansion tool, and Text Variations tool, promise to enhance creativity and productivity on Meta's platforms.

We also introduce Meta's new conversational assistant, Meta AI, set to be integrated into WhatsApp, Messenger, and Instagram, marking a significant step in AI-assisted communication.

Our discussion extends to the various open-source AI models released by Meta, including SAM (the Segment Anything Model), a 'human-like' AI image creation model, SeamlessM4T for speech translation, and the Llama 2 large language model. We explore their functionalities, applications, and the latest developments like deployment on SageMaker Inf2 instances and integration with Intel AI hardware.

Further, we navigate the intricate landscape of regulatory challenges faced by Meta, from new subscription models in Europe in response to GDPR to potential bans on targeted advertising and heightened scrutiny in the U.S. We analyze how these regulatory shifts impact Meta's strategies and AI deployment.

Lastly, we touch on Meta's metaverse projects, discussing the company's ambitions and the technological advancements fueling these initiatives. From VR/AR applications to real-time decoding of brain activity images, we cover the breadth of Meta's AI-driven journey.

Join us for an insightful exploration of Meta's AI strategies, the challenges it faces, and the broader implications for the tech and AI industries.

Do you still want to hear more from us? Follow us on the Socials:

  • Nicolay: LinkedIn | X (formerly known as Twitter)
  • William: LinkedIn
1 year ago
23 minutes 36 seconds

E40 Tailors: Accelerating Sparse Tensor Algebra by Overbooking Buffer Capacity

We discuss a novel approach called Tailors that applies principles of speculation to tiling sparse tensors. By intelligently "overbooking" buffer capacities, Tailors is able to maximize reuse of on-chip memory for sparse workloads. We'll also explore Swiftiles, a statistical technique that estimates optimal tile sizes with minimal cost. Tune in to learn how Tailors delivers up to 52x speedups through truly fearless use of hardware resources.
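
To give a feel for the overbooking intuition, here is a toy statistical model of it (our own sketch, not the paper's hardware implementation): instead of sizing the buffer for the worst-case tile, size it from a sampled percentile of nonzero counts and accept that a small fraction of tiles will overflow and need slower handling.

```python
import numpy as np

rng = np.random.default_rng(0)
nnz_per_row = rng.poisson(lam=8, size=10_000)   # stand-in sparse-matrix statistics

sample = rng.choice(nnz_per_row, size=256)      # cheap Swiftiles-style sample
worst_case = int(nnz_per_row.max())
overbooked = int(np.percentile(sample, 99))     # cover ~99% of tiles

overflow_rate = (nnz_per_row > overbooked).mean()
print(f"worst-case buffer: {worst_case}, overbooked buffer: {overbooked}, "
      f"overflow rate: {overflow_rate:.2%}")
```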

Do you still want to hear more from us? Follow us on the Socials:

  • Nicolay: LinkedIn | X (formerly known as Twitter)
  • William: LinkedIn
1 year ago
15 minutes 50 seconds

E39 CUDA - What is it? How to use it? Why should you learn it?

Our episode dives into the fascinating world of CUDA and GPU computing. We'll explore CUDA's history and significance, the benefits and challenges of CUDA programming, and how it compares to other parallel computing frameworks. Discover how CUDA enables massive acceleration for deep learning, graphics, and more. Tune in to learn why CUDA remains such a powerful tool for developers and researchers. New episodes coming soon, so don't forget to subscribe!
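
For a taste of the programming model, here is a small CUDA kernel written from Python with Numba (one of several ways in; the native route is C++). It needs an NVIDIA GPU to run and is a sketch of the grid/block model, not a benchmark.

```python
import numpy as np
from numba import cuda

@cuda.jit
def vector_add(a, b, out):
    i = cuda.grid(1)          # global thread index across the whole grid
    if i < out.size:          # guard: the grid may overshoot the array
        out[i] = a[i] + b[i]

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
out = np.zeros_like(a)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
vector_add[blocks, threads_per_block](a, b, out)  # Numba handles the transfers

assert np.allclose(out, a + b)
```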

Do you still want to hear more from us? Follow us on the Socials:

  • Nicolay: LinkedIn | X (formerly known as Twitter)
  • William: LinkedIn
1 year ago
16 minutes 53 seconds

E38 NVIDIA's Transformation into an AI Powerhouse

From pioneering GPU computing to becoming a leader in autonomous vehicles and deep learning, join us as we track NVIDIA's evolution and the key strategies that have fueled its rise. We'll discuss its focus on performance over cost, its investments in AI research, and how it continues expanding into new growth areas to stay ahead of the curve in this fast-paced industry.

Do you still want to hear more from us? Follow us on the Socials:

  • Nicolay: LinkedIn | X (formerly known as Twitter)
  • William: LinkedIn
1 year ago
22 minutes 40 seconds

Extra OpenAI Unleashed - OpenAI is Leaping Towards Personalized Models // Turbocharging GPT-4 with Capabilities // A Boatload of Model Releases

The recent developments at OpenAI indicate a significant advancement in the customization and application of GPT technology:

  1. Custom GPTs for B2C: Previously, users had to choose a model for specific tasks, but now ChatGPT automatically selects the optimal model. The concept of persona-based GPTs suggests a future where AI can switch contexts or "personalities" as needed. This could lead to an economy centered around creating AI-tailored content.
  2. Functionality Improvements: The ability to call multiple functions within a single user query greatly enhances the versatility of GPT models.
  3. Assistant API: This new stateful API can support various tools simultaneously, maintain extensive message histories, and manage files, which might enhance the user experience despite its simple retrieval algorithm and higher cost.
  4. GPT-4 Enhancements: There are significant updates with GPT-4, including fine-tuning options and costs, Whisper v3 integration, and APIs for various services like DALL·E 3 and GPT-4 Vision. Notably, GPT-4 Turbo offers updated knowledge, a larger context window, faster speeds, and lower costs.
  5. Code and JSON Compatibility: The new GPT-4 can be constrained to output valid JSON, facilitating easier integration with other software, and includes an accessible Code Interpreter feature (a minimal JSON-mode sketch follows this list).
  6. Conversation Threads: The API now includes a feature for remembering conversation history, which simplifies development, though it might also anger a bunch of developers.
  7. Future Directions: Plans include orchestration for group interactions, multimodal capabilities like voice commands, bringing your own code execution environments, asynchronous support, a GPT store for sharing creations, and internal corporate GPT deployments.
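
As promised in point 5, here is a minimal sketch of JSON mode using the v1 OpenAI Python client that shipped alongside these announcements. The model name and prompt are illustrative, and an OPENAI_API_KEY in the environment is assumed.

```python
import json
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4-1106-preview",                # the GPT-4 Turbo preview
    response_format={"type": "json_object"},   # force syntactically valid JSON
    messages=[
        {"role": "system", "content": "Reply in JSON with keys 'city' and 'country'."},
        {"role": "user", "content": "Where is the Eiffel Tower?"},
    ],
)
data = json.loads(response.choices[0].message.content)
print(data["city"], data["country"])
```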

Do you still want to hear more from us? Follow us on the Socials:

  • Nicolay: LinkedIn | X (formerly known as Twitter)
  • William: LinkedIn
1 year ago
22 minutes 44 seconds

E37 How can we prevent the misuse of pictures and stop deepfakes? A look at recent developments.

With manipulated content and misinformation on the rise, trust in the digital ecosystem has never been more critical. We are entering a new era of creativity, where generative AI is expanding access to powerful new workflows and unleashing our most imaginative ideas. The Leica M11-P launch will advance the Content Authenticity Initiative's (CAI) goal of empowering photographers everywhere to attach Content Credentials to their images at the point of capture, creating a chain of authenticity from camera to cloud and enabling photographers to maintain a degree of control over their art, story, and context.

One way to prevent misuse of content is by proving its authenticity. This can be achieved by cryptographically signing the content at the point of creation: the content can then no longer be altered without invalidating the signature.
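
Here is a minimal sketch of that idea (the real C2PA/Content Credentials spec embeds a signed manifest in the file itself; this shows only the detached-signature core):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()  # would live in the camera's secure element
public_key = private_key.public_key()       # published so anyone can verify

image_bytes = b"...raw sensor data..."      # placeholder for the actual capture
signature = private_key.sign(image_bytes)

# Any later modification of image_bytes makes verification fail.
try:
    public_key.verify(signature, image_bytes)
    print("authentic")
except InvalidSignature:
    print("tampered")
```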

Watermarking is another method to prevent misuse of content. Watermarks can be visible or invisible. Visible watermarks are typically used for copyright protection, while invisible watermarks are used for digital rights management.
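
To make the invisible variant concrete, here is a toy least-significant-bit watermark. Production watermarks are far more robust (they survive compression, resizing, and cropping); this sketch of ours does not.

```python
import numpy as np

def embed(pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide one bit per pixel in the least significant bit."""
    out = pixels.copy()
    flat = out.ravel()
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return out

def extract(pixels: np.ndarray, n: int) -> np.ndarray:
    return pixels.ravel()[:n] & 1

image = np.random.randint(0, 256, size=(8, 8), dtype=np.uint8)
mark = np.array([1, 0, 1, 1, 0, 1, 0, 0], dtype=np.uint8)

watermarked = embed(image, mark)
assert np.array_equal(extract(watermarked, mark.size), mark)
print("max pixel change:", np.abs(watermarked.astype(int) - image.astype(int)).max())
```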

Artificial intelligence can also be used to protect content from misuse. For instance, MIT researchers have developed a tool called "PhotoGuard" that can help protect images from AI manipulation. The tool puts an invisible 'immunization' over images that stops AI models from manipulating the picture.

Do you still want to hear more from us? Follow us on the Socials:

  • Nicolay: LinkedIn | X (formerly known as Twitter)
  • William: LinkedIn


1 year ago
22 minutes 59 seconds

E36 Untangling the Decision Making Process of Neural Networks - A Paper Deep Dive of Zoom In: An Introduction to Circuits

Mechanistic interpretability refers to understanding a model by looking at how its internal components function and interact with each other. It's about breaking down the model into its smallest functional parts and explaining how these parts come together to produce the model's outputs.

Neural networks are complex, making it hard to make broad, factual statements about their behavior. However, focusing on small, specific parts of neural networks, known as "circuits", might offer a way to rigorously investigate them. These "circuits" can be edited and analyzed in a falsifiable manner, making them a potential foundation for a rigorous study of interpretability.

Zoom In approaches neural networks from a biological perspective, studying features and circuits to untangle their behavior.
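
In that spirit, here is a small, falsifiable "circuit edit" of our own: zero-ablate one channel in an early ResNet layer via a forward hook and measure how the outputs shift. The layer and channel are arbitrary choices for illustration, and the weights download on first run.

```python
import torch
import torchvision

model = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval()
x = torch.randn(1, 3, 224, 224)   # stand-in for a real input image

with torch.no_grad():
    baseline = model(x)

def ablate_channel(module, inputs, output):
    output[:, 7] = 0.0            # knock out channel 7 and see what changes
    return output

handle = model.layer1.register_forward_hook(ablate_channel)
with torch.no_grad():
    ablated = model(x)
handle.remove()

print("max logit shift from ablating one channel:",
      (baseline - ablated).abs().max().item())
```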

Do you still want to hear more from us? Follow us on the Socials:

  • Nicolay: LinkedIn | X (formerly known as Twitter)
  • William: LinkedIn
2 years ago
26 minutes 31 seconds

E35 How to control your ML experiments? A Look at Weights & Biases and MLflow

Weights & Biases is a machine learning platform that provides tools and infrastructure to help researchers, data scientists, and machine learning engineers track, visualize, and optimize their machine learning experiments. It is designed to make the development and deployment of machine learning models more efficient and effective.
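
A hedged, minimal tracking loop to show the API surface; the project name is a placeholder, and a wandb account plus `wandb login` are assumed.

```python
import random
import wandb

wandb.init(project="aiu-demo", config={"lr": 1e-3, "epochs": 5})

for epoch in range(wandb.config.epochs):
    loss = 1.0 / (epoch + 1) + random.random() * 0.05  # fake training signal
    wandb.log({"epoch": epoch, "loss": loss})          # streamed to the dashboard

wandb.finish()
```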


MLflow is an open-source platform for managing the end-to-end machine learning lifecycle. It was developed by Databricks and is widely used in the data science and machine learning community. MLflow provides a unified interface for various components of the machine learning process, making it easier to manage, track, and reproduce machine learning experiments.
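
And the equivalent minimal sketch in MLflow: runs are recorded locally under ./mlruns by default and browsable with `mlflow ui`, no server required.

```python
import random
import mlflow

with mlflow.start_run(run_name="aiu-demo"):
    mlflow.log_param("lr", 1e-3)
    for epoch in range(5):
        loss = 1.0 / (epoch + 1) + random.random() * 0.05  # fake training signal
        mlflow.log_metric("loss", loss, step=epoch)
```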

Do you still want to hear more from us? Follow us on the Socials:

  • Nicolay: LinkedIn | X (formerly known as Twitter)
  • William: LinkedIn


2 years ago
27 minutes 6 seconds

E34 AI Companies Flying Under the Radar

Welcome to today's podcast episode, where we shine a spotlight on some remarkable AI companies that have been flying under the radar. In this episode, we'll be exploring the innovative work and contributions of four companies that are making significant strides in the field of artificial intelligence: ARM, Broadcom, Snowflake, and dbt.

While these companies might not always grab the headlines like some of their more prominent counterparts, their impact on the AI landscape is undeniable. ARM, known for its cutting-edge semiconductor designs, plays a crucial role in powering AI-enabled devices and systems, providing the foundation for efficient and high-performance computing.

Broadcom, a global technology leader, has been quietly pushing the boundaries of AI with its advanced hardware and software solutions. From networking and connectivity to data center infrastructure, Broadcom's innovations are driving the AI revolution behind the scenes.

Snowflake, a rising star in the world of cloud data platforms, has been instrumental in enabling scalable and efficient AI-driven analytics. Its cloud-native architecture and data sharing capabilities empower organizations to harness the power of their data for AI applications, unlocking valuable insights and driving informed decision-making.

Last but not least, dbt (data build tool) has emerged as a vital player in the AI ecosystem. With its focus on transforming and modeling data, dbt provides a powerful framework for building scalable and maintainable data pipelines, a critical component in the AI workflow.

Do you still want to hear more from us? Follow us on the Socials:

  • Nicolay: LinkedIn | X (formerly known as Twitter)
  • William: LinkedIn
2 years ago
24 minutes 19 seconds

E33 A Path to Unlimited Generation? A First Look at Attention Sinks

Welcome to today's episode, where we're about to embark on an exciting journey into the latest research. In this episode, we'll be delving into a groundbreaking paper that has the potential to revolutionize the field of natural language processing. The paper introduces us to the concept of "Attention Sinks," a novel approach to improving the efficiency of inference with Large Language Models (LLMs) and extending their memory through a Key-Value (KV) Cache.

Traditionally, LLMs have faced challenges when it comes to handling large amounts of data and maintaining contextual information efficiently. However, Attention Sinks propose a solution: keep a handful of initial "sink" tokens in the KV cache alongside a sliding window of recent tokens, so that relevant information stays available and memory stays bounded during inference.
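
Here is a toy eviction policy capturing that idea (our sketch; real implementations evict per-layer key/value tensors, and the sizes here are illustrative): keep the first few "sink" tokens plus a sliding window of recent tokens, so the cache stays bounded however long generation runs.

```python
def evict(cache: list, num_sinks: int = 4, window: int = 1024) -> list:
    """Drop middle entries once the cache exceeds sinks + window."""
    if len(cache) <= num_sinks + window:
        return cache
    return cache[:num_sinks] + cache[-window:]

cache = list(range(5000))   # stand-in for per-token KV entries
cache = evict(cache)
print(len(cache), cache[:4], cache[-3:])   # 1028 entries: 4 sinks + the window
```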

Do you still want to hear more from us? Follow us on the Socials:

  • Nicolay: LinkedIn | X (formerly known as Twitter)
  • William: LinkedIn
2 years ago
21 minutes 43 seconds

E32 How to Use Docker? AI Engineer's Toolbelt.

Today, we're going to take a look at one of the essential tools of any data scientist: Docker. We'll show you how we think about it, what to use it for, how it differs from running code directly on your machine, and much more.
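
To ground the discussion, here is a minimal Dockerfile of the kind we have in mind: pin a base image, install dependencies, copy your code, define the entrypoint. The file names are placeholders for your own project.

```dockerfile
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first so this layer is cached across code changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

CMD ["python", "train.py"]
```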

Do you still want to hear more from us? Follow us on the Socials:

  • Nicolay: LinkedIn | X (formerly known as Twitter)
  • William: LinkedIn
2 years ago
22 minutes 18 seconds
