Data Science Decoded
Mike E
29 episodes
3 days ago
We discuss seminal mathematical papers (sometimes really old 😎) that have shaped and established the fields of machine learning and data science as we know them today. The goal of the podcast is to introduce you to the evolution of these fields from a mathematical and slightly philosophical perspective. We will discuss the contribution of these papers, not just from a pure math aspect but also how they influenced the discourse in the field, which areas were opened up as a result, and so on. Our podcast episodes are also available on our YouTube channel: https://youtu.be/wThcXx_vXjQ?si=vnMfs
Mathematics
Science
Episodes (20/29)
Data Science #29 - The Chi-squared Automatic Interaction Detection (CHAID) algorithm (1979)

In the 29th episode, we go over the 1979 paper by Gordon Vivian Kass that introduced the CHAID algorithm. CHAID (Chi-squared Automatic Interaction Detection) is a tree-based partitioning method introduced by G. V. Kass for exploring large categorical data sets by iteratively splitting records into mutually exclusive, exhaustive subsets based on the most statistically significant predictors rather than maximal explanatory power.

Unlike its predecessor, AID, CHAID embeds each split in a chi-squared significance test (with Bonferroni-corrected thresholds), allows multi-way divisions, and handles missing or "floating" categories gracefully. In practice, CHAID proceeds by merging predictor categories that are least distinguishable (stepwise grouping) and then testing whether any compound categories merit a further split, ensuring parsimonious, stable groupings without overfitting.


Through its significance-driven, multi-way splitting and built-in bias correction against predictors with many levels, CHAID yields intuitive decision trees that highlight the strongest associations in high-dimensional categorical data. In modern data science, CHAID's core ideas underpin contemporary decision-tree algorithms (e.g., CART, C4.5) and ensemble methods like random forests, where statistical rigor in splitting criteria and robust handling of missing data remain critical. Its emphasis on automated, hypothesis-driven partitioning has influenced automated feature selection, interpretable machine learning, and scalable analytics workflows that transform raw categorical variables into actionable insights.
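To make the splitting criterion concrete, here is a rough sketch of CHAID's core step, not Kass's full procedure: pick the predictor whose cross-tabulation with the target is most significant under a chi-squared test, with a crude Bonferroni-style correction. The function names and toy data are our own illustrations.

```python
# Sketch of CHAID's core step: choose the most significant predictor.
import pandas as pd
from scipy.stats import chi2_contingency

def best_chaid_split(df, predictors, target, alpha=0.05):
    best, best_p = None, 1.0
    for col in predictors:
        table = pd.crosstab(df[col], df[target])  # predictor x target counts
        _, p, _, _ = chi2_contingency(table)      # chi-squared independence test
        p = min(1.0, p * len(predictors))         # crude Bonferroni correction
        if p < best_p:
            best, best_p = col, p
    return (best, best_p) if best_p < alpha else (None, best_p)

df = pd.DataFrame({"region": list("NNSSNNSS") * 2,
                   "plan":   list("ABABABAB") * 2,
                   "churn":  list("YYNNYYNN") * 2})
print(best_chaid_split(df, ["region", "plan"], "churn"))  # ('region', ...)
```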

3 days ago
41 minutes 3 seconds

Data Science #28 - The Bloom filter algorithm

In the 28th episode, we go over Burton Bloom's Bloom filter from 1970, a groundbreaking data structure that enables fast, space-efficient set membership checks by allowing a small, controllable rate of false positives. Unlike traditional methods that store full data, Bloom filters use a compact bit array and multiple hash functions, trading exactness for speed and memory savings.


This idea transformed modern data science and big data systems, powering tools like Apache Spark, Cassandra, and Kafka, where fast filtering and memory efficiency are critical for performance at scale.
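To make the trade-off concrete, here is a minimal Bloom filter sketch; the array size, hash count, and SHA-256-based hashing scheme are our illustrative choices, not Bloom's original construction.

```python
# Minimal Bloom filter: k hash positions in an m-bit array.
# False positives are possible; false negatives are not.
import hashlib

class BloomFilter:
    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8 + 1)

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] >> (pos % 8) & 1
                   for pos in self._positions(item))

bf = BloomFilter()
bf.add("spark")
print("spark" in bf)  # True
print("kafka" in bf)  # almost certainly False
```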

3 days ago
39 minutes 15 seconds

Data Science #27 - The History of Least Squares (1877)

Mansfield Merriman's 1877 paper traces the historical development of the Method of Least Squares, crediting Legendre (1805) for introducing the method, Adrain (1808) for the first formal probabilistic proof, and Gauss (1809) for linking it to the normal distribution.


He evaluates multiple proofs, including Laplace’s (1810) general probability-based derivation, and highlights later refinements by various mathematicians.


The paper underscores the method’s fundamental role in statistical estimation, probability theory, and error minimization, solidifying its place in scientific and engineering applications.

1 month ago
32 minutes 9 seconds

Data Science #26 - The First Gradient Descent Algorithm by Cauchy (1847)

In this episode, we review Cauchy’s 1847 paper, which introduced an iterative method for solving simultaneous equations by minimizing a function using its partial derivatives. Instead of elimination, he proposed progressively reducing the function’s value through small updates, forming an early version of gradient descent. His approach allowed systematic approximation of solutions, influencing numerical optimization. This work laid the foundation for machine learning and AI, where gradient-based methods are essential. Modern stochastic gradient descent (SGD) and deep learning training algorithms follow Cauchy’s principle of stepwise minimization. His ideas power optimization in neural networks, making AI training efficient and scalable.
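A minimal sketch of that stepwise minimization in modern gradient-descent form; the quadratic objective, step size, and names are our illustrative choices, not Cauchy's notation.

```python
# Gradient descent on f(x, y) = x^2 + y^2: repeatedly step against the gradient.
def gradient_descent(grad, x, lr=0.1, steps=100):
    for _ in range(steps):
        g = grad(x)
        x = [xi - lr * gi for xi, gi in zip(x, g)]  # small update per step
    return x

grad = lambda x: [2 * x[0], 2 * x[1]]       # gradient of x^2 + y^2
print(gradient_descent(grad, [3.0, -4.0]))  # approaches the minimum [0, 0]
```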

2 months ago
33 minutes 14 seconds

Data Science #24 - The Expectation-Maximization (EM) algorithm paper review (1977)

In the 24th episode we go over the paper: Dempster, Arthur P., Nan M. Laird, and Donald B. Rubin. "Maximum likelihood from incomplete data via the EM algorithm." Journal of the Royal Statistical Society: Series B (Methodological) 39.1 (1977): 1-22. The Expectation-Maximization (EM) algorithm is an iterative method for finding Maximum Likelihood Estimates (MLEs) when data is incomplete or contains latent variables. It alternates between the E-step, where it computes the expected value of the missing data given current parameter estimates, and the M-step, where it maximizes the expected complete-data log-likelihood to update the parameters.


This process repeats until convergence, ensuring a monotonic increase in the likelihood function. EM is widely used in statistics and machine learning, especially in Gaussian Mixture Models (GMMs), hidden Markov models (HMMs), and missing data imputation.
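As a hedged, concrete sketch of the E-step/M-step alternation, here is EM for a two-component 1-D Gaussian mixture with fixed unit variances; the data, initialization, and iteration count are our illustrative choices.

```python
# EM for a 1-D two-component Gaussian mixture (unit variances held fixed).
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 200)])

mu = np.array([-1.0, 1.0])   # initial component means
pi = np.array([0.5, 0.5])    # initial mixing weights
for _ in range(50):
    # E-step: posterior responsibility of each component for each point
    dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2) / np.sqrt(2 * np.pi)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from the expected assignments
    pi = resp.mean(axis=0)
    mu = (resp * x[:, None]).sum(axis=0) / resp.sum(axis=0)

print(mu, pi)  # means near -2 and 3, weights near 0.5 each
```

Each iteration provably does not decrease the observed-data likelihood, which is the monotonicity property noted above.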


Its ability to handle incomplete data makes it invaluable for problems in clustering, anomaly detection, and probabilistic modeling. The algorithm guarantees stable convergence, though it may reach local maxima, depending on initialization. In modern data science and AI, EM has had a profound impact, enabling unsupervised learning in natural language processing (NLP), computer vision, and speech recognition.


It serves as a foundation for probabilistic graphical models like Bayesian networks and Variational Inference, which power applications such as chatbots, recommendation systems, and deep generative models.


Its iterative nature has also inspired optimization techniques in deep learning, such as Expectation-Maximization inspired variational autoencoders (VAEs), demonstrating its ongoing influence in AI advancements.

3 months ago
32 minutes 47 seconds

Data Science #23 - The Markov Chain Monte Carlo (MCMC) paper review (1953)

In the 23rd episode we review the 1953 paper Metropolis, Nicholas, et al. "Equation of state calculations by fast computing machines." The Journal of Chemical Physics 21.6 (1953): 1087-1092, which introduced the Monte Carlo method for simulating molecular systems, particularly focusing on two-dimensional rigid-sphere models.

The study used random sampling to compute equilibrium properties like pressure and density, demonstrating a feasible approach for solving analytically intractable statistical mechanics problems. The work pioneered the Metropolis algorithm, a key development in what later became known as Markov Chain Monte Carlo (MCMC) methods.

By validating the Monte Carlo technique against free volume theories and virial expansions, the study showcased its accuracy and set the stage for MCMC as a powerful tool for exploring complex probability distributions. This breakthrough has had a profound impact on modern AI and ML, where MCMC methods are now central to probabilistic modeling, Bayesian inference, and optimization.

These techniques enable applications like generative models, reinforcement learning, and neural network training, supporting the development of robust, data-driven AI systems.
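For intuition, here is a minimal Metropolis sampler with a symmetric random-walk proposal targeting a standard normal density; the target and proposal width are our illustrative choices rather than the paper's rigid-sphere system.

```python
# Metropolis algorithm: propose a local move, accept with prob min(1, p'/p).
import math, random

def metropolis(log_p, x0=0.0, n=10000, width=1.0):
    x, samples = x0, []
    for _ in range(n):
        prop = x + random.uniform(-width, width)        # symmetric proposal
        if math.log(random.random()) < log_p(prop) - log_p(x):
            x = prop                                    # accept; else stay put
        samples.append(x)
    return samples

draws = metropolis(lambda z: -0.5 * z * z)   # log-density of N(0, 1)
print(sum(draws) / len(draws))               # sample mean near 0
```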


YouTube: https://www.youtube.com/watch?v=gWOawt7hc88&t

4 months ago
37 minutes 54 seconds

Data Science #22 - The Theory of Dynamic Programming, paper review (1954)
We review Richard Bellman's "The Theory of Dynamic Programming" paper from 1954 which revolutionized how we approach complex decision-making problems through two key innovations. First, his Principle of Optimality established that optimal solutions have a recursive structure - each sub-decision must be optimal given the state resulting from previous decisions. Second, he introduced the concept of focusing on immediate states rather than complete historical sequences, providing a practical way to tackle what he termed the "curse of dimensionality." These foundational ideas directly shaped modern artificial intelligence, particularly reinforcement learning. The mathematical framework Bellman developed - breaking complex problems into smaller, manageable subproblems and making decisions based on current state - underpins many contemporary AI achievements, from game-playing agents like AlphaGo to autonomous systems and robotics. His work essentially created the theoretical backbone that enables modern AI systems to handle sequential decision-making under uncertainty. The principles established in this 1954 paper continue to influence how we design AI systems today, particularly in reinforcement learning and neural network architectures dealing with sequential decision problems.
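To make the Principle of Optimality concrete, here is a tiny value-iteration sketch on an assumed five-state chain (the environment, rewards, and discount factor are our illustration, not Bellman's examples): the value of a state is the best immediate reward plus the discounted value of the state it leads to.

```python
# Value iteration on a 5-state chain; reward 1 for reaching the last state.
states = range(5)

def step(s, a):                      # action a is -1 (left) or +1 (right)
    s2 = max(0, min(4, s + a))
    return s2, (1.0 if s2 == 4 else 0.0)

V = [0.0] * 5
for _ in range(100):                 # repeated Bellman backups converge
    V = [max(r + 0.9 * V[s2]
             for s2, r in (step(s, a) for a in (-1, 1)))
         for s in states]

print([round(v, 2) for v in V])      # values increase toward state 4
```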
4 months ago
47 minutes 46 seconds

Data Science #21 - Steps Toward Artificial Intelligence (1961)
In the 1st episode of the second season we review the legendary Marvin Minsky's "Steps Toward Artificial Intelligence" from 1961. It is a foundational work in the field of AI that outlines the challenges and methodologies for developing intelligent problem-solving systems. The paper categorizes AI challenges into five key areas: Search, Pattern Recognition, Learning, Planning, and Induction. It emphasizes how computers, limited by their ability to perform only programmed actions, can enhance problem-solving efficiency through heuristic methods, learning from patterns, and planning solutions to narrow down possible options. The significance of this work lies in its conceptual framework, which established a systematic approach to AI development. Minsky highlighted the need for machines to mimic cognitive functions like recognizing patterns and learning from experience, which form the basis of modern machine learning algorithms. His emphasis on heuristic methods provided a pathway to make computational processes more efficient and adaptive by reducing exhaustive searches and using past data to refine problem-solving strategies. The paper is pivotal as it set the stage for advancements in AI by introducing the integration of planning, adaptive learning, and pattern recognition into computational systems. Minsky's insights continue to influence AI research and development, including neural networks, reinforcement learning, and autonomous systems, bridging theoretical exploration and practical applications in the quest for artificial intelligence.
5 months ago
59 minutes 39 seconds

Data Science #20 - The Cramér-Rao bound (1945)
In the 20th episode, we review the seminal paper by Rao which introduced the Cramér-Rao bound: Rao, Calyampudi Radhakrishna (1945). "Information and the accuracy attainable in the estimation of statistical parameters". Bulletin of the Calcutta Mathematical Society 37: 81-89.

The Cramér-Rao Bound (CRB) sets a theoretical lower limit on the variance of any unbiased estimator of a parameter. It is derived from the Fisher information, which quantifies how much the data tell us about the parameter. The bound provides a benchmark for assessing the precision of estimators and helps identify efficient estimators that achieve this minimum variance. The CRB connects to key statistical concepts we have covered previously:

Consistency: estimators approach the true parameter as the sample size grows, ensuring they become arbitrarily accurate in the limit. While consistency guarantees convergence, it does not necessarily imply the estimator achieves the CRB in finite samples.

Efficiency: an estimator is efficient if it reaches the CRB, minimizing variance while remaining unbiased. Efficiency represents the optimal use of data to achieve the smallest possible estimation error.

Sufficiency: working with sufficient statistics ensures no loss of information about the parameter, increasing the chances of achieving the CRB.

Additionally, the CRB relates to KL divergence, as Fisher information reflects the curvature of the likelihood function and the divergence between true and estimated distributions. In modern data science and AI, the CRB plays a foundational role in uncertainty quantification, probabilistic modeling, and optimization. It informs the design of Bayesian inference systems, regularized estimators, and gradient-based methods like natural gradient descent. By highlighting the tradeoffs between bias, variance, and information, the CRB provides theoretical guidance for building efficient and robust machine learning models.
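As a quick numeric illustration (our own toy example, not from Rao's paper): for n i.i.d. draws from N(theta, 1), the Fisher information per observation is 1, so the CRB for n samples is 1/n, and the sample mean attains it.

```python
# Empirical check that the sample mean attains the CRB for N(theta, 1).
import numpy as np

rng = np.random.default_rng(1)
n, reps = 50, 20000
means = rng.normal(0.0, 1.0, (reps, n)).mean(axis=1)  # 20000 sample means
print(means.var(), 1 / n)  # empirical variance ~ CRB = 0.02
```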
5 months ago
59 minutes 42 seconds

Data Science #19 - The Kullback–Leibler divergence paper (1951)

In this episode we go over the Kullback-Leibler (KL) divergence paper, "On Information and Sufficiency" (1951). It introduced a measure of the difference between two probability distributions, quantifying the cost of assuming one distribution when another is true.
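For discrete distributions the measure is D(P || Q) = sum_i p_i log(p_i / q_i), the expected extra log-loss from assuming Q when data follow P; a minimal sketch with our own example distributions:

```python
# Discrete KL divergence D(P || Q); zero iff P == Q, otherwise positive.
import math

def kl(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

print(kl([0.5, 0.5], [0.9, 0.1]))  # cost of modeling a fair coin as biased
print(kl([0.5, 0.5], [0.5, 0.5]))  # 0.0
```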


This concept, rooted in Shannon's information theory (which we reviewed in previous episodes), became fundamental in hypothesis testing, model evaluation, and statistical inference. KL divergence has profoundly impacted data science and AI, forming the basis for techniques like maximum likelihood estimation, Bayesian inference, and generative models such as variational autoencoders (VAEs).


It measures distributional differences, enabling optimization in clustering, density estimation, and natural language processing. In AI, KL divergence ensures models generalize well by aligning training and real-world data distributions. Its role in probabilistic reasoning and adaptive decision-making bridges theoretical information theory and practical machine learning, cementing its relevance in modern technologies.

5 months ago
52 minutes 41 seconds

Data Science #18 - The k-nearest neighbors algorithm (1951)

In the 18th episode we go over the original k-nearest neighbors paper: Fix, Evelyn; Hodges, Joseph L. (1951). "Discriminatory Analysis. Nonparametric Discrimination: Consistency Properties." USAF School of Aviation Medicine, Randolph Field, Texas. It introduces a nonparametric method for classifying a new observation z as belonging to one of two distributions, F or G, without assuming specific parametric forms. Using k-nearest neighbor density estimates, the paper implements a likelihood ratio test for classification and rigorously proves the method's consistency.
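In the spirit of the paper's neighborhood-based rule, here is a minimal modern k-NN classifier sketch; the Euclidean metric and toy data are our illustrative choices.

```python
# k-NN classification: majority vote among the k nearest labeled points.
from collections import Counter

def knn_predict(train, z, k=3):
    """train is a list of (features, label) pairs."""
    nearest = sorted(train, key=lambda p: sum((a - b) ** 2
                                              for a, b in zip(p[0], z)))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

data = [((0, 0), "F"), ((0, 1), "F"), ((5, 5), "G"), ((6, 5), "G")]
print(knn_predict(data, (1, 1)))  # "F": the F-labeled points are closer
```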


The work is a precursor to the modern k-Nearest Neighbors (KNN) algorithm and established nonparametric approaches as viable alternatives to parametric methods. Its focus on consistency and data-driven learning influenced many modern machine learning techniques, including kernel density estimation and decision trees.


This paper's impact on data science is significant, introducing concepts like neighborhood-based learning and flexible discrimination.


These ideas underpin algorithms widely used today in healthcare, finance, and artificial intelligence, where robust and interpretable models are critical.

6 months ago
44 minutes 1 second

Data Science #17 - The Monte Carlo Algorithm (1949)
We review the original Monte Carlo paper from 1949: Metropolis, Nicholas, and Stanislaw Ulam. "The Monte Carlo method." Journal of the American Statistical Association 44.247 (1949): 335-341. The Monte Carlo method uses random sampling to approximate solutions for problems that are too complex for analytical methods, such as integration, optimization, and simulation. Its power lies in leveraging randomness to solve high-dimensional and nonlinear problems, making it a fundamental tool in computational science. In modern data science and AI, Monte Carlo drives key techniques like Bayesian inference (via MCMC) for probabilistic modeling, reinforcement learning for policy evaluation, and uncertainty quantification in predictions. It is essential for handling intractable computations in machine learning and AI systems. By combining scalability and flexibility, Monte Carlo methods enable breakthroughs in areas like natural language processing, computer vision, and autonomous systems. Its ability to approximate solutions underpins advancements in probabilistic reasoning, decision-making, and optimization in the era of AI and big data.
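The core idea fits in a few lines; as our own toy example (not the paper's), estimate the integral of x^2 over [0, 1], whose true value is 1/3, by averaging over uniform random samples.

```python
# Monte Carlo integration: E[f(U)] approximates the integral of f on [0, 1].
import random

n = 100_000
est = sum(random.random() ** 2 for _ in range(n)) / n
print(est)  # close to 1/3, with error shrinking like 1/sqrt(n)
```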
6 months ago
38 minutes 11 seconds

Data Science #16 - The First Stochastic Descent Algorithm (1951)

In the 16th episode we go over the seminal 1951 paper: Robbins, Herbert, and Sutton Monro. "A stochastic approximation method." The Annals of Mathematical Statistics (1951): 400-407. The paper introduced the stochastic approximation method, a groundbreaking iterative technique for finding the root of an unknown function using noisy observations.
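A minimal Robbins-Monro sketch under the usual assumptions, with decreasing step sizes a_n = c / n; the target function and noise model are our illustrative choices.

```python
# Robbins-Monro: find the root of an unknown M(x) from noisy observations.
import random

def robbins_monro(noisy_m, x0=0.0, steps=5000, c=1.0):
    x = x0
    for n in range(1, steps + 1):
        x -= (c / n) * noisy_m(x)   # decreasing steps average out the noise
    return x

# True M(x) = x - 2 observed with Gaussian noise; the root is x = 2.
noisy = lambda x: (x - 2.0) + random.gauss(0, 1)
print(robbins_monro(noisy))  # near 2
```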


This method enabled real-time, adaptive estimation without requiring the function’s explicit form, revolutionizing statistical practices in fields like bioassay and engineering. Robbins and Monro’s work laid the ideas behind stochastic gradient descent (SGD), the primary optimization algorithm in modern machine learning and deep learning. SGD’s efficiency in training neural networks through iterative updates is directly rooted in this method.


Additionally, their approach to handling binary feedback inspired early concepts in reinforcement learning, where algorithms learn from sparse rewards and adapt over time. The paper's principles are fundamental to nonparametric methods, online learning, and dynamic optimization in data science and AI today.


By enabling sequential, probabilistic updates, the Robbins-Monro method supports adaptive decision-making in real-time applications such as recommender systems, autonomous systems, and financial trading, making it a cornerstone of modern AI’s ability to learn in complex, uncertain environments.

6 months ago
42 minutes 20 seconds

Data Science #15 - The First Decision Tree Algorithm (1963)

In the 15th episode we went over the paper "Problems in the Analysis of Survey Data, and a Proposal" by James N. Morgan and John A. Sonquist from 1963. It highlights seven key issues in analyzing complex survey data, such as high dimensionality, categorical variables, measurement errors, sample variability, intercorrelations, interaction effects, and causal chains.


These challenges complicate efforts to draw meaningful conclusions about relationships between factors like income, education, and occupation. To address these problems, the authors propose a method that sequentially splits data by identifying features that reduce unexplained variance, much like modern decision trees.


The method focuses on maximizing explained variance by reducing the unexplained sum of squared errors (SSE), capturing interaction effects, and accounting for sample variability.
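A minimal sketch of that splitting criterion (the names and toy data are ours, not the paper's): choose the binary threshold on a feature that most reduces the within-group SSE of the target.

```python
# Variance-reduction split in the spirit of Morgan and Sonquist's proposal.
def sse(ys):
    if not ys:
        return 0.0
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys)

def best_split(xs, ys):
    best = None
    for t in sorted(set(xs))[:-1]:                 # candidate thresholds
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        gain = sse(ys) - sse(left) - sse(right)    # unexplained SSE removed
        if best is None or gain > best[1]:
            best = (t, gain)
    return best

print(best_split([1, 2, 3, 10, 11, 12], [5, 6, 5, 20, 21, 19]))  # splits at 3
```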


It handles both categorical and continuous variables while respecting logical causal priorities. This paper has had a significant influence on modern data science and AI, laying the groundwork for decision trees, CART, random forests, and boosting algorithms.


Its method of splitting data to reduce error, handle interactions, and respect feature hierarchies is foundational in many machine learning models used today. Link to full paper at our website:

https://datasciencedecodedpodcast.com/episode-15-the-first-decision-tree-algorithm-1963

7 months ago
36 minutes 35 seconds

Data Science #14 - The original k-means algorithm paper review (1957)

In the 14th episode we go over Stuart Lloyd's 1957 paper, "Least Squares Quantization in PCM" (which was only published in 1982). The k-means algorithm can be traced back to this paper. Lloyd introduces an approach to quantization in pulse-code modulation (PCM), which amounts to a 1-D k-means clustering. Lloyd discusses how quantization intervals and corresponding quantum values should be adjusted based on signal amplitude distributions to minimize noise, improving efficiency in PCM systems.


He derives an optimization framework that minimizes quantization noise under finite quantization schemes. Lloyd’s algorithm bears significant resemblance to the k-means clustering algorithm, both seeking to minimize a sum of squared errors.

In Lloyd's method, the quantization process is analogous to assigning data points (signal amplitudes) to clusters (quantization intervals) based on proximity to centroids (quantum values), with the centroids updated iteratively based on the mean of the assigned points.
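A minimal 1-D sketch of that iteration (the toy amplitudes and initial quantum values are our illustration): assign each point to its nearest centroid, then move each centroid to the mean of its assigned points.

```python
# Lloyd's iteration in 1-D, i.e. k-means on signal amplitudes.
def lloyd(points, centroids, iters=20):
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:                       # assignment step
            i = min(range(len(centroids)),
                    key=lambda j: (p - centroids[j]) ** 2)
            clusters[i].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]  # update step
                     for i, c in enumerate(clusters)]
    return centroids

print(lloyd([1.0, 1.2, 0.8, 9.0, 9.5, 8.5], [0.0, 5.0]))  # ~[1.0, 9.0]
```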

This iterative process of recalculating quantization values mirrors k-means’ recalculation of cluster centroids. While Lloyd’s work focuses on signal processing in telecommunications, its underlying principles of optimizing quantization have clear parallels with the k-means method used in clustering tasks in data science. The paper's influence on modern data science is profound. Lloyd's algorithm not only laid the groundwork for k-means but also provided a fundamental understanding of quantization error minimization, critical in fields such as machine learning, image compression, and signal processing.


The algorithm's simplicity, combined with its iterative nature, has led to its wide adoption in various data science applications. Lloyd's work remains a cornerstone in both the theory of clustering algorithms and practical applications in signal and data compression technologies.

7 months ago
46 minutes 57 seconds

Data Science #13 - Kolmogorov complexity paper review (1965) - Part 2

In the 13th episode we review the second part of Kolmogorov's seminal paper: "Three Approaches to the Quantitative Definition of Information." Problems of Information Transmission 1.1 (1965): 1-7. The paper introduces algorithmic complexity (or Kolmogorov complexity), which measures the amount of information in an object based on the length of the shortest program that can describe it.

This shifts focus from Shannon entropy, which measures uncertainty probabilistically, to understanding the complexity of structured objects.


Kolmogorov argues that systems like texts or biological data, governed by rules and patterns, are better analyzed by their compressibility—how efficiently they can be described—rather than by random probabilistic models. In modern data science and AI, these ideas are crucial. Machine learning models, like neural networks, aim to compress data into efficient representations to generalize and predict. Kolmogorov complexity underpins the idea of minimizing model complexity while preserving key information, which is essential for preventing overfitting and improving generalization.
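Kolmogorov complexity itself is uncomputable, but compressed length is a common practical proxy for it; a rough illustration with our own data:

```python
# Structured data compresses far better than random data.
import os, zlib

structured = b"ab" * 5000          # highly patterned: a short program suffices
random_bytes = os.urandom(10000)   # incompressible with high probability
print(len(zlib.compress(structured)))    # tens of bytes
print(len(zlib.compress(random_bytes)))  # roughly 10000 bytes
```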


In AI, tasks such as text generation and data compression directly apply Kolmogorov's concept of finding the most compact representation, making his work foundational for building efficient, powerful models. This is part 2 out of 2 episodes covering this paper (the first one is in Episode 12).

7 months ago
29 minutes 25 seconds

Data Science #12 - Kolmogorov complexity paper review (1965) - Part 1

In the 12th episode we review the first part of Kolmogorov's seminal paper:

"3 approaches to the quantitative definition of information’." Problems of information transmission 1.1 (1965): 1-7. The paper introduces algorithmic complexity (or Kolmogorov complexity), which measures the amount of information in an object based on the length of the shortest program that can describe it.

This shifts focus from Shannon entropy, which measures uncertainty probabilistically, to understanding the complexity of structured objects.


Kolmogorov argues that systems like texts or biological data, governed by rules and patterns, are better analyzed by their compressibility—how efficiently they can be described—rather than by random probabilistic models. In modern data science and AI, these ideas are crucial. Machine learning models, like neural networks, aim to compress data into efficient representations to generalize and predict. Kolmogorov complexity underpins the idea of minimizing model complexity while preserving key information, which is essential for preventing overfitting and improving generalization.

In AI, tasks such as text generation and data compression directly apply Kolmogorov's concept of finding the most compact representation, making his work foundational for building efficient, powerful models. This is part 1 out of 2 episodes covering this paper (part 2 is in Episode 13).

8 months ago
38 minutes 53 seconds

Data Science #11 - The original Perceptron paper by Frank Rosenblatt (1958)

Frank Rosenblatt's 1958 paper, "The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain," introduces the perceptron, an early neural network model inspired by how the brain stores and processes information. Rosenblatt explores two theories: one where sensory data is stored as coded representations, and another, which he advocates, where learning occurs through forming new neural connections.


The perceptron illustrates this connectionist approach by mimicking how neurons process input and reinforce connections based on experience. The perceptron operates by passing sensory input through a network of neurons, where weights on connections adjust with each stimulus, enabling the system to recognize patterns and classify information. Rosenblatt emphasizes the probabilistic nature of learning in the perceptron, which mirrors how biological systems might generalize and adapt based on exposure to different stimuli. His model serves as a theoretical framework for understanding both biological and artificial neural systems. The paper's significance to modern data science lies in its foundational role in developing machine learning. The perceptron model directly influenced the creation of more advanced neural networks, including today's deep learning models.
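A minimal sketch of that error-driven learning rule; the linearly separable toy data (logical AND) and learning rate are our choices, not Rosenblatt's experiments.

```python
# Perceptron learning rule: adjust weights only on misclassified examples.
def train_perceptron(data, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:                  # labels y are -1 or +1
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else -1
            if pred != y:                  # reinforce connections on error
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

data = [((0, 0), -1), ((0, 1), -1), ((1, 0), -1), ((1, 1), 1)]
print(train_perceptron(data))  # weights and bias that separate AND
```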


Though limited in handling complex, non-linear data, the perceptron established key principles, such as weighted connections and learning from data, that underpin today's neural networks.

8 months ago
1 hour 3 minutes 29 seconds

Data Science #10 - The original principal component analysis (PCA) paper by Harold Hotelling (1933)

Hotelling, Harold. "Analysis of a complex of statistical variables into principal components." Journal of Educational Psychology 24.6 (1933): 417.


This seminal work by Harold Hotelling on PCA remains highly relevant to modern data science because PCA is still widely used for dimensionality reduction, feature extraction, and data visualization. The foundational concepts of eigenvalue decomposition and maximizing variance in orthogonal directions form the backbone of PCA, which is now automated through numerical methods such as Singular Value Decomposition (SVD). Modern PCA handles much larger datasets with advanced variants (e.g., Kernel PCA, Sparse PCA), but the core ideas from the paper—identifying and interpreting key components to reduce dimensionality while preserving the most important information—are still crucial in handling high-dimensional data efficiently today.
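A minimal PCA sketch via eigendecomposition of the sample covariance matrix, the route Hotelling's analysis follows; the synthetic data are our illustration, and modern practice typically computes the same result via SVD.

```python
# PCA: directions of maximal variance are eigenvectors of the covariance.
import numpy as np

rng = np.random.default_rng(2)
scale = np.diag([3.0, 1.0, 0.1])             # three variables, unequal spread
X = rng.normal(size=(200, 3)) @ scale
Xc = X - X.mean(axis=0)                      # center the data

vals, vecs = np.linalg.eigh(np.cov(Xc.T))    # eigenvalues in ascending order
components = vecs[:, ::-1]                   # principal axes by variance
scores = Xc @ components[:, :2]              # project onto the top 2 PCs

print(vals[::-1])  # explained variances, roughly 9, 1, 0.01
```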

8 months ago
55 minutes 41 seconds

Data Science #9 - The Unreasonable Effectiveness of Mathematics in the Natural Sciences, Eugene Wigner (1960)

In this special episode, Daniel Aronovich joins forces with the 632 nm podcast. In this timeless paper Wigner reflects on how mathematical concepts, often developed independently of any concern for the physical world, turn out to be remarkably effective in describing natural phenomena.


This effectiveness is "unreasonable" because there is no clear reason why abstract mathematical constructs should align so well with the laws governing the universe. Full paper is at our website:

https://datasciencedecodedpodcast.com/episode-9-the-unreasonable-effectiveness-of-mathematics-in-natural-sciences-eugene-wigner-1960

8 months ago
1 hour 24 minutes 32 seconds
