Data Science Decoded
Author: Mike E
We discuss seminal mathematical papers (sometimes really old) that have shaped and established the fields of machine learning and data science as we know them today. The goal of the podcast is to introduce you to the evolution of these fields from a mathematical and slightly philosophical perspective. We discuss the contribution of these papers not just from a pure math aspect, but also how they influenced the discourse in the field, which areas were opened up as a result, and so on. Our podcast episodes are also available on our YouTube channel: https://youtu.be/wThcXx_vXjQ?sivnMfs
Language: en
Genres: Mathematics, Science
Data Science #18 - The k-nearest neighbors algorithm (1951)
Episode 20
Monday, 25 November, 2024
In the 18th episode we go over the original k-nearest neighbors algorithm: Fix, Evelyn; Hodges, Joseph L. (1951). "Discriminatory Analysis. Nonparametric Discrimination: Consistency Properties." USAF School of Aviation Medicine, Randolph Field, Texas.

The authors introduce a nonparametric method for classifying a new observation z as belonging to one of two distributions, F or G, without assuming specific parametric forms. Using k-nearest neighbor density estimates, the paper implements a likelihood ratio test for classification and rigorously proves the method's consistency.

The work is a precursor to the modern k-Nearest Neighbors (KNN) algorithm and established nonparametric approaches as viable alternatives to parametric methods. Its focus on consistency and data-driven learning influenced many modern machine learning techniques, including kernel density estimation and decision trees. The paper's impact on data science is significant: it introduced concepts like neighborhood-based learning and flexible discrimination, ideas that underpin algorithms widely used today in healthcare, finance, and artificial intelligence, where robust and interpretable models are critical.
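To make the idea concrete, here is a minimal Python sketch of the classification rule described above: estimate each density at z with a k-nearest-neighbor density estimator and compare their ratio to a threshold. The function name, parameters, and the choice of Euclidean distance are illustrative assumptions for this sketch, not details taken from the paper.

```python
import numpy as np

def knn_likelihood_ratio(z, F_sample, G_sample, k=5, threshold=1.0):
    """Classify point z as drawn from F or G via k-NN density estimates.

    Each density is estimated with the k-nearest-neighbor estimator
    f_hat(z) = k / (m * V_k), where V_k is the volume of the smallest
    ball around z containing k points of the sample. The classification
    is a likelihood ratio test: f_hat / g_hat compared to a threshold.
    """
    z = np.asarray(z, dtype=float)
    F_sample = np.asarray(F_sample, dtype=float)
    G_sample = np.asarray(G_sample, dtype=float)
    m, n = len(F_sample), len(G_sample)

    # Distance from z to its k-th nearest neighbor in each sample
    r_f = np.sort(np.linalg.norm(F_sample - z, axis=1))[k - 1]
    r_g = np.sort(np.linalg.norm(G_sample - z, axis=1))[k - 1]

    # Ball volumes share the constant unit-ball factor c_d, which
    # cancels in the ratio, so comparing r**d is enough.
    d = z.shape[0]
    f_hat = k / (m * r_f ** d)
    g_hat = k / (n * r_g ** d)

    return "F" if f_hat / g_hat > threshold else "G"

# Example with hypothetical data: two Gaussian samples in 2D
rng = np.random.default_rng(0)
F = rng.normal(0.0, 1.0, size=(100, 2))
G = rng.normal(2.0, 1.0, size=(100, 2))
print(knn_likelihood_ratio([0.2, 0.1], F, G, k=5))  # expected: "F"
```

Note that the familiar majority-vote KNN classifier arises as a special case of this likelihood-ratio view when the two samples are pooled and neighbors are simply counted, which is why the paper is regarded as the origin of the modern algorithm.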