Antoine Broyelle

Recent Posts

  • February 06, 2022

    Learning to Land on Mars with Reinforcement Learning

    Once in a while, a friend who is either learning to code or tackling complex problems drags me back to CodinGame. However, I tend to put one type of challenge aside: optimal control systems. Problems of this kind often require hand-crafting a good cost function and modeling the transition dynamics. But what if we could solve the challenge without coding a control policy? This is the story of how I landed a rover on Mars using reinforcement learning.

  • September 26, 2021

    Why Isn't Active Learning Widely Adopted?

    Recently, I spent quite some time learning and playing with active learning (AL). The claim of active learning is quite appealing: a model can achieve the same performance, or even better, with fewer annotated samples if it can select which specific instances need to be labeled. Sounds promising, right? In this article, I reflect on the promises of active learning and show that the picture is not all bright. Active learning has a few drawbacks; however, it remains a viable solution when building datasets at scale or when tackling a wide diversity of edge cases. A short sketch of one query strategy, uncertainty sampling, follows this list.

  • March 21, 2021

    Relationship Between Machine Learning Methods

    Back when I was doing my master's in Data Science and Cognitive Systems at KTH, I had an interesting conversation about the number of algorithms a data scientist should know to do their job. My friend was baffled by the number of techniques we had to learn for the exams. As a rather lazy student, I was a bit perplexed too. The truth is that many methods and tricks can be used within widely different models.

  • February 12, 2021

    K-Nearest Neighbors (KNN) - Visualizing the Variance

    In chapter 2 of The Elements of Statistical Learning, the authors compare least squares regression and nearest neighbors in terms of bias and variance. I thought it would be interesting to have an interactive visualization to understand why k-NN is said to have low bias and high variance. A small resampling sketch after this list illustrates the variance numerically.

  • January 27, 2021

    Deep (Deep, Deep) Dive into K-Means Clustering

    Recently, my interest in outlier detection and clustering techniques spiked. The rabbit hole was deeper than anticipated! Let's first talk about k-means clustering. For completeness, I provide a high-level description of the algorithm, some step-by-step animations, a few equations for math lovers, and a Python implementation using NumPy. Finally, I cover the limitations and variants of k-means. You can scroll to the end if you want to test your understanding with some typical interview questions. A minimal NumPy sketch of the core loop also appears after this list.

  • May 17, 2020

    Convolution Interactive Sandbox

    Play around with convolutions in this interactive visualization. Choose different functions, swap kernels, and explore the influence of each parameter. A short numeric example follows below.
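
The active-learning post above claims a model can reach the same performance with fewer labels if it picks what gets annotated. As a companion, here is a minimal sketch of pool-based uncertainty sampling, one classic query strategy. The logistic-regression learner, seed-set size, and query budget are illustrative choices of mine, not a recipe from the article.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy pool-based setup: a tiny labeled seed set plus a large pool of
# unlabeled points we may ask an annotator about.
X, y = make_classification(n_samples=1000, random_state=0)
labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
pool = [i for i in range(len(X)) if i not in labeled]

for _ in range(20):  # 20 annotation rounds
    model = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    # Uncertainty sampling: query the pool point whose predicted class
    # probability is closest to 0.5, i.e. where the model is least sure.
    proba = model.predict_proba(X[pool])[:, 1]
    query = pool[int(np.argmin(np.abs(proba - 0.5)))]
    labeled.append(query)  # a real system would ask a human annotator here
    pool.remove(query)
```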
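
For the k-NN post, the "high variance" claim can also be checked numerically: refit the model on many resampled training sets and watch how much the predictions move. The sinusoidal toy data and the two values of k below are my own picks for illustration.

```python
import numpy as np

def knn_predict(x_train, y_train, x_query, k):
    """Brute-force 1-D k-NN regression: average the targets of the
    k nearest training points for each query point."""
    dists = np.abs(x_query[:, None] - x_train[None, :])
    nearest = np.argsort(dists, axis=1)[:, :k]
    return y_train[nearest].mean(axis=1)

rng = np.random.default_rng(0)
x_query = np.linspace(0, 1, 50)

for k in (1, 15):
    preds = []
    for _ in range(200):  # resample the training set 200 times
        x_train = rng.uniform(0, 1, 100)
        y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.3, 100)
        preds.append(knn_predict(x_train, y_train, x_query, k))
    # The spread of predictions across refits is the variance; it
    # shrinks as k grows, while the bias grows.
    print(f"k={k:2d}  mean prediction variance: {np.var(preds, axis=0).mean():.3f}")
```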
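
The k-means post covers the algorithm in depth; as a quick reference, here is a minimal sketch of the core loop in NumPy. The random initialization and the convergence test are simple choices for illustration, and edge cases such as empty clusters are not handled.

```python
import numpy as np

def kmeans(X, k, n_iters=100, seed=0):
    """Minimal k-means: alternate between assigning each point to its
    nearest centroid and recomputing centroids as cluster means."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Assignment step: label each point with its closest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: move each centroid to the mean of its cluster
        # (assumes no cluster ends up empty).
        new_centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centroids, centroids):
            break  # assignments are stable, we have converged
        centroids = new_centroids
    return centroids, labels

# Example: three Gaussian blobs in 2-D.
rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(loc, 0.3, size=(100, 2))
                    for loc in ((0, 0), (3, 3), (0, 3))])
centroids, labels = kmeans(X, k=3)
```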
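
And for the convolution sandbox, the same operation can be reproduced offline in a few lines. The box signal and Gaussian kernel below are illustrative picks, not necessarily the functions offered in the sandbox; multiplying by the grid step approximates the continuous integral (f * g)(t) = ∫ f(τ) g(t − τ) dτ.

```python
import numpy as np

# Discretize a function and a kernel on a shared grid, then convolve.
x = np.linspace(-5, 5, 1001)
dx = x[1] - x[0]

signal = np.where(np.abs(x) < 1, 1.0, 0.0)  # box function
kernel = np.exp(-x**2 / 0.5)
kernel /= kernel.sum() * dx                 # normalize to unit area

# mode="same" keeps the output on the original grid; the dx factor
# turns the discrete sum into an approximation of the integral.
smoothed = np.convolve(signal, kernel, mode="same") * dx
```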