43 rules that Google engineers have learned while building some of the most sophisticated and widely used machine learning models in the world, written up by one of their own. Here are a few I particularly liked:
Don’t be afraid to launch a product without machine learning.
Don’t overthink which objective you choose to directly optimize.
This post is awesome. The team at Retention Sciences has been optimizing their recommendation algorithm for two years, and they walk through the process and its improvements month by month. It’s fascinating to watch the system grow from a 14-line SQL statement into a pipeline that outputs a tested, optimized algorithm for each of their 30 clients.
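A recommender really can start that simple: the earliest versions are often little more than a “most popular items” query. Here is a minimal Python sketch of that kind of popularity baseline (the data and function names are hypothetical, not Retention Sciences’ actual code):

```python
from collections import Counter

def most_popular(purchases, n=3):
    """Popularity baseline: recommend the n most-purchased items to everyone.

    purchases: list of (user_id, item_id) tuples, e.g. from an order log.
    """
    counts = Counter(item for _, item in purchases)
    return [item for item, _ in counts.most_common(n)]

# Hypothetical purchase log
purchases = [
    ("u1", "shoes"), ("u2", "shoes"), ("u3", "hat"),
    ("u1", "hat"), ("u2", "belt"), ("u3", "shoes"),
]
print(most_popular(purchases, n=2))  # ['shoes', 'hat']
```

The appeal of starting here is that it ships in an afternoon and gives you a baseline every later, smarter model has to beat.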
There are five basic styles of recommenders, and to understand any of them you need to understand what’s going on inside the box. This article walks through each of the five in enough detail to give you a very solid mental model.
This post pairs very well with the Retention Sciences post above, as you can actually see the team at RS move along the path from one recommender type to the next.
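To make one of those mental models concrete, here is a minimal sketch of item-item collaborative filtering — one common recommender style — using cosine similarity over a toy user-item rating matrix. All of the data and names below are invented for illustration, not drawn from the article:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length rating vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def similar_items(ratings, target):
    """Rank the other items by how similarly users rated them to `target`.

    ratings: dict mapping item -> list of ratings (one slot per user,
    0 meaning unrated).
    """
    scores = {
        item: cosine(ratings[target], vec)
        for item, vec in ratings.items()
        if item != target
    }
    return sorted(scores, key=scores.get, reverse=True)

# Toy matrix: rows are items, columns are four users' ratings
ratings = {
    "shoes": [5, 4, 0, 1],
    "hat":   [4, 5, 1, 0],
    "belt":  [0, 1, 5, 4],
}
print(similar_items(ratings, "shoes"))  # 'hat' ranks above 'belt'
```

The intuition: users who liked shoes also liked the hat, so the two items have similar rating columns and a high cosine score, while the belt was liked by a different group of users and scores low.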
The author, a PhD student focused on deep learning, points out some disappointing, though perhaps not surprising, flaws in the field’s research community:
…we’re under-appreciating the fact that we’re dealing with pure software. That sounds obvious, but it’s actually a big deal. Setting up tightly controlled experiments in fields like medicine or psychology is almost impossible and involves an extraordinary amount of work. With software it’s essentially free. It’s more unique than most of us realize. But we’re just not doing it.
The solution? Good old-fashioned engineering: writing and sharing high-quality, documented code.