This talk has become hugely influential: passed around inside both Google and Squarespace, it is now making its way through the broader software engineering world. Here’s the author, Tanya Reilly, on its genesis:
Being Glue originated as a comment on an internal Google+ post when I worked at Google. I’d used the expression “glue work” in passing, and someone asked what I meant by it. The reply became a standalone post and then an internal document (which as far as I know is still being circulated).
“Glue work” is a term that Tanya coined to describe all of the work that is critical on medium-to-large technical teams but often goes under-appreciated (and under-promoted): communication, planning, project management, documentation… Recognizing the value of this work is critical for the success of teams and for the career paths of those whose efforts it describes (often disproportionately women).
This effect is not isolated to software engineering. Data is now a technical field, and as we start to figure out the (still in flux!) career paths for our own roles, this will be an increasingly important topic.
What if you don’t need version B to be better than version A?
This is an amazing post that I’m surprised I’ve never seen written before. It goes fairly deep into the math, but you don’t need to follow it there—the most important part is building the intuition.
Most A/B tests are in the service of conversion optimization: making your website push users to achieve some quantifiable goal more effectively. We therefore want to set up a statistical test to conclude that the new version is superior to the old version. But there are many instances where what you want to do is prove that the new version is no worse than the old version. This is not covered in the standard “Implement Optimizely and go to town” playbook.
If you’ve ever been involved in A/B testing, you’ll likely have run across these scenarios. I have, often. This is the best post I’ve ever seen outlining how to effectively construct a test for them.
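One way to frame “no worse than” statistically is a one-sided non-inferiority test on the difference in conversion rates: instead of testing whether B beats A, you test whether B is worse than A by more than some margin you’re willing to tolerate. A minimal sketch using a normal approximation, with a hypothetical 2-point margin (the linked post goes much deeper into the math):

```python
import math

def non_inferior(conv_new, n_new, conv_old, n_old, margin, z_crit=1.6449):
    """One-sided non-inferiority test for two conversion rates.

    H0: p_new - p_old <= -margin  (new version is worse by at least `margin`)
    H1: p_new - p_old >  -margin  (new version is no worse, up to the margin)

    Returns True if we reject H0 at the level implied by z_crit
    (1.6449 corresponds to alpha = 0.05), i.e. B looks non-inferior to A.
    """
    p_new, p_old = conv_new / n_new, conv_old / n_old
    # Standard error of the difference in proportions (normal approximation).
    se = math.sqrt(p_new * (1 - p_new) / n_new + p_old * (1 - p_old) / n_old)
    z = (p_new - p_old + margin) / se
    return z > z_crit

# Same observed rates (9.5% vs 10.0%), but only the larger sample gives
# enough evidence that B is within 2 points of A:
print(non_inferior(95, 1000, 100, 1000, margin=0.02))      # small sample
print(non_inferior(950, 10000, 1000, 10000, margin=0.02))  # 10x the traffic
```

Note that this inverts the usual A/B setup: failing to reject here doesn’t mean B is worse, only that you don’t yet have the sample size to rule it out.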
This is fascinating. Adversarial examples—images that have been modified specifically to trick an algorithm but that a human cannot distinguish from the original—have always felt interesting to me. Their existence, and the ease with which they can be generated, always seemed to point to something worthwhile. Turns out, that instinct was right. From the paper:
We demonstrate that adversarial examples can be directly attributed to the presence of non-robust features: features derived from patterns in the data distribution that are highly predictive, yet brittle and incomprehensible to humans.
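To see just how easily such examples can be generated, here’s a toy sketch of the fast gradient sign method (FGSM, one standard attack—not this paper’s contribution) against a fixed logistic-regression classifier. The weights, input, and epsilon are all made up for illustration; in real image attacks the perturbation is tiny relative to the pixel range:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A fixed, already-"trained" toy classifier (weights are invented).
w = np.array([8.0, -12.0, 4.0])

def predict(x):
    return int(sigmoid(w @ x) > 0.5)

def fgsm(x, y, eps):
    """Fast gradient sign method: nudge each input coordinate by +/- eps
    in the direction that increases the loss for the true label y."""
    p = sigmoid(w @ x)
    grad_x = (p - y) * w  # d(cross-entropy)/dx for logistic regression
    return x + eps * np.sign(grad_x)

x = np.array([0.5, -0.5, 0.5])  # correctly classified as class 1
x_adv = fgsm(x, y=1, eps=0.6)   # bounded perturbation: |x_adv - x| <= eps
print(predict(x), predict(x_adv))  # the bounded nudge flips the prediction
```

The paper’s point is that this isn’t a quirk of the attack: the gradient is exploiting genuinely predictive but brittle, non-robust features the model learned from the data.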
AWS continues to have more of the market than Azure and GCP combined, but the others are growing fast. The cloud you choose has major implications for your available toolset, since fewer companies are open to going multi-cloud these days.