We can build a future we want to live in, or we can build a nightmare. The choice is up to us.
This article has really been making the rounds this week, so I wanted to make sure to link to it. It's a worthwhile read, and the topic matters a great deal to the industry. I have a bit of a contrarian take on it, so please bear with me while I say a bit more about this piece than usual.
Let me first say that the three authors (Hilary Mason, DJ Patil, and Mike Loukides) are all much smarter and more plugged-in than I am. The article is thoughtful, and it rightly points out the futures our increasing data sophistication makes possible. I agree with their sense of urgency.
Even so, I don't know that I agree with the prescriptions presented in the article. It is easy to recommend that training programs teach data ethics, and that companies adopt guiding principles for data ethics and bake them into their corporate cultures. The issue is not that these recommendations are wrong; it is that they are fundamentally insufficient, enough so that they don't meaningfully address the problem.
The issue here is incentives. Data tech today is asymmetric and opaque: a single party owns a data set and applies it in ways that are largely unknown to outsiders. This asymmetry, paired with data's economies of scale, has created one of the most valuable sources of competitive differentiation in the history of capitalism. The incentives for a data owner to use their data with dubious ethical standards, or to fail to consider ethics altogether, are tremendous.
The power of John D. Rockefeller wasn’t counterbalanced by a growing culture of ethics. It was eventually restrained by powerful anti-trust legislation and the breakup of Standard Oil. It’s not clear (to me) exactly what is required to constrain the actions of data owners, but I’m fairly sure it will be rather more extreme than this post suggests.