The author has built an impressive set of benchmarks comparing Theano, TensorFlow, and CNTK, running on three different GPUs. His summary:
The accuracies of Theano, TensorFlow and CNTK backends are similar across all benchmark tests, while speeds vary a lot.
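The word "backends" suggests these benchmarks were run through Keras, which can drive Theano, TensorFlow, or CNTK from the same model code. If so, swapping backends for an apples-to-apples comparison is a one-line configuration change. A minimal sketch of `~/.keras/keras.json`, assuming a Keras setup (the other fields shown are Keras defaults):

```json
{
    "backend": "tensorflow",
    "image_data_format": "channels_last",
    "floatx": "float32",
    "epsilon": 1e-07
}
```

Valid values for `backend` are `"tensorflow"`, `"theano"`, and `"cntk"`; the `KERAS_BACKEND` environment variable overrides the file, which is convenient for scripted benchmark runs.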
Relevant if you’re making production decisions today, but perhaps more so as a way to follow the evolution of the space. In other high-level languages, the broad trend has been to sacrifice execution efficiency for programmer efficiency. Given the intense computational demands of deep learning, it’s not clear that things will play out the same way here.