Google’s ‘TPU’ chip puts OpenAI on alert and shakes Nvidia investors

The origins of Google’s TPU date back to an internal presentation in 2013 by Jeff Dean, Google’s long-serving chief scientist, following a breakthrough in using deep neural networks to improve its speech recognition systems. 

“The first slide was: Good news! Machine learning finally works,” said Jonathan Ross, a Google hardware engineer at the time. “Slide number two said: Bad news, we can’t afford it.”

Dean calculated that if Google’s hundreds of millions of consumers used voice search for just three minutes a day, the company would have to double its data-centre footprint just to serve that function — at a cost of tens of billions of dollars. 

→ Financial Times