Google’s ‘TPU’ chip puts OpenAI on alert and shakes Nvidia investors

The origins of Google’s TPU date back to an internal presentation in 2013 by Jeff Dean, Google’s long-serving chief scientist, following a breakthrough in using deep neural networks to improve its speech recognition systems. 

“The first slide was: Good news! Machine learning finally works,” said Jonathan Ross, a Google hardware engineer at the time. “Slide number two said: ‘Bad news, we can’t afford it.’”

Dean calculated that if Google’s hundreds of millions of consumers used voice search for just three minutes a day, the company would have to double its data-centre footprint just to serve that function — at a cost of tens of billions of dollars. 
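Dean's calculation can be sketched as a back-of-envelope estimate. Every number below is an illustrative assumption, not a figure from the article: a user base of 500 million stands in for "hundreds of millions," and one CPU-core-minute per minute of audio stands in for the (unstated) cost of running speech recognition at the time.

```python
# Hedged back-of-envelope in the spirit of Dean's estimate.
# All inputs are assumptions for illustration, not reported figures.
users = 500e6                   # "hundreds of millions" of users (assumed midpoint)
minutes_per_user_per_day = 3    # three minutes of voice search a day (from the article)
cores_per_audio_minute = 1.0    # assume 1 CPU-core-minute per audio-minute of inference

audio_minutes_per_day = users * minutes_per_user_per_day
# Spread evenly over 24 hours, how many cores must be busy at all times?
avg_concurrent_cores = audio_minutes_per_day * cores_per_audio_minute / (24 * 60)

print(f"{avg_concurrent_cores:,.0f} CPU cores busy around the clock")
# On these assumptions, on the order of a million cores for voice search alone
```

Even with generous assumptions, the steady-state demand lands in the range of a million always-busy cores, which makes the "double the data-centre footprint" conclusion easy to believe.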

→ Financial Times

Google’s AI Image Generator: No One’s Ready For This

Before / After Google’s Magic Editor

We briefly lived in an era in which the photograph was a shortcut to reality, to knowing things, to having a smoking gun. It was an extraordinarily useful tool for navigating the world around us. We are now leaping headfirst into a future in which reality is simply less knowable. The lost Library of Alexandria could have fit onto the microSD card in my Nintendo Switch, and yet the cutting edge of technology is a handheld telephone that spews lies as a fun little bonus feature. 

We are fucked.

→ The Verge

In Two Moves, AlphaGo and Lee Sedol Redefined the Future

Poignant documentary about Lee Sedol versus the machine.

The symmetry of these two moves is more beautiful than anything else. One-in-ten-thousand and one-in-ten-thousand. This is what we should all take away from these astounding seven days. Hassabis and Silver and their fellow researchers have built a machine capable of something super-human. But at the same time, it’s flawed. It can’t do everything we humans can do. In fact, it can’t even come close. It can’t carry on a conversation. It can’t play charades. It can’t pass an eighth grade science test. It can’t account for God’s Touch.

But think about what happens when you put these two things together. Human and machine. Fan Hui will tell you that after five months of playing match after match with AlphaGo, he sees the game completely differently. His world ranking has skyrocketed. And apparently, Lee Sedol feels the same way. Hassabis says that he and the Korean met after Game Four, and that Lee Sedol echoed the words of Fan Hui. Just these few matches with AlphaGo, the Korean told Hassabis, have opened his eyes.

This isn’t human versus machine. It’s human and machine. Move 37 was beyond what any of us could fathom. But then came Move 78. And we have to ask: If Lee Sedol hadn’t played those first three games against AlphaGo, would he have found God’s Touch? The machine that defeated him had also helped him find the way.

→ Wired

One Nation, Tracked

One search turned up more than a dozen people visiting the Playboy Mansion, some overnight. Without much effort we spotted visitors to the estates of Johnny Depp, Tiger Woods and Arnold Schwarzenegger, connecting the devices’ owners to the residences indefinitely.

If you lived in one of the cities the dataset covers and use apps that share your location — anything from weather apps to local news apps to coupon savers — you could be in there, too.

If you could see the full trove, you might never use your phone the same way again.

→ The New York Times

Why CAPTCHAs Have Gotten So Difficult

Because CAPTCHA is such an elegant tool for training AI, any given test could only ever be temporary, something its inventors acknowledged at the outset. With all those researchers, scammers, and ordinary humans solving billions of puzzles just at the threshold of what AI can do, at some point the machines were going to pass us by. In 2014, Google pitted one of its machine learning algorithms against humans in solving the most distorted text CAPTCHAs: the computer got the test right 99.8 percent of the time, while the humans got a mere 33 percent.

→ The Verge