A long-form Backchannel post by Steven Levy gives a fascinating insight into Google’s vision of the future of machine learning. While it’s currently a specialist field, Google believes that one day it will be used by all software engineers, regardless of their field, and that it will ‘change humanity.’
Google is starting small. It invites just 18 software engineers a year to join its Machine Learning Ninja Program, where they work alongside expert mentors for six months before going back to apply the approach to their own work. But Google’s machine-learning leader Jeff Dean estimates that around 10% of its 25,000 developers are proficient in the field, and he’d like that number to be 100%.
What’s notable is that all involved, from those in the Ninja program to the company’s key experts in the field, see machine-learning as something transformative …
New Ninja participant Carson Holgate has this to say:
For many years, machine learning was considered a specialty, limited to an elite few. That era is over, as recent results indicate that machine learning, powered by “neural nets” that emulate the way a biological brain operates, is the true path towards imbuing computers with the powers of humans, and in some cases, superhumans.
Google CEO Sundar Pichai said in a recent earnings call that it was leading to a complete rethink about how the company was ‘doing everything,’ and head of search John Giannandrea said that it would change humanity.
Machine learning systems are going to be transformative, in everything from medical diagnoses to driving our cars. While machine learning won’t replace humans, it will change humanity.
Giannandrea said that Google Photos is a good illustration of the power of machine learning.

“It’s actually understanding what’s in the picture,” he explains. Through the learning process, the computer ‘knows’ what a border collie looks like, and it will find pictures of it when it’s a puppy, when it’s old, when it’s long-haired, and when it’s been shorn. A person could do that, of course, but no human could sort through a million examples and simultaneously identify ten thousand dog breeds. A machine learning system can […] “You’re seeing what some people call superhuman performance in these learned systems.”
Dean comments that if Google were creating its infrastructure from scratch today, much of it would be learned, not coded.
Google Brain co-founder Greg Corrado said that while many use the terms AI and machine learning interchangeably, they are not the same thing.
Traditional AI methods of language understanding depended on embedding rules of language into a system, but in this project, as with all modern machine learning, the system was fed enough data to learn on its own, just as a child would. “I didn’t learn to talk from a linguist, I learned to talk from hearing other people talk,” says Corrado.
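Corrado’s contrast between hand-coded rules and learning from examples can be illustrated with a toy sketch (this is not Google’s code, just a minimal, self-contained illustration in plain Python): a single artificial neuron that learns the logical OR function purely from labelled examples, with no rule about OR ever written into the program.

```python
# A single artificial "neuron" that learns OR from examples alone.
# No rule like "output 1 if either input is 1" is ever written down;
# the behaviour emerges from adjusting weights against the data.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Training data: the four possible inputs and the desired OR output.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w1, w2, bias = 0.0, 0.0, 0.0   # start knowing nothing
lr = 1.0                        # learning rate

for _ in range(1000):           # repeatedly learn from the examples
    for (x1, x2), target in examples:
        out = sigmoid(w1 * x1 + w2 * x2 + bias)
        err = target - out
        # Nudge each weight in the direction that reduces the error.
        w1 += lr * err * x1
        w2 += lr * err * x2
        bias += lr * err

def predict(x1, x2):
    return round(sigmoid(w1 * x1 + w2 * x2 + bias))

print([predict(a, b) for (a, b), _ in examples])  # learned OR: [0, 1, 1, 1]
```

The same structure, scaled up to millions of weights and examples, is what lets a system like Google Photos ‘know’ what a border collie looks like without anyone writing a rule for it.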
Corrado says that the approach requires a change in mindset for coders, from controlling everything directly to working with data, and it even demands new hardware. The company created its own chip, the Tensor Processing Unit, for its machine-learning library TensorFlow.

This is a microprocessor optimized for the specific demands of running machine learning programs, much as Graphics Processing Units are designed with the single purpose of speeding up the calculations that put pixels on a display screen.
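Why a specialized chip helps: the core of a neural network layer boils down to a large matrix multiply followed by a simple nonlinearity, exactly the kind of dense, repetitive arithmetic that GPUs and TPUs accelerate. A minimal sketch (plain NumPy, with illustrative shapes chosen for the example):

```python
import numpy as np

# One fully connected neural-network layer:
#   outputs = activation(inputs @ weights + bias)
# Nearly all of the work is the matrix multiply, which is the
# operation that GPU- and TPU-style hardware is built to run fast.
rng = np.random.default_rng(0)

batch = rng.standard_normal((32, 256))      # 32 inputs, 256 features each
weights = rng.standard_normal((256, 128))   # a layer with 128 units
bias = np.zeros(128)

layer_out = np.maximum(batch @ weights + bias, 0.0)  # ReLU activation
print(layer_out.shape)  # (32, 128)
```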
I highly recommend reading the entire piece.