At the company’s annual developers conference today, CEO Sundar Pichai announced a new computer processor designed to perform the kind of machine learning that has taken the industry by storm in recent years.
The announcement reflects how rapidly artificial intelligence is transforming Google itself, and it is the surest sign yet that the company plans to lead the development of every relevant aspect of software and hardware.
Perhaps most importantly, for those working in machine learning at least, the new processor not only executes at blistering speed, it can also be trained incredibly efficiently. Called the Cloud Tensor Processing Unit, the chip is named after Google’s open-source TensorFlow machine-learning framework.
Training is a fundamental part of machine learning. To create an algorithm capable of recognizing hot dogs in images, for example, you would feed in thousands of example hot-dog images—along with not-hot-dog examples—until it learns to recognize the difference. But the calculations required to train a large model are so complex that training might take days or weeks.
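To make the idea concrete, here is a minimal sketch of that feed-examples-until-it-learns loop. This is purely illustrative and not Google’s code: it trains a tiny logistic-regression classifier by gradient descent on synthetic stand-in features, where real image models would use deep networks and a framework like TensorFlow on far larger data.

```python
import numpy as np

# Illustrative only: a toy "hot dog vs. not hot dog" classifier.
# The "images" are synthetic 5-number feature vectors, not real photos.
rng = np.random.default_rng(0)

n, d = 200, 5
X_hotdog = rng.normal(loc=1.0, size=(n, d))    # pretend hot-dog examples
X_other = rng.normal(loc=-1.0, size=(n, d))    # pretend not-hot-dog examples
X = np.vstack([X_hotdog, X_other])
y = np.concatenate([np.ones(n), np.zeros(n)])  # labels: 1 = hot dog

w = np.zeros(d)  # model parameters, adjusted a little on every pass
b = 0.0
lr = 0.1
for _ in range(500):  # "training": repeatedly show the labeled examples
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))     # predicted hot-dog probability
    grad_w = X.T @ (p - y) / len(y)            # gradient of the loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w                           # nudge parameters to reduce error
    b -= lr * grad_b

preds = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
accuracy = np.mean(preds == y)
print(f"training accuracy: {accuracy:.2f}")
```

Even this toy loop makes thousands of multiply-add passes over the data; scale the model and dataset up by many orders of magnitude and the days-to-weeks training times mentioned above follow.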
Pichai also announced the creation of machine-learning supercomputers, or Cloud TPU pods, based on clusters of Cloud TPUs wired together with high-speed data connections. And he said Google was creating the TensorFlow Research Cloud, consisting of hundreds of thousands of TPUs accessible over the Internet.
“We are building what we think of as AI-first data centers,” Pichai said during his presentation. “Cloud TPUs are optimized for both training and inference. This lays the foundation for significant progress [in AI].”
Google will make 1,000 Cloud TPU systems available to artificial intelligence researchers willing to openly share details of their work.
Pichai also announced a number of AI research initiatives during his speech. These include an effort to develop algorithms capable of learning how to do the time-consuming work involved with fine-tuning other machine-learning algorithms. And he said Google was developing AI tools for medical image analysis, genomic analysis, and molecule discovery.
A teraflop refers to a trillion floating-point operations per second, a measure of computer performance obtained by crunching through mathematical calculations. By contrast, the iPhone 6 is capable of about 100 gigaflops; a gigaflop is one billion floating-point operations per second.
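The unit arithmetic is worth spelling out, since "tera" and "giga" are easy to mix up. The short snippet below just encodes the definitions from the paragraph above and the article's ~100-gigaflop iPhone 6 figure:

```python
TERAFLOP = 1e12  # one trillion floating-point operations per second
GIGAFLOP = 1e9   # one billion floating-point operations per second

iphone6_flops = 100 * GIGAFLOP  # ~100 gigaflops, per the article

# How many iPhone-6-equivalents fit in a single teraflop of throughput?
ratio = TERAFLOP / iphone6_flops
print(ratio)  # -> 10.0: one teraflop is ten times the iPhone 6's peak
```

So each teraflop of a machine-learning processor's throughput corresponds to roughly ten iPhone 6 chips running flat out.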
Google says it will still be possible for researchers to design algorithms using other hardware before porting them over to the TensorFlow Research Cloud. “This is what democratizing machine learning is all about—empowering developers by protecting freedom of design,” Li added.
A growing number of researchers have adopted TensorFlow since Google released the software in 2015. Google now boasts that it is the most widely used deep-learning framework in the world.