
Google’s AI codes better than its Engineers

by Harikrishna Mekala

“Handcrafted by machine learning scientists, and actually only a few thousand experts around the world can do this,” said Google CEO Sundar Pichai last week. Pichai briefly touched on AutoML during a launch event for the new Pixel 2 smartphones and other devices. “We want to enable hundreds of thousands of developers to be able to do it.”

To get a sense of how ‘smart’ AutoML is, note that Google openly talks about it being more effective than the team of 1,300 people tasked with building it. Granted, not everyone listed on Google’s research page specializes in AI, but the group does include some of the smartest software engineers in the corporation. Alphabet, Google’s parent company, employs over 27,000 people in research and development.

Some of the program’s achievements have made headlines. In addition to mastering its own code, AutoML broke a record in classifying images by content, scoring an accuracy of 82 percent. AutoML also beat a human-built system at identifying the locations of multiple objects within an image. Those capabilities could be integral to the future of virtual reality and augmented reality.

However, not much else is really known about AutoML. Unlike Alphabet’s DeepMind AI, AutoML doesn’t have a lot of data available about it beyond brief descriptions from Pichai and other researchers. Google’s research team did dedicate a blog post to it earlier this year, explaining the intricacies of the AutoML system:

“In our approach (which we call ‘AutoML’), a controller neural net can propose a ‘child’ model architecture, which can then be trained and evaluated for quality on a particular task. That feedback is then used to inform the controller how to improve its proposals for the next round,” the researchers wrote. “We repeat this process thousands of times — generating new architectures, testing them, and giving that feedback to the controller to learn from. Eventually, the controller learns to assign high probability to areas of architecture space that achieve better accuracy on a held-out validation dataset, and low probability to areas of architecture space that score poorly.”
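The loop the researchers describe, a controller proposing child architectures, scoring each one, and reinforcing the proposals that lead to better accuracy, can be sketched in miniature. The snippet below is an illustration only: the tiny search space, the preference-table controller, and the proxy accuracy function are all assumptions made for demonstration, not Google’s actual AutoML implementation (which uses a recurrent controller network and trains real child models).

```python
import random

# Hypothetical two-knob search space; real architecture spaces are far larger.
SEARCH_SPACE = {
    "layers": [2, 4, 8],
    "width": [16, 32, 64],
}

def evaluate_child(arch):
    """Stand-in for training a child model and measuring held-out accuracy.
    A fixed toy function that rewards deeper, wider architectures."""
    return 0.5 + 0.03 * arch["layers"] + 0.002 * arch["width"]

class Controller:
    """Keeps a preference weight per choice, samples proportionally, and
    reinforces the choices of above-baseline children (REINFORCE-style)."""
    def __init__(self, space):
        self.prefs = {k: {v: 1.0 for v in vals} for k, vals in space.items()}
        self.baseline = 0.0  # running estimate of average reward

    def propose(self):
        arch = {}
        for knob, weights in self.prefs.items():
            choices, w = zip(*weights.items())
            arch[knob] = random.choices(choices, weights=w)[0]
        return arch

    def update(self, arch, reward):
        # Track the average reward, then push sampled choices up or down
        # depending on whether this child beat the baseline.
        self.baseline = 0.9 * self.baseline + 0.1 * reward
        advantage = reward - self.baseline
        for knob, choice in arch.items():
            self.prefs[knob][choice] = max(1e-3,
                                           self.prefs[knob][choice] + advantage)

random.seed(0)
ctrl = Controller(SEARCH_SPACE)
for _ in range(2000):  # "we repeat this process thousands of times"
    child = ctrl.propose()
    ctrl.update(child, evaluate_child(child))

best = {k: max(w, key=w.get) for k, w in ctrl.prefs.items()}
print(best)  # the controller drifts toward the higher-accuracy choices
```

The key idea matches the quoted description: the controller never sees the inside of a child model, only a scalar quality signal, and that feedback alone is enough to steer future proposals.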

Feel free to comment on this article.


Latest Hacking News