
Google Will Not Allow Use Of Their AI In Developing Weapons

by Harikrishna Mekala

Google announced on Thursday that it will not allow its AI software to be used in the development of weapons, setting new standards for its business decisions in this contentious area. Management is trying to defuse tensions between the company's employees and its government work after significant protest from staff. The outcry was prompted by a US military project that used Google's technology to identify objects in drone video footage; the company has since said it will instead pursue government contracts in fields such as cybersecurity and military recruitment.

“We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas,” said Google CEO Sundar Pichai.

Increases in computing performance and new processor architectures have made practical AI a reality in the last couple of years. Google is one of the biggest sellers of AI-powered tools, which help computers review large data sets and learn from them faster than humans can.

An anonymous Google employee said that, had the new principles been in place at the time, the drone project would not have been taken on. Google plans to honour its commitment to the project until next March, though a petition signed by more than 4,600 employees calls for the work to stop sooner. Microsoft and other companies published general AI guidelines before Google did, but Google's have received more attention because of its current involvement in the drone project.

“The clear statement that they won’t facilitate violence or totalitarian surveillance is meaningful,” University of Washington technology law professor Ryan Calo tweeted on Thursday.

The company also recommended that, given concerns over the limits of current security systems, developers avoid launching AI programs capable of causing ‘significant damage’, presumably until those safeguards catch up with rapidly advancing AI technology.

Feel free to comment on this article.
