Google promises not to apply AI to weapons

A social humanoid robot named Sophia is seen during Innovation Week 2018 in Prague, Czech Republic, on May 14, 2018. Photo/Vit Simanek (CTK via AP Images)

By Anne Huang

Artificial Intelligence has been a very controversial topic, with many people praising the opportunities it can open for the human race while others warn of the dangers and risks it poses.

Contrary to popular belief, the idea of Artificial Intelligence, and even working examples of it, has been around for at least 67 years. Some of the earliest working AI programs, which played checkers and chess, ran on the Ferranti Mark 1 machine at the University of Manchester in 1951.

But as technology evolves faster than ever, the Artificial Intelligence depicted in the movies becomes more plausible each and every day. One of the major contributors in this area is, of course, Google.

Google has been one of the big-name companies sparing no resources in the quest to unlock the secret of a complete and perfected AI, even creating an entire division (Google AI) to collect and house all of its work and achievements in this area.

At a recent conference hosted in Mountain View, California, CEO Sundar Pichai addressed fears about AI by saying:

“We recognize that such powerful technology raises equally powerful questions about its use. How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right. So today, we’re announcing seven principles to guide our work going forward. These are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions.”

FILE- In this May 8, 2018, file photo, Google CEO Sundar Pichai speaks at the Google I/O conference in Mountain View, Calif. Google pledges that it will not use artificial intelligence in applications related to weapons or surveillance, part of a new set of principles designed to govern how it uses AI. Those principles, released by Pichai, commit Google to building AI applications that are “socially beneficial,” that avoid creating or reinforcing bias and that are accountable to people. (AP Photo/Jeff Chiu, File)

Here is a summary of the seven principles Pichai mentioned:

  • Be socially beneficial.
  • Avoid creating or reinforcing unfair bias.
  • Be built and tested for safety.
  • Be accountable to people.
  • Incorporate privacy design principles.
  • Uphold high standards of scientific excellence.
  • Be made available for uses that accord with these principles.

Be socially beneficial

Google wishes to spread the technology as broadly as possible, so that the advances and advantages of AI can impact a wider range of fields, including “healthcare, security, energy, transportation, manufacturing, and entertainment.” Google also believes in using AI as a tool to make higher-quality and more accurate information available to share. At the same time, the company pledges to continue respecting “cultural, social, and legal norms” in each of the areas where it operates.

Avoid creating or reinforcing unfair bias

This principle connects to the previous point: because AI systems learn from data produced by humans, they can pick up, and even amplify, the biases human minds carry. Google pledges to guard against this so that its products do not negatively impact anyone through any sort of accidental bias.

Be built and tested for safety

Google will “continue to develop and apply strong safety and security practices to avoid unintended results that create risks of harm.” The company plans to design its AI to be appropriately cautious, making the technology a better lookout for threats and possible harm to users.


Be accountable to people

Basically, this principle is Google guaranteeing that its AI will not only be controlled by people but will also remain subject to “appropriate human direction and control.”

Incorporate privacy design principles

Google will incorporate its current privacy principles into its AI. This is meant to ensure that its AI respects the boundaries of user privacy and that people can use it without fear of any sort of information leakage.

Uphold high standards of scientific excellence

Under this principle, Google says it will work with “a range of stakeholders to promote thoughtful leadership in this area.” The company ultimately wants AI to be developed in a positive environment rooted in scientific rigor and integrity.

Google also promised to “responsibly” share its AI knowledge and capabilities, ensuring the technology is used to spread knowledge, support education, and serve other useful purposes.

Be made available for uses that accord with these principles

To limit “harmful or abusive applications,” Google will evaluate likely uses of its AI technologies in light of the following factors:

  • Primary purpose and likely use of “a technology and application”
  • Nature and uniqueness: whether the technology is unique or more generally available
  • Scale: whether the use of the technology will have a significant impact
  • Nature of Google’s involvement: whether the company is providing “general-purpose tools, integrating tools for customers, or developing custom solutions”

Pichai concluded that although Google respects the wide variety of opinions on this matter, these principles are ultimately how the company will handle AI. As the technology grows, Google hopes it will “continue to share what we’ve learned to improve AI technologies and practices.”