Experts Question if Artificial Intelligence Will Protect Mankind’s Best Interest


Are Robots Equipped With the Ability to Make Ethical Decisions?

Utilizing robots and intelligent reality software in the workplace has altered and enhanced the world as we know it, but to what end? The questions that arise when discussing robotics and intelligent reality are very similar, usually surrounding the ethics and decision-making capabilities of the technology. Robots are being used to oversee and optimize processes, but do they have the ability to decide what is best for mankind before acting?

Over the past five years, the merging of artificial intelligence (AI) and the Internet of Things (IoT) has grown tremendously and is expected to keep rising, eventually transforming the workforce as we know it. The introduction of thinking machines to the labor force has already had a major impact, especially in the manufacturing and healthcare industries.

Artificial intelligence has improved productivity, shortened supply chains, and increased automation in warehouses, contributing to the popularity of smart factories. The automobile industry has begun producing smart cars, while the healthcare industry is using robots to test diagnoses and revolutionize treatments, empowering innovation and promoting growth.

Industry experts predict 50% of all manufacturing companies will be using AI in some form by the end of 2021

Humans created machines to think, observe, and mimic human behavior, allowing the machines to learn from algorithms and respond to situations as if it were second nature to them. But are the machines morally equipped not to harm humans, or able to understand their limits as machines? Robots have modernized society and have the power to enhance the world as we know it, but are they capable of correcting and eliminating harmful behavior or adjusting to societal standards?

Technology companies are in a race to the top, competing for the title of “most innovative,” while some businesses have encountered backlash over the entities that have access to their software. Microsoft Corporation faced widespread criticism after touting its work with Immigration and Customs Enforcement (ICE) to assist the agency in “processing data on edge devices or utilizing deep learning capabilities to accelerate facial recognition and identification.”


Microsoft’s CEO Satya Nadella addressed the concerns in an internal memo to all employees, assuring them that the company isn’t working with the U.S. government on any projects related to separating children from their families, but that it is working with ICE to support the agency’s legacy mail, calendar, messaging, and document management workloads.

Microsoft president Brad Smith has called on governments worldwide to develop and adopt laws to regulate facial-recognition technology. Concerns continue to rise globally as countries like China rapidly introduce facial recognition to monitor public spaces, with analysts estimating that another 100 million surveillance cameras will be added to the 200 million already in place.
Smith suggests pushing for a ban on uses of facial recognition that harm political freedom or enable discrimination.

He also laid out principles that will guide Microsoft in how it develops and deploys facial recognition technology. The principles are:

  1. Fairness. We will work to develop and deploy facial recognition technology in a manner that strives to treat all people fairly.
  2. Transparency. We will document and clearly communicate the capabilities and limitations of facial recognition technology.
  3. Accountability. We will encourage and help our customers to deploy facial recognition technology in a manner that ensures an appropriate level of human control for uses that may affect people in considerable ways.
  4. Non-discrimination. We will prohibit in our terms of service the use of facial recognition technology to engage in unlawful discrimination.
  5. Notice and consent. We will encourage private sector customers to provide notice and secure consent for the deployment of facial recognition technology.
  6. Lawful surveillance. We will advocate for safeguards for people’s democratic freedoms in law enforcement surveillance scenarios and will not deploy facial recognition technology in scenarios that we believe will put these freedoms at risk.

Limiting a robot’s competencies may hinder the purpose of innovation; however, creating a predictable, transparent thinking machine could be a solution to some of the questions surrounding ethics. If the people in control of the robot understand its limits, this may reduce or even eliminate concerns about foul play or harm to humans.

The National MBE Manufacturers Summit 2019

Join the Georgia MBDA at the 4th Annual National MBE Manufacturers Summit 2019 to experience trending technology and innovation firsthand! Conference participants will be able to fully engage with technological innovations ranging from automation, drones, and robotics to augmented and virtual reality. This is your chance to be among the first to interact with cutting-edge technologies and expose your team to fresh ideas.