Saturday, April 13, 2019

Project Maven: Ethics concerns from Google employees

Project Maven: Are AI drones ethical or not?

Link: https://gizmodo.com/google-plans-not-to-renew-its-contract-for-project-mave-1826488620

Summary:

In 2017, Google was awarded a defense contract for Project Maven, a program that applied machine learning and engineering to the US drone system to help distinguish friend from foe. The ethical concern some Google employees had with this project is that it could scan and memorize individuals' faces from a distance using footage from the internet. With the power and information of the internet at a global scale, being able to determine whether an individual is friend or foe could be valuable for military applications. However, the thought that people flagged as foes could then be killed by military personnel troubled these employees. For this reason, more than ten Google employees left the company, with others threatening to do the same. In response, Google said it would not further pursue Project Maven when the contract ended in March 2019.


Jason’s POV:
The question of ethics is always difficult. Having been in the shoes of servicemembers in combat, I can say that being able to ID friendlies and foes in the seconds before engaging means a lot; not being able to make that call quickly can mean the difference between life and death. I believe that giving warfighters accurate information is ethical. If it means civilian and friendly casualties drop, that would be the greatest thing for any military group. I've always thought: isn't saving innocent lives ethical? War is a constant issue that plagues the world today, and it isn't going to disappear anytime soon, as there is constant conflict around the globe. But if you could reduce the chance of a civilian's death, would you not try? Which leads me to another point: if employees felt the need to act ethically, why not use Google's position to push for policies ensuring the technology doesn't spill over into anything else, if that is the worry? Rather than fixing the issue, they ran away from it.

Alice’s POV:
We are in a phase of technology where it's hard to tell what is ethical and what is not. Is incorporating an artificial intelligence program into a drone that surveils an area for danger ethical or unethical? I think friend-or-foe identification would be a great capability for the military to have in protecting citizens, but I can understand the ethical concern: Google operates on multiple continents and employs people from all over the world, so if the technology is used by the US military, I can see why some employees are worried, especially when it means the Google AI team is, in a way, a part of "killing" foes. That said, I feel Google should have properly weighed the pros and cons and really thought it through before signing the contract to be a part of Project Maven, because pulling out made all the hard work, funding, and everything else, in a way, a waste of time.

Now the question for you: what are your thoughts on Project Maven, and how would you have dealt with the ethical concerns that arise as we try to come up with better ways to protect ourselves from outside threats?
Other links on Project Maven for reference:
