The newest job at Google: checking its AI to make sure it’s ethical.
The company has added a “formal review structure” consisting of three groups that make big-picture and technical decisions about its use of AI, according to a blog post the company published Tuesday.
Google instituted a new ethics policy earlier this year in response to the worker movement opposing its controversial Project Maven contract with the Defense Department. This new framework is where that policy actually gets implemented, so that it’s no longer left to individual programmers or product groups to decide on their own whether something is designed ethically.
Here’s what each team will be responsible for, according to the blog post:
- A responsible innovation team that handles day-to-day operations and initial assessments. This group includes user researchers, social scientists, ethicists, human rights specialists, policy and privacy advisors, and legal experts on both a full- and part-time basis, bringing a diversity of perspectives and disciplines.
- A group of senior experts from a range of disciplines across Alphabet who provide technological, functional, and application expertise.
- A council of senior executives to handle the most complex and difficult issues, including decisions that affect multiple products and technologies.
The Google blog post says this framework has already conducted more than 100 assessments of deals and products, such as the company’s temporary hold on releasing facial recognition technology. Going forward, Google will also add an external advisory group of interdisciplinary experts, an approach that critics have heralded as a way for companies and governments to avoid creating unethical AI.