The Field of AI Research Is About to Get Way Bigger Than Code

When it comes to developing artificial intelligence, the largest technology companies in the world are all-in. Google and Microsoft say they’re “AI-first,” and businesses like Facebook and Amazon wouldn’t be possible without the scalable personalization that AI allows.

But if you look for research on how artificial intelligence affects society—like how algorithms used in criminal justice can discriminate against people of color, or whether data used to train AI contains implicit bias against women and minorities—there’s almost no academic or corporate research to be found.

Kate Crawford, principal researcher at Microsoft Research, and Meredith Whittaker, founder of Open Research at Google, want to change that. Today they announced the AI Now Institute, a research organization that will explore how AI affects society at large. AI Now will be cross-disciplinary, bridging the gap between the data scientists, lawyers, sociologists, and economists studying the implementation of artificial intelligence.

“The amount of money and industrial energy that has been put into accelerating AI code has meant that there hasn’t been as much energy put into thinking about social, economic, ethical frameworks for these systems,” Crawford tells Quartz. “We think there’s a very urgent need for this to happen faster.”

AI Now released a report last month outlining many of the issues its researchers will explore more fully. Initially, the founders plan to hire fewer than 100 researchers.

The organization’s advisory board includes California Supreme Court justice Mariano-Florentino Cuéllar, NAACP Legal Defense Fund president Sherrilyn Ifill, and former White House CTO Nicole Wong. Other board members include Cynthia Dwork, co-inventor of differential privacy (a technique that has become a standard for protecting individuals’ data in large databases), and Mustafa Suleyman, cofounder of DeepMind.
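
The core idea behind differential privacy: add carefully calibrated random noise to aggregate query results, so that what an analyst learns is nearly identical whether or not any single person’s data is included. Here is a minimal sketch in Python using the standard Laplace mechanism for a counting query; the records, predicate, and epsilon value are hypothetical, chosen only for illustration.

```python
import numpy as np

def private_count(records, predicate, epsilon):
    """Return a differentially private count of records matching predicate.

    Adding or removing one person's record changes a count by at most 1,
    so Laplace noise with scale 1/epsilon gives epsilon-differential privacy.
    """
    true_count = sum(1 for r in records if predicate(r))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical data: individual incomes in a sensitive database.
records = [{"income": 42_000}, {"income": 67_000}, {"income": 58_000}]

# The analyst learns roughly how many incomes exceed 50,000, but cannot
# tell whether any specific individual's record was included.
print(private_count(records, lambda r: r["income"] > 50_000, epsilon=0.5))
```

Smaller epsilon values add more noise and give stronger privacy guarantees, at the cost of less accurate answers.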

The institute will be based at New York University, home to many of the academics who studied the artificial neural networks responsible for today’s AI boom. AI Now has partnered with eight NYU schools, including the School of Law and the Steinhardt School of Culture, Education, and Human Development.

AI Now will focus on four major themes:

  1. Bias and inclusion (how bad data can disadvantage people)
  2. Labor and automation (who doesn’t get hired when an AI does the choosing)
  3. Rights and liberties (how government use of AI changes the way it interacts with citizens)
  4. Safety and critical infrastructure (how to ensure that decisions in areas like healthcare are made safely and without bias)

Crawford and Whittaker have worked on such issues for years within Google and Microsoft. One barrier to addressing AI’s societal problems is the lack of a shared language between the people who build AI systems and those who study their implications and effects.

“Part of what we’re doing is talking to the people who build the systems about the real practices and processes around this. Where are the assumptions?” says Whittaker. “And what don’t you know, that you would want to know, if you were going to do this in a way that you felt was responsible?”