Balancing benefits and risks will continue to be a challenge for federal agencies.
Russian President Vladimir Putin stated in 2017 that the country that establishes superiority in artificial intelligence will become the ruler of the world. More recently, Tesla CEO Elon Musk and renowned psychologist Steven Pinker engaged in some rather heated debates over AI use, risks and regulation, with Musk warning that while the benefits of the new technology are revolutionary, the exponential growth in hardware capability and software talent also makes AI a potential source of incredible risk.
Given these disparate opinions, it’s important that federal technologists collaborate with industry leaders to determine the best way to introduce AI to their agencies, mitigating its risks while capitalizing on its intended uses and benefits. It is also critical that industry re-evaluate the speed and manner with which it brings any new technology to market.
Why is AI risky?
While there are many ways AI is being used positively to improve or simplify life, much of AI is open source, so the code is publicly available to anyone: ambitious startups, innovative developers and malicious actors. Many of the big tech companies and leaders in the burgeoning AI field make their code available to the public with the goal of stimulating innovation, but many would argue that adversaries outnumber legitimate users in leveraging that code for ill intent, as evidenced by foreign interference in the 2016 election.
AI can effectively create music and visual art, detect human emotion and target advertising, but it is also a powerful tool for synthesis and manipulation, ultimately ushering in an era of pliable reality. Lyrebird introduced a voice synthesizer capable of creating a digital voice that sounds identical to a given speaker from only 60 seconds of audio. This form of AI has since been popularized by “Say it with Trump” from Clash.me, which lets users create a fake audio clip of Trump saying whatever they choose to input. In addition, another type of AI makes it possible to create fake video, as evidenced by the synthesized video of former President Obama produced in a University of Washington study.
Although AI brings benefits when harnessed and used appropriately, it will also make it difficult to distinguish truth from fiction online. While the ethical use of AI is a topic of discussion in the U.S., it’s not even an afterthought for the purveyors of AI-driven misinformation abroad. From tampered loan applications and manipulated political outcomes to false advocacy movements, fake identities and impersonation, the implications of AI misuse extend to everyone. Most importantly, AI introduces the ability to influence and even instigate warfare. The Chinese have already stated that they believe AI will change the battlefield, allowing them to move from “informatized” warfare to “intelligentized” warfare. And in this arms race, AI so far has proven far more effective at attack than at defense.
It’s About the Data
Ultimately, AI is all about data: collecting and analyzing it to make informed decisions, or providing instant information so humans can take timely, educated action. Although AI has been around since the 1950s, it remains to be seen whether it can advance to the point of incorporating judgment, future impact and diagnosis into every decision it makes while also removing bias and appropriately managing uncertainty and preferences. Consequently, many would argue that federal regulation is needed to temper applications of AI, but just as many would say that it’s either too late to regulate or that regulation would stifle further innovation. Regulatory agencies such as the Federal Aviation Administration and the Food and Drug Administration have proven critical over the years, and while a similar agency may make sense for AI, the private sector is largely responsible for AI advancements and for the data it generates and manages.
New wealth is quickly being created by that data—many of the same companies leading the charge behind AI also have the biggest collections of personal, monetizable data. As Thomas Piketty explains in “Capital in the Twenty-First Century,” the more data you have, the better the algorithms, and the better the algorithms, the better the quality of service—or, the greater the likelihood of surpassing your competitors.
As a result, we can expect the data boom to continue, but individual users will no longer offer up their personal data to Google, Facebook and Amazon for free. Instead, they may sell their data and stipulate how it can be used.
There is no single use for AI, nor a single way of managing and using it that can be universally accepted as safe. However, there are clear examples (e.g., cancer diagnosis) where AI and humans working as a team achieve higher success rates and deliver more benefits than either performing the task alone. Computers were originally built to do things faster than humans and to make life easier and better for mankind. AI was built with the same intent, and just as humans eventually learned to trust their computer programs, so too must we accept the improvements AI offers. However, we must also realize that, as with any technological advancement, there are ill-intentioned actors seeking to harness the power of AI and use it to the detriment of others.
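One common pattern behind the AI-human teaming described above is confidence-based deferral: the model acts only on predictions it is confident about and routes uncertain cases to a human reviewer. The sketch below is purely illustrative; the function names, labels and threshold are assumptions, not anything described in this article or any specific clinical system.

```python
# Illustrative sketch of human-in-the-loop triage (all names and the
# 0.9 threshold are hypothetical, not from any real diagnostic system).

def triage(prediction: str, confidence: float, threshold: float = 0.9):
    """Return (decision, decided_by).

    Predictions at or above the confidence threshold are accepted
    automatically; anything below it is deferred to a human reviewer.
    """
    if confidence >= threshold:
        return prediction, "model"
    return prediction, "human_review"

# Example: a batch of model outputs with their confidence scores.
cases = [("benign", 0.97), ("malignant", 0.62), ("benign", 0.91)]
routed = [triage(label, score) for label, score in cases]

# Only the low-confidence middle case is flagged for human review.
flagged = [r for r in routed if r[1] == "human_review"]
```

The design choice is the point of the article's cancer-diagnosis example: neither the model alone nor the human alone sees every case, so each spends effort where it adds the most value.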
Colby Proffitt is a senior technology strategist for NetCentrics Corporation.