Artificial Intelligence Systems Will Need to Have Certification, CISA Official Says


A process for vetting algorithms and their input data is needed to build confidence in the tech but is still very far off.

Vendors of artificial intelligence technology should not be shielded by intellectual property claims and will have to disclose elements of their designs and be able to explain how their offerings work in order to establish accountability, according to a leading official from the Cybersecurity and Infrastructure Security Agency.

“I don’t know how you can have a black-box algorithm that’s proprietary and then be able to deploy it and be able to go off and explain what’s going on,” said Martin Stanley, a senior technical advisor who leads the development of CISA’s artificial intelligence strategy. “I think those things are going to have to be made available through some kind of scrutiny and certification around them so that those integrating them into other systems are going to be able to account for what’s happening.”

Stanley was among the speakers on a recent Nextgov and Defense One panel where government officials, including a member of the National Security Commission on Artificial Intelligence, shared some of the ways they are trying to balance reaping the benefits of artificial intelligence with risks the technology poses. 

Experts often discuss the rewards of programming machines to handle tasks that humans would otherwise have to labor over, for both offensive and defensive cybersecurity maneuvers. But the algorithms behind such systems, and the data used to train them, are also vulnerable to attack. And the question of accountability applies to both users and developers of the technology.

Artificial intelligence systems are code that humans write, but they hone their abilities, becoming stronger and more efficient, using the data they are fed. If that data is manipulated, or “poisoned,” the outcomes can be disastrous.
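
To make “poisoning” concrete, here is a minimal illustrative sketch, not drawn from the article, of one of the simplest such attacks, label flipping, in which an attacker with write access to a training set silently mislabels a small fraction of examples. The function name flip_labels and its parameters are our own invention.

```python
import random

def flip_labels(dataset, fraction=0.05, target_label=0):
    """Illustrative label-flipping poisoning (hypothetical helper):
    mislabel a random `fraction` of (features, label) pairs as `target_label`."""
    poisoned = list(dataset)
    num_poisoned = int(fraction * len(poisoned))
    for i in random.sample(range(len(poisoned)), k=num_poisoned):
        features, _ = poisoned[i]
        poisoned[i] = (features, target_label)
    return poisoned
```

A model trained on such a set can look healthy on most inputs while quietly failing on the cases the attacker targeted, which is part of why controlling who touches the training data matters.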

Changes to the data can be ones that humans wouldn’t necessarily recognize, but that computers do.

“We’ve seen ... trivial alterations that can throw off some of those results, just by changing a few pixels in an image in a way that a person might not even be able to tell,” said Josephine Wolff, a Tufts University cybersecurity professor who was also on the panel. 
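
Wolff is describing what researchers call adversarial examples. One well-known way to generate them is the fast gradient sign method (FGSM) of Goodfellow et al.; the sketch below, which assumes a PyTorch image classifier and uses a helper name of our own (fgsm_perturb), nudges every pixel by at most epsilon in the direction that increases the model’s loss.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Return `image` shifted by at most `epsilon` per pixel so as to
    increase the classifier's loss (fast gradient sign method)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # A per-pixel shift of +/- epsilon is typically invisible to a person,
    # yet is often enough to change the predicted class.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

For a small epsilon the perturbed image is, to a human eye, the same picture, which is exactly the “changing a few pixels” failure mode Wolff describes.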

And while it’s true that behind every AI algorithm is a human coder, the designs are becoming so complex that “you’re looking at automated decision-making where the people who have designed the system are not actually fully in control of what the decisions will be,” Wolff said.

This makes for a threat vector where vulnerabilities are harder to detect until it’s too late.

“With AI, there’s much more potential for vulnerabilities to stay covert than with other threat vectors,” Wolff said. “As models become increasingly complex it can take longer to realize that something is wrong before there’s a dramatic outcome.”

For this reason, Stanley said, an overarching factor CISA uses to help determine which use cases AI gets applied to within the agency is the extent to which they offer high benefits and low regrets.

“We pick ones that are understandable and have low complexity,” he said.

Among the other things federal personnel need to be mindful of is who has access to the training data.

“You can imagine you get an award done, and everyone knows how hard that is from the beginning, and then the first thing that the vendor says is ‘OK, send us all your data,’ how’s that going to work so we can train the algorithm?” he said. “Those are the kinds of concerns that we have to be able to address.” 

“We’re going to have to continuously demonstrate that we are using the data for the purpose that it was intended,” he said, adding, “There’s some basic science that speaks to how you interact with algorithms and what kind of access you can have to the training data. Those kinds of things really need to be understood by the people who are deploying them.”

A crucial but very difficult element to establish is liability. Wolff said ideally, liability would be connected to a potential certification program where an entity audits artificial intelligence systems for factors like transparency and explainability. 

That’s important, she said, for answering “the question of how can we incentivize companies developing these algorithms to feel really heavily the weight of getting them right and be sure to do their own due diligence knowing that there are serious penalties for failing to secure them effectively.”

But this is hard, even in the world of software development more broadly. 

“Making the connection is still very unresolved. We’re still in the very early stages of determining what would a certification process look like, who would be in charge of issuing it, what kind of legal protection or immunity might you get if you went through it,” she said. “Software developers and companies have been working for a very long time, especially in the U.S., under the assumption that they can’t be held legally liable for vulnerabilities in their code, and when we start talking about liability in the machine learning and AI context, we have to recognize that that’s part of what we’re grappling with, an industry that for a very long time has had very strong protections from any liability.”

View from the Commission 

Responding to this, Katharina McFarland, a member of the National Security Commission on Artificial Intelligence, referenced the Pentagon’s Cybersecurity Maturity Model Certification program.

The point of the CMMC is to establish liability for Defense contractors, Defense Acquisition’s Chief Information Security Officer Katie Arrington has said. But McFarland highlighted difficulties facing CMMC that program officials themselves have acknowledged.

“I’m sure you’ve heard of the [CMMC], there’s a lot of thought going on, the question is the policing of it,” she said. “When you consider the proliferation of the code that’s out there, and the global nature of it, you really will have a challenge trying to take a full thread and to pull it through a knothole to try to figure out where that responsibility is. Our borders are very porous and machines that we buy from another nation may not be built with the same biases that we have.”

McFarland, a former head of Defense acquisitions, stressed that AI is more often than not viewed with fear and said she wanted to see more of a balance in procurement considerations for the technology.

“I found that we had a perverse incentive built into our system and that was that we took, sometimes, I think extraordinary measures to try to creep into the one percent area for failure,” she said. “In other words, we would want to 110% test a system and in doing so, we might miss the venue of where its applicability in a theater to protect soldiers, sailors, airmen and Marines is needed.”

She highlighted upfront a need for testing and verification but said it shouldn’t be done at the expense of adoption. To that end, she asked that industry help by sharing the testing tools it uses.

“I would encourage industry to think about this from the standpoint of what tools would we need—because they’re using them—in the department, in the federal space, in the community, to give us transparency and verification,” she said, “so that we have a high confidence in the utility, in the data that we’re using and the AI algorithms that we’re building.”
