The Organisation for Economic Co-operation and Development released its global standards, which aim to ensure AI is designed to be robust, safe, fair and trustworthy.
The Organisation for Economic Co-operation and Development unveiled the first intergovernmental standard for artificial intelligence policies Wednesday, and the organization’s 36 member countries, including America, have signed on, along with Argentina, Brazil, Colombia, Costa Rica, Peru and Romania.
OECD, an international forum in which member nations work together to address the challenges of globalization, released the “Recommendation of the Council on Artificial Intelligence” to help foster a global policy ecosystem that leverages the evolving technology’s benefits while also protecting human rights and democratic values.
Andrew Wyckoff, director of OECD’s Science, Technology and Innovation Directorate, told reporters that the principles’ creators hope they’ll help shape a stable regulatory environment that promotes the tech’s positive uses while guarding against unethical abuses.
“AI is what we would call a ‘general purpose technology.’ It’s going to change the way we do things in nearly every single sector of the economy—that’s part of the reason we give so much importance to its development,” he said. “Some have termed it as ‘the invention of a method of inventions’ and in fact we can see it already affecting the process of scientific discovery and science itself.”
OECD’s recommendations are divided into two sections. The first lays out “value-based principles for the responsible stewardship of trustworthy AI.” Actors who use the tech are called on to promote five principles:
- AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.
- AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and implement appropriate safeguards—for example, enabling human intervention where necessary—to ensure a fair and just society.
- There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.
- AI systems must function in a robust, secure and safe way throughout their life cycles, and potential risks should be continually assessed and managed.
- Organizations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.
In the second section, OECD offered recommendations for national policies and international cooperation around trustworthy AI. The organization suggests that governments invest in research and development focused on the tech, foster a digital ecosystem for AI, shape an enabling policy environment, build human capacity and prepare for the labor market transformation the technology will cause, and cooperate with other nations.
To “scope” the set of principles, OECD established an expert group on AI of more than 50 members, composed of representatives from 20 governments and leaders from industry, academia and science communities. The multidisciplinary group of stakeholders advised the organization on AI best practices during four meetings between September 2018 and February 2019.
American representatives from the State and Commerce departments and the National Science Foundation participated in the principles’ development. In April, Lynne Parker, assistant director for AI in the Office of Science and Technology Policy, also briefly mentioned that the administration was in discussions with OECD about the forthcoming recommendations. In a statement, the White House confirmed that the United States “worked closely with OECD countries” throughout the development. Michael Kratsios, deputy assistant to the president for technology policy, represented America at the OECD Forum and Ministerial Council Meeting 2019 in Paris, where the organization released the principles.
There, Kratsios announced U.S. support for the recommendations.
The principles are more than suggestions: they represent a “very strong, yet non-binding political commitment,” and OECD will continue to monitor how nations implement and uphold them going forward.
On the domestic front, lawmakers Tuesday introduced bipartisan legislation that would establish a national artificial intelligence strategy and invest billions in the tech over the next five years.