Trump Administration Releases Draft Framework for the Ethical Use of Data


The draft Data Ethics Framework offers seven tenets for agencies to follow, complete with legal authorities, use cases and links to additional resources.

A team of federal data experts released a draft Data Ethics Framework with seven core tenets their colleagues need to keep in mind as agencies increase their use of data for decision making.

Data—especially at the scale and granularity collected by the federal government—is a powerful tool. But democratic governments that fail to use data ethically run the risk of losing the public’s trust and, in turn, its willingness to hand personal data over to agencies.

As part of the 20-point action plan to kick off implementation of the Federal Data Strategy in 2020, the General Services Administration was charged with creating a Data Ethics Framework “to help agency employees, managers and leaders make ethical decisions as they acquire, manage and use data.”

“Decisions made with data touch every aspect of American life,” the framework notes, particularly when the data is collected by federal agencies and the decisions being made are on behalf of the entire country. The framework looks to guide federal officials’ decision making on the use of data “with the goal of protecting civil liberties, minimizing risks to individuals and society, and maximizing the public good.”

GSA brought together a group of 14 federal officials from across government “with expertise in statistics, public policy, evidence-based decision making, privacy and analytics” to develop the framework, with added insights from the Chief Data Officer Council, the Interagency Committee on Standards Policy and the Federal Privacy Council.

“Instead of looking at issues from a single perspective, ethical decision making is best achieved by taking a holistic approach and widening the context to weigh the greater implications of data use,” the framework states.

The document includes an “about” section that “outlines the intended purpose and audience of this document,” namely agency and program leaders, including CDOs; data practitioners like statisticians, data analysts, database professionals and data scientists; employees whose job it is to collect or report data; policymakers; public relations and other communications officials; and data consumers, “such as other agencies, communities or the public.” It also includes a section defining “data ethics” and other necessary background terms.

But the meat of the framework is seven tenets, “or high-level principles,” for the ethical use of data by government agencies and a set of illustrative use cases showing how data can be used ethically to drive outcomes:

  • Be aware of and uphold applicable statutes, regulations, professional practices and ethical standards.
  • Be honest and act with integrity.
  • Be accountable and hold others accountable.
  • Be transparent.
  • Be informed of developments in the field of data science, including with data systems, techniques and technologies.
  • Be respectful of privacy and confidentiality.
  • Be respectful of the public, individuals and communities.

Each tenet comes with a set of recommendations for federal leaders and employees, citations on the legal authorities establishing the government’s abilities to use and disseminate certain types of data, and additional resources for those who need to dig deeper into a concept.

“The Data Ethics Tenets apply to all data types and data uses,” the document states. “It is understood that the same dataset may be used at different times for different purposes. No matter the data type or use, federal employees should ensure the protection of privacy—i.e., state of being free from unwarranted intrusion into the private life of individuals—confidentiality—i.e., free from inappropriate access and use—civil rights and civil liberties during data activities.”

One way to ensure agencies remain within the guardrails established by the tenets is to split data uses into two buckets: uses that affect the rights, privileges and benefits of an individual; and uses that do not affect an individual’s rights, as the data are used only to create aggregated results.

The set of use cases at the end of the document outlines these differences in real-world scenarios, such as inherent bias in artificial intelligence tools and finding the right balance of data disclosure to encourage good outcomes without causing harm.

The document is currently in draft form, with a deadline to have it completed and published before the end of 2020.