The Food and Drug Administration is seeking public feedback as it works to balance the advantages and risks of deploying artificial intelligence and machine learning in drug development and approval processes.
Artificial intelligence systems could facilitate the development of new drugs and similar medical treatments, and the Food and Drug Administration is looking for feedback on how to maximize its benefits and minimize its risks.
In a notice of availability and an accompanying discussion paper, FDA officials introduced the idea of incorporating AI technologies into drug manufacturing and approval operations. Several AI capabilities—such as using the technology to identify biological targets, synthesize large volumes of data and cross-reference existing scientific literature—are mentioned as potential use cases.
“This evolving ecosystem presents unique opportunities and challenges, and FDA is committed to working across its medical product centers with partners domestically and internationally to ensure that the full potential of these innovations is realized for the benefit of the public,” the paper reads.
Concerns and risks surrounding the use of artificial intelligence—like algorithmic biases, particularly when referencing or learning from older data sources—were also profiled in the series of FDA documents published earlier this week.
The paper also noted that information on algorithmic processing could be limited due to proprietary reasons.
“FDA is also aware of the potential concerns and risks with emerging innovations such as AI/ML and will share initial considerations and solicit feedback on how to help ensure the responsible utilization of AI/ML in drug development,” the paper stated. “It is important to assess whether the use of AI/ML introduces specific risks and harms.”
Questions posed by the FDA and intended for public feedback are based on these points of concern.
Within the FDA, the agency’s Center for Drug Evaluation and Research, Center for Biologics Evaluation and Research, and Center for Devices and Radiological Health are collaborating to spearhead the investigation into incorporating AI into agency operations.
The FDA’s inquiries mark drug development as yet another industry that stands to be transformed by the incorporation of these emerging technologies. Deploying responsible and ethical AI systems has been a priority within the Biden administration’s posture on critical and emerging tech regulation, with multiple executive orders and national frameworks released in recent months as a means to install guardrails in the broadly unregulated AI field.
Leadership at the FDA notes that its inquiries into AI deployment are intended to feed into the ongoing effort to develop responsible and safe AI systems.
“Our agency’s efforts in AI/ML extend beyond these initiatives,” said Patrizia Cavazzoni, the director of the CDER. “We consult product developers, engage patients, and promote regulatory science in this area, among other activities. As a public health regulatory agency, we hope to encourage the safe development of these technologies that are poised to help Americans gain quicker and more reliable access to important treatments.”