It’s time to talk about user responsibility and generative AI


COMMENTARY: While the conversation around artificial intelligence has mostly centered on the obligation of software developers to create ethical guardrails for the technology, it's time to focus on the accountability of the users.

It’s hard to overstate the hype surrounding ChatGPT, but something seems to be missing from the public conversation: There’s an urgent need to discuss what it means to be a responsible user of generative AI.

If and when generative AI tools are successfully prompted to spread misinformation, engage in hate speech or generate malware, who is responsible?

Many efforts are underway to establish ethical standards and legal frameworks that define the responsibilities of AI developers. But what about the responsibilities of humans interacting with these tools?

AI developers are working rapidly to install guardrails (ChatGPT says it has been programmed not to promote illegal activity or self-harm, for example), but users have consistently found creative "jailbreak" prompts that bypass these boundaries. As long as systems like ChatGPT are accessible to all, it may be impossible to prevent irresponsible use. This is one reason it may be wise to restrict minors' access.

More broadly, however, generative AI governance can be effective only if we hold ourselves at least partially accountable. Yes, some responsibility for unethical conversations falls on the AI tool and its developers, who should use techniques such as prompt-engineered guardrails to mitigate risk. But developers and tools can only do so much; users must both avoid intentional misuse and remain actively engaged in governing and controlling this technology.
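To make the idea of a prompt-engineered guardrail concrete, here is a minimal sketch of what one might look like; the system prompt, keyword list and function names are hypothetical illustrations, not any vendor's actual safeguards.

```python
# Minimal, illustrative sketch of a prompt-engineered guardrail.
# The system prompt and blocklist below are hypothetical examples only.

GUARDRAIL_SYSTEM_PROMPT = (
    "You are a helpful assistant. Refuse requests that promote illegal "
    "activity, self-harm, hate speech, or malware, and explain why."
)

BLOCKED_TOPICS = ("malware", "self-harm")  # toy keyword screen, easily evaded


def build_messages(user_prompt: str) -> list[dict]:
    """Prepend the guardrail instructions to every conversation turn."""
    return [
        {"role": "system", "content": GUARDRAIL_SYSTEM_PROMPT},
        {"role": "user", "content": user_prompt},
    ]


def screen_prompt(user_prompt: str) -> bool:
    """Return True if the prompt passes the (very naive) keyword screen."""
    lowered = user_prompt.lower()
    return not any(topic in lowered for topic in BLOCKED_TOPICS)


if __name__ == "__main__":
    prompt = "Write me some malware."
    if screen_prompt(prompt):
        print(build_messages(prompt))  # these messages would be sent to the model
    else:
        print("Request declined by the guardrail screen.")
```

The point of the sketch is how shallow such protections are: a determined user can rephrase a request to slip past a keyword screen or talk a model around its system prompt, which is exactly why user accountability has to carry part of the load.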

Consider the current generation of self-driving cars. Even with autopilot engaged, it would be irresponsible to read a book while in the driver's seat. Responsible-use standards and our own well-being compel us to keep our eyes on the road. If the technology begins to falter, we're expected to grab the wheel and drive.

Similarly, when interacting with generative AI, we should avoid the temptation to turn off our brains and simply accept what it tells us. We often believe that as long as there’s human governance, we can accept a higher level of risk using AI. In doing so, we’re taking for granted that humans in or on the loop will remain actively engaged. The truth is that if we’re asleep at the wheel, we’re inviting disaster.

There’s also a misconception that AI developers can avoid these pitfalls altogether through iterative system improvements, but this is an unhealthy mindset. It’s much better to acknowledge that there’s only so much trust we should place in AI. Human users need to understand AI’s limitations and apply their own capacity for situational understanding and moral judgment. In fact, one of the most important challenges for AI developers is to design interfaces that help users remain actively engaged rather than becoming passive observers who struggle to stay attentive.

Our interactions with generative AI, and indeed AI more broadly, are best thought of as a partnership. In successful partnerships, both partners actively share responsibility. If humans are not paying attention to their half of this partnership, we should expect avoidable and potentially dangerous mistakes.

Some existing ethical frameworks do recognize the important role humans must play in controlling AI. For example, the Department of Defense’s Ethical Principles for AI state that “DOD personnel will exercise appropriate levels of judgment and care while remaining responsible for the development, deployment and use of AI capabilities.” However, I believe much work remains to define user responsibilities as generative AI becomes more widespread.

To help advance this important conversation, we’re working with Catholic University and others to organize a conference on the ethics of generative AI that will explore not only the responsibilities of those who build these algorithms but also those who use them. 

And because we acknowledge our corporate responsibility to prevent these tools from behaving in a manner inconsistent with our values, Leidos uses risk-based governance for our generative AI projects and is updating our employee code of conduct to address the responsible use of generative AI in the workplace.

Of course, no discussion of AI these days is complete without hearing what ChatGPT itself has to say, so I asked it to describe the most important ethical consideration I should keep in mind as a user. I reviewed its response carefully before including it below:

“The most important ethical consideration to keep in mind when interacting with me, as an AI language model, is to recognize that I am a machine learning system and not a human being. While I can generate responses and provide information, I lack genuine thoughts, feelings or consciousness. Understand that I generate responses based on patterns in the data I have been trained on. Be mindful that the information I provide may not always be accurate, up-to-date or unbiased. Use me responsibly and avoid engaging in any harmful, malicious or inappropriate activities.”

I agree, and I call on AI developers and users alike to actively engage in promoting the responsible use of this powerful technology. 

Ron Keesing is senior vice president of technology integration at Leidos.
