Scientists Weigh in on the Ethics of Next-Generation AI


The release of a powerful and publicly available AI has raised questions about both the technology's potential and its risks.

Artificial intelligence is a science that seems to have reached critical mass, with new announcements and advancements hitting the news on a regular basis. Perhaps the biggest AI story of the past year was the release of ChatGPT, which promises to revolutionize not only how AI is trained and operates, but also how this incredibly powerful technology can be made available to anyone who simply asks it questions in plain language. I reviewed ChatGPT when it debuted and found that it not only lived up to the hype, but exceeded my highest expectations, doing everything I asked of it, from programming in C++ to creating a cute bedtime story.

I also had some fun over the holidays with OpenAI’s image-generation model, DALL-E, directing it to create both cute and powerful images using nothing but my words and imagination. Both ChatGPT and DALL-E are free to experiment with, so give them a try if you have not yet done so.

And while ChatGPT is the first widely used, publicly available AI with a natural language processing interface (it attracted more than a million users within its first week of release), there are sure to be many others in the near future. But as amazing as this new flavor of AI technology is, it also brings up questions of ethics, not just for ChatGPT but for all future projects in this area. In a previous Nextgov column, I talked with Navrina Singh, the founder and CEO of Credo AI, about the importance of ethical AI and some of the dangers the new technology could foster. That was several weeks before ChatGPT was released, so her warnings did not specifically take it into account.

Now that ChatGPT has shown what is possible, even for regular people who are not data scientists or AI specialists, other experts in the AI field are weighing in as well. While everyone I talked with was genuinely excited and impressed with the new technology, there were also some concerns. 

This week I talked with two AI experts on the topic. The first was Sagar Shah, a client partner with Fractal.ai, a global AI and advanced analytics company. And the second was Moses Guttmann, the CEO and co-founder of ClearML, a machine learning operations—MLOps—platform being used by some of the largest companies in the world.

Nextgov: ChatGPT has really shown us what is possible with AI technology in a way that anyone can experience. But beyond just that platform, is the science of AI also rapidly advancing in terms of complexity and capabilities?

Guttmann: I think that in the past five years we have seen immense growth in AI, from academic proofs of concept to purpose-built products reaching general audiences. With the increasing effort to democratize AI, we are seeing more companies adopt machine learning—ML—as part of their internal research and development efforts. I believe that will continue.

Shah: The field is rapidly changing, and I think we’ve seen the fruits of that labor in the last few years: continued progress in the development of more advanced and powerful machine learning algorithms. Natural language processing—NLP—and the data used to inform machine learning models have helped push the technology forward. I also believe that the growth of MLOps has played a major role in developing more sophisticated AI by creating a more efficient loop for experimentation, iteration and deployment of machine learning models, ultimately resulting in greater scalability and maintainability of more complex AI systems. Recently, work at Fractal’s research arms on deep reinforcement learning, quantum computing, generative AI and neuroscience has been generating good insight into what’s coming next in the world of AI.
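
The experiment-tracking loop Shah describes is easiest to see in miniature. The sketch below is a rough illustration only, using the open-source Python SDK from ClearML (Guttmann's company) as one example of MLOps tooling; the project name, hyperparameters and fake loss values are placeholders I made up, and running it assumes the clearml package is installed and already configured for a workspace.

```python
# Rough illustration of one pass through an experiment-tracking loop using
# ClearML's open-source SDK. The project name, hyperparameters and fake
# "loss" numbers are placeholders, not a real training run.
from clearml import Task

# Register this run so its parameters and metrics are logged in one place.
task = Task.init(project_name="demo-project", task_name="toy-experiment")

# Record the hyperparameters used for this iteration of the experiment.
params = task.connect({"learning_rate": 0.01, "epochs": 5})

logger = task.get_logger()
for epoch in range(params["epochs"]):
    fake_loss = 1.0 / (epoch + 1)  # stand-in for a real training metric
    logger.report_scalar(title="loss", series="train",
                         value=fake_loss, iteration=epoch)
```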

Nextgov: The release of ChatGPT has really set the world on fire. What makes ChatGPT so special compared with everything that came before it?

Shah: The caliber of its NLP technology is one thing, along with the team behind it; the model was designed to create text that’s difficult to distinguish from human-written text. I think a major factor that makes ChatGPT stand out, however, is the human element in its training methodology. Reinforcement learning through conversations and responses sourced from people, combined with a reward model algorithm, plays a critical role in its ability to generate the natural-sounding responses we see now, as well as to learn from its mistakes and get better at engaging in conversations over time.
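
To make Shah's description a bit more concrete, here is a deliberately toy sketch of the reward-model idea: several candidate responses are scored, and the highest-scoring one is preferred. The scoring heuristic, function names and sample answers below are stand-ins of my own, not OpenAI's actual reward model, which is a neural network trained on human preference rankings.

```python
# Toy sketch of the reward-model idea behind reinforcement learning from
# human feedback. A real reward model is learned from human preference
# rankings; this hand-written heuristic and the sample answers are stand-ins.

def toy_reward(prompt: str, response: str) -> float:
    """Score a candidate response to a prompt (higher is better)."""
    score = 0.0
    if response.strip():
        score += 1.0                           # prefer non-empty answers
    if response.strip().endswith("."):
        score += 0.5                           # prefer complete sentences
    score -= 0.01 * abs(len(response) - 120)   # prefer moderately long answers
    return score

def pick_preferred(prompt: str, candidates: list[str]) -> str:
    """Rank candidates by reward and keep the best one. In training, such
    rankings (supplied by people) are used to update the model itself."""
    return max(candidates, key=lambda r: toy_reward(prompt, r))

if __name__ == "__main__":
    prompt = "What is machine learning?"
    candidates = [
        "",
        "ML.",
        "Machine learning is a field of AI in which models learn patterns "
        "from data instead of being explicitly programmed.",
    ]
    print(pick_preferred(prompt, candidates))  # prints the third answer
```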

Guttmann: The main leap ChatGPT represents is, in essence, the ability to curate knowledge in an extractable way. Specifically, you can think of generative NLP models as entities with the ability to generate content that has roots in a knowledge base. That connection is what makes them create coherent content. Well, sometimes coherent content.

With the additional conversational capabilities, that knowledge can now be extracted via a simple question or a set of questions. Specifically, when you ask ChatGPT a question, you interact with the model’s understanding of the data it was trained on. This is exactly the interface we have as human beings for accessing knowledge from one another. That makes ChatGPT a true leap in creating models that really learn from data, because learning means not only finding patterns and rules, but also being able to communicate them. This is truly amazing.

Nextgov: What are the dangers of having such powerful technology given to the public?

Guttmann: Well, as they say, never trust anything you find on the internet; I guess that’s truer now than ever. The main difference with ChatGPT is scale. If someone wants to create enough content with a specific flavor to it, they can now automate that process and shift the balance. And that is truly alarming.

For the sake of completeness, someone could already have trained an AI model for that specific task and then used it for the rephrasing, but that process is definitely not accessible to the general public, or even to software engineers.

Shah: The big danger is having misinformation presented as fact. Although ChatGPT can be used to create college essays or write applications, the sometimes technical nature of these tasks requires human fact-checking to ensure the output is even usable. After a long enough prompt, think 500 words or more, ChatGPT’s cadence becomes repetitive in sentence structure. And the knowledge base it’s modeled on isn’t current either; it only goes up to 2021, so the AI is, in effect, operating on a lag in terms of available data.

Nextgov: One of the things that some AI experts say is required to eliminate many of the dangers associated with AI is to have the technology be trained and deployed ethically. What actually is ethical AI?

Shah: Formally, ethical AI is AI that upholds key standards such as fairness, transparency, privacy, accountability, human centricity, adaptability and contestability. Every AI should be ethical, but we are currently far away from that.

Guttmann: Ethical AI can have a lot of definitions, as “ethicality” is a fluid and ever-changing concept. But, generally speaking, it refers to building ML models that are thoughtfully and transparently trained. If models are thoughtfully and transparently trained, they’re less likely to be biased or to do harm to a company, a group of people or society at large.

Candidly, most ML models don’t need to be filtered through an ethical lens. The overwhelming majority automate more rote tasks and activities, and in those instances, ethicality is less of a factor.

But models that have more of a direct impact on a human’s decision making or livelihood should be built morally, ethically and transparently. Models that make decisions about people's access to healthcare or employment, for example, need to be ethical.

Nextgov: Okay, that makes sense. So, how do we ensure that AIs are built ethically in the future?

Shah: We need to first establish clear ethical guidelines for AI development, such as ensuring AI systems respect user privacy and autonomy, promote fairness and non-discrimination and are transparent about their decision-making processes. It’s also important to incorporate ethical considerations into the design of AI systems, such as using AI algorithms that are able to explain their decisions, or using AI systems to promote the public good. 

As AI becomes more prominent, there might be a need to establish independent oversight bodies to help ensure AIs are created in a responsible manner, while educating developers and decision-makers on the potential risks of developing AI without ethical parameters. And finally, there should be investment in research on AI development techniques that mitigate bias in AI systems and promote AI’s use for positive social good.

Guttmann: Ethicality has to be a guiding light from the beginning of a model’s development all the way through to the end. That means allowing for model auditing by humans, training models with diverse datasets, bringing together a wide range of ML experts from different backgrounds to support a model’s creation, and having explicit rules and guidelines in place for how models can be used or commercialized.
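
One concrete way to read Guttmann's point about human auditing: before a model that affects access to healthcare or employment is deployed, compare its decisions across groups and flag large gaps. The sketch below is a minimal illustration; the predictions, group labels and the 80 percent ratio threshold are made-up examples of mine, not an industry standard.

```python
# Minimal illustration of one auditing step: compare a model's positive
# decisions (1 = approve) across demographic groups and flag large gaps.
# The predictions, group labels and 0.8 ratio threshold are made-up examples.
from collections import defaultdict

def approval_rate_by_group(predictions, groups):
    """Return each group's share of positive decisions."""
    counts, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        counts[group] += 1
        positives[group] += pred
    return {g: positives[g] / counts[g] for g in counts}

def flag_disparity(rates, ratio_threshold=0.8):
    """Flag any group whose rate falls below ratio_threshold times the best rate."""
    best = max(rates.values())
    return {g: rate / best < ratio_threshold for g, rate in rates.items()}

if __name__ == "__main__":
    preds  = [1, 1, 1, 1, 0, 1, 0, 0, 1, 0]   # hypothetical model outputs
    groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
    rates = approval_rate_by_group(preds, groups)
    print(rates)                  # {'A': 0.8, 'B': 0.4}
    print(flag_disparity(rates))  # {'A': False, 'B': True} -> group B flagged
```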

John Breeden II is an award-winning journalist and reviewer with over 20 years of experience covering technology. He is the CEO of the Tech Writers Bureau, a group that creates technological thought leadership content for organizations of all sizes. Twitter: @LabGuys