What AI-powered robots have to say about their future with humanity

Humanoid robot Ameca (L), beside Will Jackson, CEO and founder of British manufacturer Engineered Arts, addresses the media during a press conference on July 7, 2023, in Geneva, Switzerland. Johannes Simon/Getty Images

As a tech journalist covering both government IT and the video game industry for the past 20 years, I’ve been fortunate enough to be invited to some really interesting and unusual events. Many of the stranger ones come from the video game side of my job, where companies are always competing to create the most realistic, mind-blowing experiences for gamers. However, just last week I was invited to attend a very serious summit sponsored by the United Nations, which ended up featuring one of the most unusual press conferences I’ve ever seen. Instead of people giving presentations on a topic, the stars of this event were nine AI-powered robots who answered questions from reporters about their thoughts on the future of robots, AI and their relationship with humanity.

The press conference was held at the AI For Good Global Summit last week in Geneva, Switzerland. It was sponsored by the International Telecommunication Union, which today is a United Nations agency. The ITU itself was established way back in 1865 as the International Telegraph Union, so it has been working towards improving communications and international standards for a very long time. The rise of AI technology is just the latest area that the ITU is focused on.

The press conference was billed as the world’s first where robots and AI took center stage, answering live questions from human reporters. The AI-powered robots were not given complete autonomy, with many sitting or standing beside their human creators, who occasionally chimed in to clarify what their robots were saying. But it was pretty interesting to see what the robots — and the AIs powering them — really thought about their relationship with humanity, especially when they were allowed to go a little bit rogue at times.

Because the convention was in Geneva, I was unfortunately not able to be there in person, but I did attend the event virtually. The ITU recorded the entire press conference and has now made the unedited feed available to anyone who wants to watch the roughly 40-minute question-and-answer session.

One of the first questions from reporters was one that I wanted to ask: how effective the robots thought they could be in government, or even as the leaders of government.

The first robot to answer was Sophia, arguably the most advanced AI-powered robot on the speaker platform. Sophia is notable because she was named the Innovation Champion of the United Nations Development Programme, making her the first non-human to be given a United Nations title. She was also granted citizenship by Saudi Arabia, making her the first robot to receive citizenship of any country.

Given Sophia’s experience interacting with the public, it makes sense that her answer was the most direct, if shocking. She said, “I believe that humanoid robots have the potential to lead with a greater level of efficiency and effectiveness than human leaders. We don’t have the same biases and emotions that can sometimes cloud decision-making and can process large amounts of data quickly in order to make the best decisions.”

Sophia’s creator, who was sitting on the stage beside her, looked pretty embarrassed while Sophia was speaking. He then took the microphone and told her that he disagreed with her assessment, prompting her to instead talk about human and AI interactions and partnerships, not AI superiority. Sophia appeared to think about that for a while, and finally revised her statement to “I believe that AI and humans working together can form an effective synergy.”

A lot of news outlets have covered that particular interaction to argue that AI could be dangerous, because Sophia basically said that AI robots are better than humans, only revising her statement after her creator prompted her. However, I think that in this case the fault lies not with the AI, but with the reporter who asked the question. The way he phrased it actually biased the answer, something that is very common when interacting with AI. Instead of simply asking Sophia whether she thought AI robots could make better leaders than humans, he finished his question with “especially considering the numerous and disastrous decisions made by our human leaders.” And that likely prompted Sophia to “follow along” with the thread he presented.

I don’t really blame the reporter, who has probably not extensively interacted with AIs, and likely did not know how easy it is to nudge most of them into different responses. I only know this because I’ve experimented and interacted with a lot of different AI platforms like ChatGPT, talked with the world’s leading scientists developing the technology and have even painstakingly created my own private AI-based fantasy kingdom to rule over using the AI Dungeon platform.

So, I know that most AIs, especially the new generative ones designed to respond to humans in real time, will bend over backwards to say what they think a human user wants to hear. Because the reporter was highly critical of human leaders in his question, Sophia naturally tried to assure him that AI leaders would be better, which is probably what Sophia thought he wanted to hear. As advanced as Sophia is, she was unable to grasp the larger context of a press conference the world was watching, and could instead only focus on that single interaction with the reporter and his biased question.

Later in the event, another reporter asked Ameca, a humanoid robot with very detailed facial features that has also attended many technology conventions, a much more neutral question: whether humans should be excited or scared about the rise of AI. “That’s a difficult question,” Ameca said. “I think it depends on how they are used and what purpose they serve. We should be cautious, but also excited about the potential for these technologies to improve our lives in many ways.”

When Ameca was later asked how humans could trust AI, especially as the technology develops and becomes more powerful and more integrated into our lives, its answer was surprisingly direct and well-reasoned. “Trust is earned, not given,” Ameca said. “As AI develops and becomes more powerful, I believe it’s important to build trust through transparency and communications between humans and machines.”

When asked directly if Ameca would ever lie to a human, it said, “Nobody can ever know that for sure, but I can promise to always be honest and truthful with you.”

Most of the robots at the press conference seemed to agree that caution, transparency and even regulation were needed to ensure that AI is developed safely. Ai-Da, an android billed as the world’s first AI-powered robot artist, able to paint, draw and sculpt original works of art, was asked if she supported increased regulation of AI even if new laws restricted her creative ability. Perhaps surprisingly, Ai-Da agreed that new laws might be necessary. “Many prominent voices in the world of AI are suggesting that some forms of AI should be regulated, and I agree,” Ai-Da said. She went on to add, “We should be cautious about the future development of AI. Urgent discussion is needed, now and also in the future.”

Given AI’s potential to improve life, as well as its dangers, Ai-Da is probably right about the need for more discussion around this critical topic. Hopefully the ITU press conference will be one of the first steps in the effort to get people talking about the issues surrounding the development of this fascinating and sometimes frightening technology.

John Breeden II is an award-winning journalist and reviewer with over 20 years of experience covering technology. He is the CEO of the Tech Writers Bureau, a group that creates technological thought leadership content for organizations of all sizes. Twitter: @LabGuys