New Google tool lets budding generative AI scientists practice their prompting

Thapana Onphalai/Getty Images

Generative AI has the potential to solve a lot of problems, but needs to be properly directed to do so.

Generative AI made inroads almost everywhere in 2023, and it is poised to do even more this year. That is especially true in government, now that ethical guidelines for proper AI usage have been established through projects like the Government Accountability Office's AI Accountability Framework for federal agencies and the White House Office of Science and Technology Policy's Blueprint for an AI Bill of Rights. Even the global shortage of the dedicated chips needed to train and run the large language models behind generative AI is likely to ease soon. In fact, the only thing that might still hold AI back from reaching its true potential is how skillfully its human users can interact with it.

As impressive as generative AI is, it can't do very much on its own. It has the potential to solve a lot of problems, but it needs to be properly directed to do so. That is the realm of a new kind of technology worker called a prompt engineer. The field is relatively new, but it is also a critical one for future AI development. It's so new that if you search resume sites for qualified prompt engineers, you won't find anyone with more than a couple of years of experience, and nobody holds a degree in it either. But those jobs pay well, with top prompt engineers earning up to $300,000 per year.

The job of a prompt engineer is to skillfully write queries for a generative AI so that it provides accurate answers to complex questions. That is a lot more difficult than it sounds, since AIs have no real understanding of context or subtlety. If a prompt is not detailed enough, the AI might return a wrong or even nonsensical answer. Even worse, most generative AIs will never admit that they don't know an answer, so if a prompt does not provide enough context, the AI will very likely hallucinate, or make something up.
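As a rough illustration of the idea, here is a hypothetical sketch (the prompts, the checklist, and the `has_context` helper are all invented for this example, not drawn from any real AI product) of how a vague prompt differs from one that supplies the context an AI cannot infer on its own:

```python
# Hypothetical example: the same request phrased two ways.
# A vague prompt leaves the AI to guess at audience, scope, and format.
vague_prompt = "Summarize the report."

# A detailed prompt pins down audience, length, sourcing, and a
# fallback for unknown answers, leaving less room to hallucinate.
detailed_prompt = (
    "Summarize the attached annual report in three bullet points for a "
    "non-technical audience, citing only figures that appear in the "
    "report, and say 'not stated' if a figure is missing."
)

# A simple checklist a prompt engineer might apply before submitting:
# does the prompt name an audience, a format, and a fallback answer?
def has_context(prompt: str) -> bool:
    checks = ["audience", "three bullet", "not stated"]
    return sum(c in prompt for c in checks) >= 2

print(has_context(vague_prompt))     # False
print(has_context(detailed_prompt))  # True
```

The checklist itself is just one possible heuristic; the point is that a detailed prompt makes the desired answer far harder to miss.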

Yet another problem is that if you ask a generative AI the same question twice, it will sometimes give different answers. Occasionally that is the fault of the AI, but more often it happens because the prompt was not detailed enough to lock in a specific answer. That was one of the main concerns raised by government officials during a panel discussion I recently moderated, with several asking how the government could rely on AI if it sometimes gave different answers to the same question. Ultimately, they concluded that the solution might come down to better prompt engineering.

“We have to make sure that our workforce is properly trained and educated on what generative AI is, what the prompt engineering is designed to do and what kind of a product they will ultimately be producing,” said Balaji Subramaniam, director of information technology for the Transportation Security Administration.

Practicing better prompting

One of the most important skills a prompt engineer can have is the ability to think like a generative AI, or more specifically, to know how a particular AI is likely to respond to different prompts. It's about knowing the limitations of the technology and interacting with an entity that understands no context beyond what is carefully explained to it each and every time. Put another way, generative AI has a lot of potential intelligence, but not much wisdom.

And the only way to really gain experience in prompt engineering is through practice. The best prompt engineers have spent a long time running trial-and-error experiments with the specific AIs they work with, and learning from that experience. I went through that myself for many months while learning the ins and outs of AI Dungeon, a generative AI-powered game that lets you create elaborate fantasy universes to play in, but only if you can really get into sync with the AI powering your worlds.

Not everyone can land one of those very high-paying prompt engineer jobs, but everyone can improve their prompting skills with practice. Given how much AI is starting to become a part of our lives, it's a skill that everyone should probably have. To that end, Google recently launched Say What You See, a fun little game created by Google Artist in Residence Jack Wild.

Despite its simple-looking interface, Say What You See has quite a few advanced features. Players are shown a drawing, photograph, painting or other image that was generated by AI, but not the prompt used to create it. The player's job is to act like a prompt engineer and explain to the AI how to recreate the image as closely as possible. Once you have written your prompt, which is limited to 120 characters, the game feeds it into Google's AI and generates a new image. Players are then scored on how closely their image matches the original.

If your image turns out to be an inaccurate match, which will probably happen a lot at first, the game will give you hints about how to improve your prompt and let you try again. You need to get a certain percentage match across three images to pass each level of the game, with the difficulty increasing as you — hopefully — improve your prompt engineering skills. 

I had a lot of fun with Say What You See and even improved my prompt engineering skills. I never got close to a 100% match, however. Maybe if someone does, Google will offer them one of those lucrative prompt engineer jobs, or perhaps that is just wishful thinking. In any case, the game is fun to play, and should help anyone improve their AI prompting, which will be an increasingly valuable skill as the technology continues to evolve.

John Breeden II is an award-winning journalist and reviewer with over 20 years of experience covering technology. He is the CEO of the Tech Writers Bureau, a group that creates technological thought leadership content for organizations of all sizes. Twitter: @LabGuys