A paper published in February by Rick Battle and Teja Gollapudi of VMware's NLP Lab called into question the necessity for one of the new roles that emerged with the advent of commercially available generative AI: that of the prompt engineer.
The paper, "The Unreasonable Effectiveness of Eccentric Automatic Prompts," confirmed the need for effective prompts for an LLM to demonstrate its versatile problem-solving and basic mathematical abilities, but suggested that prompts automatically generated by the LLM at hand were more effective than those found through human trial and error.
The report concludes: "And while the prompts they generated may appear shocking to an experienced practitioner, it’s undeniable that the automatically generated prompts perform better and generalize better than 'handtuned positive thinking' prompts."
The study raises many questions, one of the most important being whether organizations need a prompt engineer at all or whether they should just use prompts generated by LLMs themselves — a kind of generative AI version of the tail-eating ouroboros.
What a Prompt Engineer Does
While the role of the prompt engineer gained widespread recognition in 2023, the function itself can be traced back to the emergence of the first full generative AI model and the need to extract information from it using prompts.
The dramatic explosion of interest in generative AI over the last 18 months has resulted in the formalization of the title to the point where there's now a Prompt Engineering and AI Institute (PEAI).
PEAI defines prompt engineers as “professional[s] who specializes in designing, optimizing, and refining prompts or inputs for AI systems, particularly generative models, to elicit accurate, relevant, and desired responses.”
It further details the functions of a prompt engineer to include:
- Crafting AI prompts to obtain responses to given queries.
- Testing and analyzing outputs from the AI by experimenting with different prompts.
- Assessing the bias in an output.
- Identifying what is lacking in AI-generated output and responding with refined prompts to optimize it.
- Embedding AI prompts into applications and software for use in automating complex or repetitive tasks.
- Working on cross-functional teams to develop products.
It goes on, but what is striking about the list is that most organizations likely have staff members who are doing much of this already.
Related Article: Generative AI, The Great Productivity Booster?
To Use or Not to Use Prompt Engineers
Like all digital workplace technologies, questions around generative AI management and use depend on the use case scenarios of the individual organization. "If the company is regularly dealing with generative AI and continuously needs to create and optimize prompts, then a prompt engineer makes sense," said Michael Ryaboy, a prompt engineer at KX. That is especially true when creativity is required — at least for now, he continued.
Sometimes prompt engineers find themselves in less AI-forward environments, and their role becomes largely educational — teaching both coworkers and leaders how to effectively apply generative AI to their products. Other times, they save software engineers valuable time by removing prompting from their workload, Ryaboy said.
“At the end of the day, if a prompt engineer will help you reach your organization's goals faster, then you 'need' a prompt engineer,” he said.
However, he allows there are cases where it's better to let the LLM prompt itself. He cites the example of classification tasks as an area where the LLM is usually better at crafting an effective prompt.
“I've automated prompts in the past, and it's still something that is done in conjunction with a person. Still, it depends on the specific task and the resources you are willing to put in,” he added. "Automating prompts can range from having an LLM recursively improve the prompt, to putting a prompt in production and checking outliers to find problematic edge cases that need to be addressed with a prompt tweak or additional engineering.”
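The recursive improvement Ryaboy describes can be sketched in a few lines. This is a minimal, runnable illustration, not his actual system: `call_llm` and `score_prompt` are hypothetical stand-ins (stubbed here so the loop executes), where a real version would call a chat-completion API and evaluate each candidate prompt against a held-out task set.

```python
def call_llm(prompt: str) -> str:
    # Hypothetical stub: a real implementation would send this
    # to an LLM API and return the model's rewritten prompt.
    return prompt + " [refined]"

def score_prompt(prompt: str, eval_set) -> float:
    # Hypothetical stub: a real scorer would run the prompt against
    # eval_set and return a metric such as accuracy. Stubbed as
    # prompt length purely so the example runs.
    return float(len(prompt))

def refine_prompt(seed: str, eval_set, rounds: int = 3) -> str:
    """Ask the LLM to rewrite its own prompt, keeping the best scorer."""
    best, best_score = seed, score_prompt(seed, eval_set)
    for _ in range(rounds):
        candidate = call_llm(
            f"Improve this prompt for accuracy on the task:\n{best}"
        )
        candidate_score = score_prompt(candidate, eval_set)
        if candidate_score > best_score:  # keep only strict improvements
            best, best_score = candidate, candidate_score
    return best
```

The outlier-checking variant he mentions would replace `eval_set` with problematic production cases collected over time, so each refinement round targets the observed failures.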
For some tasks, prompts aren't worth worrying much about, he said, as engineering time is best spent elsewhere. Improving, testing and monitoring prompts remains an open problem in the space, as small changes can have large impacts on a prompt's behavior — a conclusion that Battle and Gollapudi of VMware reached in their experiments.
This is why it's usually less important to automate prompts than to have systems in place that let an engineer modify them incrementally, with tests to make sure the results don't go haywire. That said, Ryaboy thinks automated prompting solutions will quickly improve, which will change how engineers dealing with prompts work for the better.
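Those guard-rail tests can be as simple as a fixture set that every prompt tweak must still pass. The sketch below is an assumption about how such a check might look, not a description of any specific team's setup; `classify` is a hypothetical wrapper around whatever model is in use, stubbed with a keyword rule so the check is runnable.

```python
def classify(prompt: str, text: str) -> str:
    # Hypothetical stub: a real version would send prompt + text
    # to an LLM and parse the label from its response.
    return "positive" if "great" in text.lower() else "negative"

# Known inputs with expected labels; a prompt change that breaks
# any of these should block the change from shipping.
FIXTURES = [
    ("This product is great", "positive"),
    ("Terrible experience", "negative"),
]

def check_prompt(prompt: str) -> bool:
    """Return True only if every fixture still gets its expected label."""
    return all(classify(prompt, text) == label for text, label in FIXTURES)
```

Run in CI, a check like this catches the "small change, large impact" failures the VMware paper and Ryaboy both warn about before they reach production.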
Related Article: Generative AI Results Should Come With a Warning Label
The Role of LLMs
Summit Search Group's Matt Erhard said organizations need prompt engineering, but the skills may already exist in the organization.
"Organizations need prompt engineering, but that does not mean they all need to have a dedicated role for this task. I also don’t think this means that they will rely on LLMs exclusively for the task," he said.
He sees prompt engineering as becoming one of the skills companies look for when hiring marketing team members, web designers, social media managers, content editors and writers, and others who may make use of generative AI.
Even those organizations that use LLMs for prompt generation need a human touch guiding those efforts to ensure they are using the right kinds of prompts to produce the desired output, he continued. LLMs may also be a solution for organizations that cannot afford a prompt engineer, or that lack the in-house experience to work effectively with prompts.
“It’s more efficient, and it can be a better option for organizations that don’t have team members with skills and expertise in prompt engineering,” he said. "An LLM will by and large produce better prompts than a person who is inexperienced with using generative AI.”
Do organizations need prompt engineers? As with almost every tool or method in technology, the answer is it depends, said Filipe Torqueto, US head of solutions architecture for Sensedia. We're in a period of adaptation to this new way of interacting with AI tools.
He argues that when AI is used for content generation, the prompt is the communication between AI and humans, so it's not just about the result: it's also a communication problem of giving the AI the context it needs. Prompt engineering is evolving rapidly, just as AI-driven solutions did in 2024, Torqueto said. Those advances are creating new approaches, such as:
- Multimodal prompt engineering: Mixing text, images and sometimes audio in a single prompt for richer interaction.
- Adaptive prompting techniques: A trend where AI models adjust their responses to be more personalized based on user preferences.
- Real-time prompt optimization: In this case, AI provides instant feedback on the prompts to ensure clarity and alignment.
"Prompt engineering is a sophisticated practice crucial for optimizing interactions between humans and AI models. It involves creating, refining and optimizing prompts to guide AI responses effectively,” he said.
The field requires a blend of technical skills, including a deep understanding of Natural Language Processing (NLP), familiarity with LLMs, and non-technical skills like communication, creativity, and critical thinking.
“AI vs. human prompting needs a balance. While human prompt engineers are crucial for their creativity, context and small nuances, AI can collaborate in the future to make the process easier for us," Torqueto said.