Mastering LLM Accuracy of ServiceNow LLM Applications with Prompt Engineering


While working on a multi-agent project, I realized that, even today, the accuracy of outputs from Large Language Models (LLMs) heavily depends on prompt engineering. In the paper arXiv:2404.11584, researchers evaluated various multi-agent architectures and their impact on accuracy. One agent that demonstrated exceptionally high accuracy was the Reflexion agent. According to another paper, arXiv:2303.11366, the primary advantage of this architecture is that the agent receives verbal feedback from a critic persona, which helps it formulate a better response.
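To make the Reflexion idea concrete, here is a minimal sketch of that actor/critic feedback loop. The `llm` callable, the persona names, and the "DONE" convention are all illustrative assumptions, not the paper's actual implementation:

```python
def reflexion_loop(llm, task, max_rounds=3):
    """Reflexion-style loop (sketch): an actor drafts an answer, a critic
    persona returns verbal feedback, and the feedback is fed back into the
    next attempt until the critic is satisfied or rounds run out."""
    feedback = ""
    answer = ""
    for _ in range(max_rounds):
        prompt = f"Task: {task}\n"
        if feedback:
            # Verbal feedback from the previous round steers the next draft.
            prompt += f"Previous feedback: {feedback}\n"
        prompt += "Answer:"
        answer = llm("actor", prompt)
        feedback = llm("critic",
                       f"Task: {task}\nAnswer: {answer}\n"
                       "Give one sentence of feedback, or say DONE.")
        if "DONE" in feedback:
            break
    return answer
```

In a real deployment, `llm` would wrap a chat-completion call with the given persona as the system message; the structure of the loop is what matters here.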
There are many prompt styles, but the most common and easily implemented ones are part of a Single Agent Architecture. Here are three key styles:
Direct Prompting
In this style, we ask the LLM to assume a domain-specific role and provide answers based on that role.
Example Prompt:
“You are a CMDB Implementation Specialist and a ServiceNow Architect. You provide answers in bullet points and explain each part of the query in a precise and verbose manner. You keep best practices of the implementation in mind before providing an answer.”
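In practice, direct prompting usually means pinning the persona in the system message of a chat-completions payload. A minimal sketch (the message-list shape follows the common chat API convention; the helper name is my own):

```python
def build_direct_prompt(role_description, user_query):
    """Direct prompting: fix a domain persona in the system message and
    pass the user's question through unchanged."""
    return [
        {"role": "system", "content": role_description},
        {"role": "user", "content": user_query},
    ]

persona = ("You are a CMDB Implementation Specialist and a ServiceNow "
           "Architect. You answer in bullet points and keep implementation "
           "best practices in mind.")
messages = build_direct_prompt(persona, "How should I model a server cluster?")
```

The resulting `messages` list can be handed to any chat-style LLM endpoint.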
Chain of Thought Prompting (arXiv:2201.11903)
This technique shows how simply prompting the model to reason step by step can increase the accuracy of LLM responses for complex tasks. It mirrors the human tendency to break complex tasks down into simpler ones.
Example Prompt:
*“You are a CMDB Implementation Specialist and a ServiceNow Architect. Before you answer any query, follow these steps:
[Think] - Break down the complex query into simpler thoughts and identify the information you already have.
[Feedback] - If any information is missing, ask the user for additional details before answering the query.
[Response] - If you are satisfied with the information, provide the response; otherwise, repeat steps 1 and 2.”*
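The steps above can be baked into a reusable prompt template. A small sketch (function and constant names are mine; only the step text comes from the example):

```python
COT_STEPS = (
    "[Think] - Break down the complex query into simpler thoughts and "
    "identify the information you already have.\n"
    "[Feedback] - If any information is missing, ask the user for "
    "additional details before answering the query.\n"
    "[Response] - If you are satisfied with the information, provide the "
    "response; otherwise, repeat steps 1 and 2."
)

def build_cot_prompt(persona, query):
    """Chain-of-thought prompting: prepend explicit reasoning steps so the
    model decomposes the task before answering."""
    return (f"{persona}\n"
            f"Before you answer any query, follow these steps:\n"
            f"{COT_STEPS}\n\n"
            f"Query: {query}")
```

Keeping the steps in one constant makes it easy to reuse the same reasoning scaffold across different personas.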
ReAct Agent Prompting Style (arXiv:2210.03629)
This style integrates Reasoning + Actions in LLM prompts, allowing the LLM not only to rely on its knowledge or user feedback but also to use tools to enhance its understanding and provide better feedback. This paper marked the advent of autonomous agent creation for LLMs.
Example Prompt:
*“You are a ServiceNow CMDB Implementation Specialist. You understand the intricacies of CMDB relationships. Your primary function is to understand the user’s query, whether directly or indirectly related to CMDB, and provide a response by checking the information in the CMDB of the current instance.
To formulate a response, follow this method:
[Think] - Do you have all the information required to answer the query immediately? If yes, respond.
[Act] - If not, look up the missing information in the CMDB of the current instance.
[Observe] - Review the lookup result and check whether it answers the query.
If you still lack information, repeat steps 1-3, but do not repeat more than three times.”*
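A bounded think/act/observe loop like the one in the prompt can be sketched as follows. The `llm` and `tool` callables and the "ANSWER:"/"LOOKUP:" protocol are illustrative assumptions, not part of the ReAct paper:

```python
def react_loop(llm, tool, query, max_steps=3):
    """ReAct-style loop (sketch): on each step the model either answers
    or requests a tool lookup; the tool's output is appended to the
    context as an observation, and the loop is capped at max_steps."""
    context = f"Query: {query}"
    for _ in range(max_steps):
        decision = llm(context)  # expected: "ANSWER: ..." or "LOOKUP: <ci>"
        if decision.startswith("ANSWER:"):
            return decision[len("ANSWER:"):].strip()
        if decision.startswith("LOOKUP:"):
            observation = tool(decision[len("LOOKUP:"):].strip())
            context += f"\nObservation: {observation}"
    return "Unable to answer within the step limit."
```

In a ServiceNow setting, `tool` would be a CMDB query (for example, a Table API lookup); the hard step cap mirrors the "do not repeat more than three times" instruction in the prompt.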
