
LLMOps

Advanced Prompt Engineering: Power of LLMs in Conversations

Aug 31, 2023

5 min read

In the previous blog by NimbleBox.ai, we briefly introduced Prompt Engineering along with some beginner-friendly tips to instantly boost your prompts and get the most out of your Large Language Model. Now, let us take that a step further and discuss the various techniques practitioners worldwide use to improve the solutions generated by LLMs.

In the following sections, we will focus on how you can refine your prompts using techniques like few-shot prompting, chains, and trees of thought. These methods are derived from how, in real life, getting a good solution from someone often requires supplying context and correcting it with references to what was said before.

All the techniques have been implemented on ChatNBX, our state-of-the-art chatbot that lets users play around with the biggest and most popular LLMs in the sphere at the click of a button.


Zero-Shot Prompting

Zero-Shot Prompting gives the large language model a prompt that is not exactly part of its training data, yet the model can still generate the desired result. This ability is one of the leading reasons Large Language Models are so successful.

Zero-Shot Prompting can be used for tasks like sentiment analysis of statements or summarizing a piece of content. Let us see an example of such a prompt:
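If you want to drive this from code rather than the ChatNBX interface, here is a minimal sketch of a zero-shot sentiment-analysis prompt. It uses the OpenAI Python client purely as a stand-in for any chat-completion API; the API key, model name, and review text are illustrative assumptions, not part of ChatNBX.

```python
import openai  # any chat-completion client works; this one is only a stand-in

openai.api_key = "YOUR_API_KEY"  # placeholder key

# Zero-shot: no examples are given, only the task and the input.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[
        {
            "role": "user",
            "content": (
                "Classify the sentiment of the following review as "
                "positive, negative, or neutral:\n\n"
                "'The battery easily lasts a full day, but the screen "
                "scratches far too quickly.'"
            ),
        }
    ],
)
print(response["choices"][0]["message"]["content"])
```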


Few-Shot Prompting

While LLMs have proven themselves more than effective at Zero-Shot Prompting, they may fall short on more complex tasks that require context, because of words or concepts the model hasn't seen before. (Zero-Shot Prompting can work so magically because your input shares common aspects with the data the model has been trained on.)

Few-Shot Prompting first provides context or direction to the model through a handful of examples, ensuring that the final solution it gives is the desired one. Let us see an example of such a prompt:
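Sticking with the same hypothetical client, a few-shot version of the earlier prompt packs a few labelled examples in front of the new input so the model can infer the pattern; the examples themselves are made up for illustration.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder key

# Few-shot: labelled examples precede the new input, giving the model
# the context it needs to follow the pattern.
few_shot_prompt = (
    "Classify the sentiment of each review.\n\n"
    "Review: 'Setup took five minutes and it just works.'\n"
    "Sentiment: positive\n\n"
    "Review: 'Support never answered my emails.'\n"
    "Sentiment: negative\n\n"
    "Review: 'It does what the box says, nothing more.'\n"
    "Sentiment: neutral\n\n"
    "Review: 'The battery easily lasts a full day, but the screen "
    "scratches far too quickly.'\n"
    "Sentiment:"
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[{"role": "user", "content": few_shot_prompt}],
)
print(response["choices"][0]["message"]["content"])
```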


System Prompting

System Prompting is a very advanced yet unique extension of Few-Shot Prompting. Rather than feeding the model context for a solution across multiple prompts, the way LLMs are trained means they can be reprogrammed to roleplay as a particular character, profession, etc., which automatically gives the LLM the vast amount of context that comes with being a professional in that field. Let us see an example of the same:
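A minimal sketch of the same idea, again using a generic chat-completion client as an assumption: the roleplay instruction lives in the system message, and the user question stays short because the persona already carries the domain context.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder key

# The system message reprograms the model's persona; the user turn can stay
# short because the persona already implies the relevant domain context.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[
        {
            "role": "system",
            "content": (
                "You are a technical content writer with expertise in "
                "AI/ML-related content."
            ),
        },
        {
            "role": "user",
            "content": "Explain hyperparameters in two short paragraphs.",
        },
    ],
)
print(response["choices"][0]["message"]["content"])
```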

As you can see in the example, no intricate context had to be given for the model to understand which "hyperparameters" we are talking about; simply making the model behave like a technical content writer with expertise in AI/ML-related content made it localize and choose the right set of information to work from.


Chain-of-Thought Prompting

Chain-of-Thought is a prompting technique introduced in Chain-of-Thought Prompting Elicits Reasoning in Large Language Models by Wei et al. (2022). Chain-of-Thought prompting improves the reasoning ability of the model and can be considered yet another iteration of Few-Shot Prompting. The technique targets tasks that demand arithmetic, common-sense, and symbolic reasoning: it creates a chain of thought with the model so that it remembers the prior discussion and tackles a complex problem step by step.

Fig. Brainstorming Process

Fig. Building upon the Prompt

Fig. Using the Brainstorming and Built Up Thought to answer a third related question
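Outside the chat interface, the core of the technique looks like this: the prompt includes a worked example whose answer spells out its intermediate reasoning, so the model imitates that step-by-step style on the new question. The client, model name, and the arithmetic example are illustrative assumptions.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder key

# The exemplar answer spells out its reasoning, nudging the model to reason
# step by step on the new question instead of jumping straight to an answer.
cot_prompt = (
    "Q: A cafe had 23 muffins. It baked 4 more trays of 6 muffins each "
    "and then sold 20. How many muffins are left?\n"
    "A: The cafe baked 4 * 6 = 24 muffins, so it had 23 + 24 = 47. "
    "After selling 20, 47 - 20 = 27 muffins are left. The answer is 27.\n\n"
    "Q: A library had 120 books. It received 3 boxes of 15 books each "
    "and then lent out 48. How many books are in the library now?\n"
    "A:"
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[{"role": "user", "content": cot_prompt}],
)
print(response["choices"][0]["message"]["content"])
```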


Tree-of-Thought Prompting

Chain-of-Thought does wonders for prompts that, although complex, require relatively little evaluation and expansion to arrive at a good solution. However, when the possibilities for a good answer are numerous and the user wants them ranked while keeping every aspect of the prompt in mind, Tree-of-Thought is the ideal choice. It was introduced by Yao et al. (2023) in Tree of Thoughts: Deliberate Problem Solving with Large Language Models.

This method allows the LLM to make deliberate decisions by considering various paths and outcomes and self-evaluating them to decide the next course of action. The approach has three overall phases that shape the solution and allow the user to course-correct toward the required answer: the Brainstorming Phase, where the LLM comes up with different candidate ideas; the Evaluation Phase, where the user provides context and asks the LLM to list the pros and cons of each possible solution; and the Decision Phase, where the LLM attaches a numeric metric and weight to each candidate and decides on a final ranking to arrive at the best solution.

Let us see some snapshots of how a Tree-of-Thought Prompting conversation goes. You can also check out the conversation where we have deeply explored this technique at the following link: Tree-of-Thought Prompting

Fig. Brainstorming Process Part 1

Fig. Brainstorming Process Part 2

Fig. Evaluation Phase

Fig. Decision Phase
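Outside the chat window, the same three-phase flow can be scripted as a multi-turn conversation that keeps the history, so each phase builds on the last. The client, model name, and the marketing-strategy task below are illustrative assumptions; in ChatNBX the equivalent turns would simply be typed into the chat.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder key

# The full history is resent on every turn so each phase builds on the last.
history = []

def ask(content):
    history.append({"role": "user", "content": content})
    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # illustrative model name
        messages=history,
    )["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    return reply

# 1. Brainstorming Phase: generate several distinct candidate ideas.
print(ask("Brainstorm three distinct marketing strategies for a new "
          "open-source ML library. Keep each to one sentence."))

# 2. Evaluation Phase: weigh every candidate against the user's constraints.
print(ask("Our budget is small and the team is two people. List the pros "
          "and cons of each strategy under these constraints."))

# 3. Decision Phase: score the candidates and commit to a final ranking.
print(ask("Score each strategy from 1 to 10 on cost, reach, and effort, "
          "then rank them and recommend the best one."))
```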

Conclusion

We explored several advanced techniques for prompt engineering when interacting with LLM-based chatbots. From zero-shot prompting to chain-of-thought and tree-of-thought prompting, these methods can significantly enhance the conversational experience and improve the accuracy of the chatbot's responses.

By leveraging the power of these techniques, users can engage in more effective and efficient dialogues with chatbots, leading to better outcomes and greater satisfaction. Whether you're looking to jumpstart a conversation or push the boundaries of what's possible with LLM-based chatbots, these prompt engineering techniques will be valuable additions to your toolkit.

As LLM technology continues to evolve and improve, staying up-to-date on the latest advancements in prompt engineering is essential. By doing so, you'll be well-positioned to take full advantage of all that these chatbots offer and discover new and innovative ways to communicate with them.

Written By

Aryan Kargwal

Data Evangelist

Copyright © 2023 NimbleBox, Inc.