Demystifying Terms Surrounding Large Language Models

Aug 26, 2023

5 min read

With the flood of information the Natural Language Processing and Artificial Intelligence sphere has produced since the dawn of Large Language Models, a newbie in the field may find themselves in a pickle, seeing all these terms mentioned alongside their choice of models and struggling to understand why, or to what extent, they matter.

In our previous blogs, like ChatGPT Alternatives for AI Research Purposes and Rethinking Prompt Engineering in AI Conversation, we have referred to terms like parameters, context, attention, etc. This blog is intended to explain what each of these terms entails for your LLM of choice and how relevant each one is to the model’s performance.


When introduced to the world of Large Language Models, the first information that can be overwhelming and dictate the choice of model is the number of parameters. Some of the most successful and well-known models have hundreds of billions of parameters, but what do these parameters entail?

Image Courtesy - Papers With Code

LLMs are trained on enormous amounts of data using a self-supervised learning objective: predicting the next token, given the context. The parameters are the internal weights the model adjusts to get better at this prediction. During training, this adjustment is carried out an enormous number of times and, over weeks, the model reaches an acceptable level of accuracy.

Some of the more interesting models with enormous numbers of parameters are LLaMA 2 (70 billion parameters), PaLM 2 (340 billion parameters), and BloombergGPT (50 billion parameters). However, parameter count alone does not dictate an LLM's performance, as seen with smaller models like LLaMA 2 competing head-on with GPT-4.
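To get a rough sense of what "number of parameters" actually counts, here is a minimal sketch that tallies the weights and biases of a toy fully connected network; real LLMs apply the same bookkeeping to transformer layers at a vastly larger scale.

```python
def mlp_param_count(layer_sizes):
    """Count weights + biases in a fully connected network.

    Each layer has fan_in * fan_out weights plus fan_out biases,
    i.e. (fan_in + 1) * fan_out parameters.
    """
    return sum((fan_in + 1) * fan_out
               for fan_in, fan_out in zip(layer_sizes, layer_sizes[1:]))

# A toy 3-layer network; billion-parameter LLMs are this idea at scale.
print(mlp_param_count([512, 2048, 512]))  # -> 2099712
```

Scaling the layer sizes up (and stacking many such blocks) is how models quickly reach billions of parameters.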


Context in large language models refers to the surrounding words, sentences, or paragraphs that provide meaning and relevance to a particular word or phrase. It helps the model understand and generate more accurate and coherent responses based on input.

Image Courtesy - tryamigo.com

Large language models, like GPT-3, are trained on vast amounts of text data, allowing them to learn patterns, relationships, and language structures. However, they still rely on context to infer meaning and generate contextually appropriate responses.

For example, when presented with the phrase "apple," a language model might generate different responses depending on the context. If the previous sentence mentioned fruits, the model may generate responses about apples as fruit. Conversely, if the context concerns technology, the model may generate responses related to Apple Inc.

Context is crucial for language models to understand and generate more accurate and contextually relevant responses. By considering the surrounding context, language models can better capture the nuances and meaning of the given input, allowing for more coherent and context-aware interactions. Context length also, to some extent, dictates what various companies charge you, since you are billed according to the number of tokens you process.
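To make per-token billing concrete, here is a small sketch; the prices below are hypothetical placeholders, not any provider's actual rates.

```python
def estimate_cost(prompt_tokens, completion_tokens,
                  price_per_1k_in, price_per_1k_out):
    """Per-token billing: providers typically price per 1,000 tokens,
    often with separate rates for input (prompt) and output (completion)."""
    return (prompt_tokens / 1000) * price_per_1k_in \
         + (completion_tokens / 1000) * price_per_1k_out

# 1,500 prompt tokens + 500 completion tokens at made-up rates:
print(estimate_cost(1500, 500, 0.03, 0.06))  # -> 0.075
```

The takeaway: longer contexts mean more tokens in every request, which translates directly into higher bills.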

To give some context for "context", the beefiest and strongest version of GPT-4 has a context window of 32,000 tokens, which roughly translates to about 24,000 words. However, one can always break the prompt down into segments, or use one of the techniques we have talked about, to get the best out of a low-context LLM.
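A minimal sketch of breaking a long prompt into segments that fit a context window, using whitespace-separated words as a crude stand-in for a real tokenizer:

```python
def chunk_prompt(text, max_tokens=100):
    """Naive splitter: treats whitespace words as tokens.

    Real tokenizers (BPE, SentencePiece, etc.) split text differently,
    so in practice you would count tokens with the model's own tokenizer.
    """
    words = text.split()
    return [" ".join(words[i:i + max_tokens])
            for i in range(0, len(words), max_tokens)]

chunks = chunk_prompt("word " * 250, max_tokens=100)
print(len(chunks))  # -> 3 (two full chunks of 100, one of 50)
```

Each chunk can then be sent in its own request, with earlier results summarized and carried forward as context.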


ICL stands for "in-context learning." It refers to a process where language models are provided with a few examples of input-label pairs before performing a task on unseen evaluation examples.

Image Courtesy - nextbigfuture.com

ICL allows the models to learn and generalize from a handful of specific examples by providing context and guidance. By incorporating in-context learning, language models can improve their performance on various tasks, including ones they were never explicitly fine-tuned for. This approach has contributed to significant advances in language models, making them more effective and capable of understanding and generating human-like text.

Chain-of-Thought and Tree-of-Thought prompting are two examples of techniques built on in-context learning.
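At its simplest, an in-context-learning prompt is just a few input-label pairs concatenated ahead of the query; a minimal sketch:

```python
def few_shot_prompt(examples, query):
    """Build an in-context-learning prompt from input/label pairs.

    The model infers the task (here, sentiment labeling) purely from
    the demonstrations; no weights are updated.
    """
    lines = [f"Input: {x}\nLabel: {y}" for x, y in examples]
    lines.append(f"Input: {query}\nLabel:")  # model completes this line
    return "\n\n".join(lines)

demo = few_shot_prompt(
    [("great movie!", "positive"), ("terrible plot", "negative")],
    "what a waste of time",
)
print(demo)
```

Sending `demo` to an LLM would typically yield "negative", even though the model was never told what task it is performing.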


In large language models, attention refers to a mechanism that allows the model to assign importance or weights to different words or tokens within an input sequence. Attention mechanisms help the model understand the relationships and dependencies between phrases, enabling it to generate coherent and contextually relevant text.

Image Courtesy - arxiv.org

In the specific case of GPT-3, the attention mechanism plays a crucial role in the model's performance. It allows the model to focus on relevant parts of the input sequence while generating responses. By assigning different weights to different words, the model can prioritize important information and ensure that the generated text is attentive to the context provided.

The attention mechanism in GPT-3 enhances the model's ability to capture long-range dependencies in the input text. It allows the model to consider the context and relationships between words farther apart, improving its understanding of the text’s overall meaning.

Attention mechanisms are fundamental components of large language models like LLaMA, GPT-3, and Bard. They facilitate the model's ability to understand and generate text by assigning importance to different words and considering their dependencies. This helps the model produce more coherent and contextually relevant responses.
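The core computation behind these importance weights is scaled dot-product attention. Here is a minimal NumPy sketch, with Q, K, and V as random stand-ins for the learned projections a real model would compute from its input tokens:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Return the attention output and the per-token weight matrix."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    # Softmax turns scores into importance weights that sum to 1 per token.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights  # output is a weighted mix of the values

# Three tokens with embedding size 4 (random stand-ins).
rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(3, 4))
output, weights = scaled_dot_product_attention(Q, K, V)
print(weights.sum(axis=-1))  # each row sums to 1
```

Each row of `weights` says how much each token "attends to" every other token; stacking many such heads and layers is what lets the model track long-range dependencies.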


"RLHF" stands for "Reinforcement Learning from Human Feedback". RLHF is an approach that combines reinforcement learning (RL) techniques with human feedback to improve the performance of large language models (LLMs) in generating text.

Image Courtesy - huggingface.co

In RLHF, the LLM acts as the reinforcement learning agent, and it learns to generate optimal text output by receiving feedback from humans. This feedback comes in the form of rewards or evaluations provided by human annotators, typically distilled into a separate reward model. The LLM's action space refers to the possible language outputs it can generate.

By framing language generation as a reinforcement learning problem and incorporating human feedback, RLHF aims to improve the quality, coherence, and relevance of text generated by large language models.
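As a purely conceptual sketch of the reward signal (not the actual RL training loop, which typically uses an algorithm like PPO), the toy below uses a hand-written scoring function as a stand-in for a learned reward model trained on human preference comparisons:

```python
def rank_candidates(candidates, reward_model):
    """Score model outputs with a (stand-in) reward model, best first.

    In real RLHF the reward model is a neural network trained on human
    preference rankings, and its scores drive updates to the LLM's weights.
    """
    return sorted(candidates, key=reward_model, reverse=True)

# Toy reward: prefer short, polite answers -- a hypothetical stand-in.
reward = lambda text: -len(text) + (20 if "please" in text else 0)

outputs = [
    "go away",
    "please wait a moment",
    "I cannot help with that request at all",
]
best = rank_candidates(outputs, reward)[0]
print(best)  # -> "please wait a moment"
```

In training, the reward scores would be fed back as the RL signal, nudging the model toward outputs humans prefer rather than merely ranking finished candidates.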


In conclusion, the rapid development of large language models has brought about the emergence of various terms and concepts that are fundamental to understanding these models. Parameters, Context, ICL, RLHF (Reinforcement Learning from Human Feedback), and Attention are essential components in the large language model landscape.

As large language models continue to advance, it is essential to consider the limitations and potential risks associated with RLHF. Understanding these terms and concepts provides a foundation for exploring the complexities and advancements in large language models, paving the way for safer and more aligned AI systems.

Written By

Aryan Kargwal

Data Evangelist

Copyright © 2023 NimbleBox, Inc.