03/04/2024
Tags: AI, IA, LLM, RAG
All monitoring notes: [[+ Sommaire veille]] Collected on: [[2024-04-03-mercredi]]
An excellent and detailed presentation of every aspect of a RAG application (including the LLM and its training)
Learn Large Language Models (LLM) through the lens of a Retrieval Augmented Generation (RAG) Application.
· What Are Large Language Models?
· Language Modeling (LM)
· Foundation Models and LLMs
· Architecture of LLMs
· Pre-Training
· Data Parallel Training Techniques
∘ Distributed Data Parallel (DDP)
∘ Fully Sharded Data Parallel (FSDP)
· Fine-Tuning
∘ PEFT
∘ Transfer Learning
∘ Adapters
∘ LoRA — Low-Rank Adaptation
∘ QLoRA
∘ IA3
∘ P-Tuning
∘ Prefix Tuning
∘ Prompt Tuning (Not Prompt Engineering)
∘ LoRA vs Prompt Tuning
∘ LoRA and PEFT in comparison to full Fine Tuning
· LLM Inference
· Prompt Engineering
∘ Few-Shot Prompting
∘ Chain-of-Thought (CoT) Prompting
∘ PAL (Program-Aided Language Models)
∘ ReAct Prompting
· Model Optimization Techniques
∘ Quantization
∘ Distillation
∘ Pruning
· Wrap Up!
Greetings!
So far, we have learned how raw data is transformed and stored in vector databases, and how relevant chunks are retrieved from the vector database based on the user prompt. This completes the retrieval part of the application.
Next, we will focus on the Generation part of the RAG Application. So for text generation, we will be using Large Language Models.
![[attachments/86a4464b22b093594696d0de42ba2bdd_MD5.png]]
Image by Author
Large Language Models (LLMs) are very large deep learning models that are pre-trained on vast amounts of data. The underlying transformer is a set of neural networks that consist of an encoder and a decoder with self-attention capabilities. The encoder and decoder extract meanings from a sequence of text and understand the relationships between words and phrases in it.
![[attachments/72a704a3959130b4f482fbc19bb881da_MD5.png]]
![[attachments/4a9f59e612d58c5756ba9ee26aa8b8c0_MD5.png]]
Timeline of some of the most representative LLM frameworks (so far).
The transformer neural network architecture allows the use of very large models, often with hundreds of billions of parameters. Such large-scale models can ingest massive amounts of data, often from the internet, but also from sources such as Common Crawl, which comprises more than 50 billion web pages, and Wikipedia, which has approximately 57 million pages.
One of the key instruments of NLP applications is language modeling.
![[attachments/c3dd783a652d8cdf55e6a1c6da5a2b88_MD5.png]]
This figure shows different components of LLMs.
Can the processes of language and communication be reduced to computation?
Language models generate probabilities by learning from one or more text corpora. A text corpus is a language resource consisting of a large and structured set of texts in one or more languages, and it is often annotated.
![[attachments/1d35d6353fed6c2eb76ecf89b4412c2c_MD5.png]]
One of the earliest approaches for building a language model is based on the n-gram. An n-gram is a contiguous sequence of n items from a given text sample. Here, the model assumes that the probability of the next word in a sequence depends only on a fixed-size window of previous words:
![[attachments/0a98a15f7e596d746746d665395d6c69_MD5.jpg]]
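Written out (a standard formulation, not taken from the figure above), the n-gram assumption and the maximum-likelihood estimate for a bigram (n = 2) model are:

$$
P(w_t \mid w_1, \ldots, w_{t-1}) \approx P(w_t \mid w_{t-n+1}, \ldots, w_{t-1}),
\qquad
P(w_t \mid w_{t-1}) = \frac{\operatorname{count}(w_{t-1}, w_t)}{\operatorname{count}(w_{t-1})}
$$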
However, n-gram language models have largely been superseded by neural language models, which are based on neural networks: computing systems inspired by biological neural networks. These models make use of continuous representations or embeddings of words to make their predictions:
![[attachments/8c250dc46b3647d1db63445906bb8533_MD5.jpg]]
Neural networks represent words in a distributed way, as non-linear combinations of weights, and can therefore avoid the curse of dimensionality in language modeling. Several neural network architectures have been proposed for language modeling.
This is quite a departure from the earlier approach in NLP applications, where specialized language models were trained to perform specific tasks. On the contrary, researchers have observed many emergent abilities in the LLMs, abilities that they were never trained for.
For instance, LLMs have been shown to perform multi-step arithmetic, unscramble a word’s letters, and identify offensive content in spoken languages. Recently, ChatGPT, a popular chatbot built on top of OpenAI’s GPT family of LLMs, has cleared professional exams like the US Medical Licensing Exam!
A foundation model generally refers to any model trained on broad data that can be adapted to a wide range of downstream tasks. These models are typically created using deep neural networks and trained with self-supervised learning on large amounts of unlabeled data.
![[attachments/34532b13b8815a6dde01fbd173ee40c6_MD5.png]]
LLMs are typically trained on language-related data like text, whereas a foundation model may also be trained on multimodal data, a mix of text, images, audio, etc. More importantly, a foundation model is intended to serve as the basis or foundation for more specific tasks:
![[attachments/5b11fd632998765a4e7df5012959237e_MD5.jpg]]
Foundation models are typically fine-tuned with further training for various downstream cognitive tasks. Fine-tuning refers to the process of taking a pre-trained language model and training it for a different but related task using specific data. The process is also known as transfer learning.
Most of the early LLMs were created using RNN models with LSTMs and GRUs. However, they faced challenges, mainly in performing NLP tasks at massive scales. But, this is precisely where LLMs were expected to perform. This led to the creation of Transformers!
Earlier Architecture of LLMs
When it started, LLMs were largely created using self-supervised learning algorithms. Self-supervised learning refers to the processing of unlabeled data to obtain useful representations that can help with downstream learning tasks.
Quite often, self-supervised learning algorithms use a model based on an artificial neural network (ANN). We can create an ANN using several architectures, but the most widely used architecture for early LLMs was the recurrent neural network (RNN).
![[attachments/7118d6b3190db110d8476abe6c2899ca_MD5.jpg]]
Now, RNNs can use their internal state to process variable-length sequences of inputs. An RNN has both long-term and short-term memory. There are variants of the RNN such as Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRU).
Problems with LSTMs & GRUs
An RNN that uses LSTM units is very slow to train. Moreover, we need to feed the data sequentially or serially for such architectures. This does not allow us to parallelize and use the available processor cores.
Alternatively, an RNN model with GRU trains faster but performs poorly on larger datasets. Nevertheless, for a long time, LSTMs and GRUs remained the preferred choice for building complex NLP systems. However, such models also suffer from the vanishing gradient problem:
![[attachments/8f843e82a98bf0bbd8cd495cb7424f30_MD5.jpg]]
Attention Mechanism
Some of the problems with RNNs were partly addressed by adding the attention mechanism to their architecture. In recurrent architectures like LSTM, the amount of information that can be propagated is limited, and the window of retained information is shorter.
However, with the attention mechanism, this information window can be significantly increased. Attention is a technique to enhance some parts of the input data while diminishing other parts. The motivation behind this is that the network should devote more focus to the important parts of the data:
![[attachments/21901ad6cd1ffa7b99cc29d35edeec69_MD5.jpg]]
There is a subtle difference between attention and self-attention, but their motivation remains the same. While the attention mechanism refers to the ability to attend to different parts of another sequence, self-attention refers to the ability to attend to different parts of the current sequence.
Self-attention allows the model to access information from any input sequence element. In NLP applications, this provides relevant information about far-away tokens. Hence, the model can capture dependencies across the entire sequence without requiring fixed or sliding windows.
Arrival of Transformers
The RNN models with attention mechanisms saw significant improvement in their performance. However, recurrent models are, by their nature, difficult to scale. But, the self-attention mechanism soon proved to be quite powerful, so much so that it did not even require recurrent sequential processing.
The introduction of transformers by the Google Brain team in 2017 is perhaps one of the most important inflection points in the history of LLMs. A transformer is a deep learning model that adopts the self-attention mechanism and processes the entire input all at once:
![[attachments/7f7223c7e686596264762eb1a25682dc_MD5.jpg]]
As a significant change to the earlier RNN-based models, transformers do not have a recurrent structure. With sufficient training data, the attention mechanism in the transformer architecture alone can match the performance of an RNN model with attention.
Another significant advantage of the transformer model is that it is more parallelizable and requires significantly less training time. This is exactly the sweet spot we need to build LLMs on a large corpus of text-based data with the available resources.
Encoder-Decoder Architecture
Many ANN-based models for natural language processing are built using encoder-decoder architecture. For instance, seq2seq is a family of algorithms originally developed by Google. It turns one sequence into another sequence by using RNN with LSTM or GRU.
The original transformer model also used the encoder-decoder architecture. The encoder consists of encoding layers that process the input iteratively, one layer after another. The decoder consists of decoding layers that do the same thing to the encoder’s output:
![[attachments/709283868a0a4c8ded6f0ca1c423740a_MD5.png]]
The Transformer — High-Level Architecture
The function of each encoder layer is to generate encodings that contain information about which parts of the input are relevant to each other. The output encodings are then passed to the next encoder as its input. Each encoder consists of a self-attention mechanism and a feed-forward neural network.
Further, each decoder layer takes all the encodings and uses their incorporated contextual information to generate an output sequence. Like encoders, each decoder consists of a self-attention mechanism, an attention mechanism over the encodings, and a feed-forward neural network.
![[attachments/d18bc393d9efb6d92c0c89c1d7775281_MD5.png]]
During this phase, the model is pre-trained on a large amount of unstructured textual datasets in a self-supervised manner. The main challenge in pretraining is computational cost.
GPU RAM required to store a 1B-parameter model:

- 1 parameter → 4 bytes (32-bit float)
- 1B parameters → 4 × 10⁹ bytes = 4 GB

GPU RAM required for a 1B-parameter model = 4 GB @ 32-bit full precision

Let’s now estimate the memory required to train that 1B-parameter model:

- Model parameters → 4 bytes per parameter
- Gradients → 4 bytes per parameter
- ADAM optimizer (2 states) → 8 bytes per parameter
- Activations and temporary memory (variable size) → ~8 bytes per parameter (high-end estimate)

That is, 4 bytes for each parameter plus roughly 20 extra bytes per parameter. Once realistic activation memory is included (it grows with batch size and sequence length), the memory needed to train ends up at roughly 20x the memory needed just to store the model.

Memory needed to store a 1B-parameter model = 4 GB @ 32-bit full precision
Memory needed to train a 1B-parameter model ≈ 80 GB @ 32-bit full precision
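As a quick sketch of this arithmetic (the per-parameter byte counts are the rough estimates above; real activation memory depends on batch size and sequence length, so treat the result as a lower bound):

```python
# Back-of-the-envelope GPU memory estimate at 32-bit full precision.
def memory_gb(n_params: float) -> tuple[float, float]:
    store = n_params * 4                  # weights only: 4 bytes per fp32 parameter
    train = n_params * (4 + 4 + 8 + 8)    # + gradients, Adam states, activations (low estimate)
    return store / 1e9, train / 1e9

print(memory_gb(1e9))  # (4.0, 24.0): 4 GB to store, at least ~24 GB to train (often far more)
```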
Distributed Data Parallel (DDP) requires that the model weights and all of the additional parameters, gradients, and optimizer states needed for training fit on a single GPU. If the model is too big, model sharding should be used instead.
![[attachments/1fac56499b3512fa2d047e51f5ce7d49_MD5.png]]
Fully Sharded Data Parallel (FSDP) reduces memory by distributing (sharding) the model parameters, gradients, and optimizer states across GPUs.
![[attachments/76131ad3b42f8eb364af829604a51b30_MD5.png]]
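A minimal sketch of how the two strategies are selected in PyTorch, assuming a torchrun launch and a stand-in model (the real model, data loading and training loop are omitted):

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

dist.init_process_group("nccl")              # torchrun sets RANK / WORLD_SIZE / LOCAL_RANK
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

model = torch.nn.Linear(4096, 4096).cuda()   # stand-in for a real model
use_fsdp = False                             # pick based on whether a full replica fits on one GPU

if not use_fsdp:
    # DDP: every GPU keeps a full replica of weights, gradients and optimizer states.
    model = DDP(model, device_ids=[local_rank])
else:
    # FSDP: parameters, gradients and optimizer states are sharded across GPUs,
    # so a model that is too big for a single GPU can still be trained.
    model = FSDP(model)
```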
Fine-tuning helps us get more out of pre-trained large language models (LLMs) by adjusting the model weights to better fit a specific task or domain. This means you can get higher quality results than plain prompt engineering at a fraction of the cost and latency.
![[attachments/1f8feedc3380b37757b57e069aafce97_MD5.png]]
Why fine-tune LLM?
Compared to prompting, fine-tuning is often far more effective and efficient for steering an LLM’s behavior. By training the model on a set of examples, you’re able to shorten your well-crafted prompt and save precious input tokens without sacrificing quality. You can also often use a much smaller model. That, in turn, translates to reduced latency and inference costs.
For example, a fine-tuned Llama 7B model can be dramatically more cost-effective (around 50 times) on a per-token basis compared to an off-the-shelf model like GPT-3.5, with comparable performance.
How Fine-Tuning Works?
As mentioned, fine-tuning is tweaking an already-trained model for some other task. The way this works is by taking the weights of the original model and adjusting them to fit a new task.
Models when trained learn to do some specific task, for example, GPT-3 has been trained on a massive dataset and as a result, it has learned to generate stories, poems, songs, letters, and a lot of other things. One can take this ability of GPT-3 and fine-tune it on a specific task like generating answers to customer queries in a specific manner.
There are different ways and techniques to fine-tune a model, the most popular being transfer learning. Transfer learning comes out of the computer vision world, it is the process of freezing the weights of the initial layers of a network and only updating the weights of the later layers. This is because the lower layers, the layers closer to the input, are responsible for learning the general features of the training dataset. And the upper layers, closer to the output, learn more specific information which is directly tied to generating the correct output.
Here is a quick visualization of how fine-tuning works:
![[attachments/4a1472038c27c1be2a5138169cba4490_MD5.gif]]
PEFT, Parameter-Efficient Fine-Tuning, is a set of techniques for fine-tuning a large model in the most compute- and time-efficient way possible, without losing any of the performance we would expect from full fine-tuning. This matters because, with models growing bigger and bigger, like BLOOM with its whopping 176 billion parameters, it is almost impossible to fine-tune them without spending tens of thousands of dollars. Yet it is sometimes almost necessary to use such big models for better performance. This is where PEFT comes in: it helps you work around the problems that come with fine-tuning models of this size.
Here are some PEFT techniques:
![[attachments/f40c33b10a92487ab5c40f4f30cd947c_MD5.png]]
Transfer learning is when we take some of the learned parameters of a model and use them for some other task. This sounds similar to fine-tuning but is different. In fine-tuning, we re-adjust all the parameters of the model, or freeze some of the weights and adjust the rest. In transfer learning, we take some of the learned parameters from a model and use them in another network. This gives us more flexibility in what we can do. For example, we cannot change the architecture of the model when fine-tuning, which limits us in many ways; with transfer learning, we use only a part of the trained model, which we can then attach to any other model with any architecture.
Transfer learning is often seen in NLP tasks with LLMs where people use the encoder part of the transformer network from a pre-trained model like T5 and train the later layers.
Adapters were one of the first parameter-efficient fine-tuning techniques released. In the adapter paper, the authors showed that we can add extra layers to the pre-existing transformer architecture and fine-tune only those instead of the whole model. They showed that this technique results in performance similar to complete fine-tuning.
![[attachments/40455a3d289500038d50b49c747601c5_MD5.png]]
On the left, there is the modified transformer architecture with added adapter layers. We can see adapter layers are added after the attention stack and the feed-forward stack. And on the right, we can see the architecture of the adapter layer itself. The adapter layer comprises a bottleneck architecture, it takes the input and narrows it down to a smaller dimension representation and then passes it through a non-linear activation function, and then scales it back up to the dimension of the input. This makes sure that the next layer in the transformer stack will be able to receive the generated output from the adapter layer.
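A minimal sketch of such a bottleneck adapter block in PyTorch (dimensions and activation are illustrative, not the paper's exact configuration):

```python
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, non-linearity, up-project, plus a residual connection."""
    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)   # narrow the representation
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck_dim, hidden_dim)     # scale back to the input dimension

    def forward(self, x):
        # The skip connection keeps the output compatible with the next transformer layer.
        return x + self.up(self.act(self.down(x)))
```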
In the paper, the authors show that this method of fine-tuning is comparable to complete fine-tuning while consuming much less compute and training time: they came within 0.4% of full fine-tuning performance on the GLUE benchmark while adding only 3.6% of the parameters.
![[attachments/1432374ea44d956b3e1ac5142e8aa543_MD5.png]]
LoRA is a similar strategy to Adapter layers but it aims to further reduce the number of trainable parameters. It takes a more mathematically rigorous approach. LoRA works by modifying how the updatable parameters are trained and updated in the neural network.
Let’s explain this mathematically. The weight matrices of a pre-trained neural network are full rank, meaning each weight carries unique information that can’t be made by combining other weights. But in the LoRA paper, the authors showed that when pre-trained language models are adapted to a new task, the weight updates have a lower “intrinsic dimension”. In other words, the update can be represented by a smaller matrix, i.e. it has a lower rank. This means that during backpropagation the weight-update matrix is low rank, as most of the necessary information has already been captured during pre-training and only task-specific adjustments are made during fine-tuning.
A much simpler explanation is that during fine-tuning only a very small fraction of the weights need to change substantially, as most of the learning is done during the pre-training phase of the neural network. LoRA uses this observation to reduce the number of trainable parameters.
![[attachments/50d2056112ec657d48bff614f66b99b4_MD5.png]]
Using GPT-3 175B as an example, the LoRA research team demonstrated that a very low rank (i.e., r in Figure 1 can be one or two) suffices even when the full rank (i.e., d) is as high as 12,288, making LoRA both storage- and compute-efficient.
Figure 2 shows that the product of matrices A [d × r] and B [r × k] has shape [d × k], while we are free to vary r. A very small r means fewer parameters to tune. While that shortens training time, it can also cause information loss and decrease model performance as r gets smaller. However, with LoRA, even at low ranks, performance was as good as or better than fully trained models.
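Written out in the notation of the figures above (A of shape d × r, B of shape r × k; the original paper writes the two factors in the opposite order), the LoRA forward pass is:

$$
h = W_0 x + \Delta W\, x = W_0 x + A B\, x, \qquad A \in \mathbb{R}^{d \times r},\; B \in \mathbb{R}^{r \times k},\; r \ll \min(d, k)
$$

so the update contributes only $r(d + k)$ trainable parameters instead of the $d \cdot k$ parameters of a full update, while $W_0$ stays frozen.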
LoRA Fine-tuning with HuggingFace
To implement LoRA finetuning with HuggingFace, you need to use the PEFT library to inject the LoRA adapters into the model and use them as the update matrices.
```python
from transformers import AutoModelForCausalLM
from peft import get_peft_model, LoraConfig, TaskType

model_name_or_path = "your-base-model"  # placeholder: any causal LM checkpoint on the Hub

model = AutoModelForCausalLM.from_pretrained(
    model_name_or_path, device_map="auto", trust_remote_code=True
)  # load the model

peft_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    inference_mode=False,
    r=32,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["query_key_value"],  # optional, you can target specific layers using this
)  # create LoRA config for the finetuning

model = get_peft_model(model, peft_config)  # create a model ready for LoRA finetuning
model.print_trainable_parameters()
```
Once this is done, you can train the model as you normally would. But this time it will take much less time and compute than full fine-tuning normally does.
Efficiency of LoRA
The authors show in the paper that LoRA can outperform full fine-tuning with only 2% of the parameters being trainable.
![[attachments/094893b4b3a433a2b26b36b3e9e58da5_MD5.png]]
As for the number of parameters it trains, we can largely control that with the rank parameter r. For example, say the weight-update matrix has 100,000 parameters, with dimensions 200 × 500. It can be decomposed into two lower-rank matrices: A of size 200 × 3 and B of size 3 × 500. This gives us 200 × 3 + 3 × 500 = 2,100 trainable parameters, only 2.1% of the total. And this can be reduced further, since we can decide to apply LoRA only to specific layers.
As the number of parameters trained and applied are MUCH smaller than the actual model, the files can be as small as 8MB. This makes loading, applying, and transferring the learned models much easier and faster.
You can read the LoRA paper if you want to learn more and do a deeper dive into the topic.
LoRA in Stable Diffusion
One of the most interesting use cases of LoRA can be shown in image generation applications. Images have an inherent style that can be visually seen. Instead of training massive models to get specific styles of images out of models, users can now only train LoRA weights and use them with techniques like Dreambooth to achieve really good quality images with a lot of customizability.
LoRA weights can also be combined with other LoRA weights and be used in a weighted combination to generate images that carry multiple styles. You can find a ton of LoRA adapters online and load them into your models on CivitAI.
![[attachments/10b4844b2f0d5573e484549fb7e211e7_MD5.png]]
![[attachments/b78564009adb36f3326afcb4666b62fa_MD5.png]]
How is it different from LoRA
QLoRA keeps the LoRA recipe of training small low-rank adapters, but it additionally stores the frozen base model in 4-bit precision instead of 16- or 32-bit, which cuts the memory needed for fine-tuning dramatically.
Working of QLoRA
During training, the 4-bit base weights are dequantized on the fly for each forward and backward pass, and gradients flow only into the higher-precision LoRA adapters. Two further tricks, double quantization and paged optimizers, reduce memory usage even more. The key ingredient is the 4-bit NormalFloat (NF4) data type.
4-bit Normal Float (NF4)
1. Normalization: The weights of the model are first normalized to have zero mean and unit variance. This ensures that the weights are distributed around zero and fall within a certain range.
2. Quantization: The normalized weights are then quantized to 4 bits. This involves mapping the original high-precision weights to a small set of low-precision values. In the case of NF4, the quantization levels are chosen as quantiles of a normal distribution, so that each level covers an equal share of the normalized weights, rather than being evenly spaced.
3. Dequantization: During the forward pass and backpropagation, the quantized weights are dequantized back to a higher-precision compute data type. This is done by mapping the 4-bit quantized values back to their original range. The dequantized weights are used in the computations, but in memory they remain stored in their 4-bit quantized form.
![[attachments/e5f6d225d83b8c5e38c926cac1371754_MD5.png]]
There are “buckets” or “bins” of data where the data is quantized. Both the numbers 2 and 3 fall into the same quantile, 2. This quantization process allows you to use fewer numbers by “rounding off” to the nearest quantile.
Dequantization
![[attachments/312cfb680a97774e05e368cdd118e8e1_MD5.png]]
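To make the round trip concrete, here is a deliberately simplified uniform (absmax) quantizer in plain NumPy. It is not the actual NF4 quantile scheme, but it shows how weights are stored in a handful of levels and only approximately reconstructed on dequantization:

```python
import numpy as np

def quantize_absmax(weights: np.ndarray, bits: int = 4):
    levels = 2 ** (bits - 1) - 1                     # e.g. 7 levels on each side of zero for 4-bit
    scale = np.abs(weights).max() / levels
    q = np.round(weights / scale).astype(np.int8)    # stored in low precision
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale              # approximate reconstruction for compute

w = np.array([0.12, -0.91, 0.35, 0.33], dtype=np.float32)
q, s = quantize_absmax(w)
print(q, dequantize(q, s))   # nearby values (0.35 and 0.33) collapse onto the same level
```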
Paged Optimizers
![[attachments/572907cf1d8aec754c2c3aa4bfe11d0f_MD5.png]]
QLoRA finetuning with HuggingFace
To do QLoRA finetuning with HuggingFace, you need to install both the BitsandBytes library and the PEFT library. The BitsandBytes library takes care of the 4-bit quantization and the whole low-precision storage and high-precision compute part. The PEFT library will be used for the LoRA finetuning part.
```python
import torch
from peft import prepare_model_for_kbit_training, LoraConfig, get_peft_model
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

model_id = "EleutherAI/gpt-neox-20b"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, device_map={"": 0})
model.gradient_checkpointing_enable()
model = prepare_model_for_kbit_training(model)  # prepares the whole model for kbit training

config = LoraConfig(
    r=8,
    lora_alpha=32,
    target_modules=["query_key_value"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, config)  # Now you get a model ready for QLoRA training
```
And then once again you can move to normal training using the HF trainer. Check out this colab notebook as a guide for QLoRA training.
IA3 (Infused Adapter by Inhibiting and Amplifying Inner Activations) is an adapter-based technique that is somewhat similar to LoRA. The goal of the authors was to replicate the advantages of ICL (in context learning or Few-Shot prompting) without the issues that come with it. ICL can get messy in terms of cost and inference as it requires prompting the model with examples. Longer length prompts require more time and computation to process. But ICL is perhaps the easiest way to get started working with models.
IA3 works by introducing rescaling vectors that target the activations of the model. A total of 3 learned vectors are introduced: l_k, l_v, and l_ff. These target the keys and values in the attention layers and the intermediate activations of the position-wise feed-forward (dense) layers. The vectors are multiplied element-wise with the corresponding activations in the model. Once injected, these parameters are learned during the training process, while the rest of the model remains frozen. The learned vectors essentially rescale, i.e. amplify or inhibit, the targeted activations for the task at hand.
![[attachments/f312917c8b61b5d3fb21a2d91dccce21_MD5.png]]
So far this seems like a basic adapter-type PEFT method, but that’s not all. The authors also use three loss terms to enhance the learning process: L_LM, L_UL, and L_LN. L_LM is the standard cross-entropy loss, which increases the likelihood of generating the correct response. L_UL is the unlikelihood loss, which reduces the probability of incorrect outputs using rank classification. Finally, L_LN is a length-normalized loss that applies a softmax cross-entropy loss to the length-normalized log probabilities of all output choices. Multiple losses are used here to ensure faster and better learning. Because we are trying to learn from few-shot examples, these losses are necessary.
Now let’s talk about two very important concepts in IA3. Rank Classification and Length Normalization.
In rank classification, a model is asked to rank a set of responses by their correctness. This is done by calculating probability scores for the candidate responses. The L_UL term is then used to reduce the probability of the wrong responses and, as a result, increase the probability of the correct response. But rank classification has a critical problem: responses with fewer tokens tend to rank higher, simply because of how probability works. Since the probability of every generated token is less than 1, a shorter sequence of tokens ends up with a higher overall probability. To fix this, the authors propose dividing the score of a response by the number of tokens in it, which normalizes the scores. One important thing to note is that the normalization is done over log probabilities, not raw probabilities: probabilities lie between zero and one, so their logarithms are negative.
Example Usage
For the task of sequence classification, one can initialize the IA3 config for a Llama model as follows:
```python
from peft import IA3Config, TaskType

peft_config = IA3Config(
    task_type=TaskType.SEQ_CLS,
    target_modules=["k_proj", "v_proj", "down_proj"],
    feedforward_modules=["down_proj"],
)
```
The P-Tuning method aims to optimize the representation of the prompt that is passed to the model. In the P-Tuning paper, the authors emphasize how prompt engineering is a very powerful technique when working with large language models. P-Tuning builds on top of prompt engineering and tries to further improve the effectiveness of a good prompt.
P-Tuning works by creating a small encoder network that produces a soft prompt from the prompt you pass in. To tune your LLM with P-Tuning, you create a prompt template that represents your prompt, and a context x which is used in the template to obtain the label y (this is the approach described in the paper). The tokens used in the prompt template are trainable, learnable parameters called pseudo tokens. We also add a prompt encoder, which helps us adapt the pseudo tokens to the specific task at hand. The prompt encoder is usually a bi-LSTM network that learns the optimal representation of the prompt for the model and then passes that representation to it. The LSTM network is attached to the original model. Only the encoder network and the pseudo tokens are trained; the weights of the original network remain unaffected. Once training is done, the LSTM head is discarded, since we now have the learned prompt representations (the h_i), which can be used directly.
In short, the prompt encoder only changes the embeddings of the passed prompt to better represent the task, everything else remains unchanged.
![[attachments/e02a7e01f3d23d53d4e8c162bb26c016_MD5.png]]
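With the Hugging Face PEFT library, P-Tuning roughly corresponds to the PromptEncoderConfig; the sketch below uses an arbitrary example base model and illustrative hyperparameters:

```python
from peft import PromptEncoderConfig, get_peft_model, TaskType
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-560m")  # example base model
peft_config = PromptEncoderConfig(
    task_type=TaskType.CAUSAL_LM,
    num_virtual_tokens=20,      # number of trainable pseudo tokens
    encoder_hidden_size=128,    # hidden size of the prompt encoder network
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()  # only the prompt encoder and pseudo tokens are trainable
```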
Prefix tuning can be considered the next version of P-Tuning. The authors of P-Tuning published P-Tuning v2, addressing the issues of the original method; in that paper they adopt the prefix-tuning approach introduced in the prefix-tuning paper. Prefix tuning and P-Tuning do not differ by much, but they can still lead to different results. Let’s dive into a deeper explanation.
![[attachments/54e4f19df6f95a1b3977f5bb16914ab7_MD5.png]]
In P-Tuning, we added learnable parameters only to the input embeddings but in Prefix Tuning we add them to all the layers of the network. This ensures that the model itself learns more about the task it is being finetuned on. We append learnable parameters to the prompt and to every layer activation in the transformer layers. The difference from P-Tuning is that instead of completely modifying the prompt embeddings, we only add very few learnable parameters at the start of the prompt at every layer.
Here’s a visual explanation:
![[attachments/45a49ef66509011c934928506792d6a6_MD5.png]]
At every layer in the transformer, we concatenate a soft prompt with the input which has learnable parameters. These learnable parameters are tuned using a very small MLP, only 2 fully connected layers. This is done because in the paper authors note that directly updating these prompt tokens is very sensitive to learning rate and initialization. The soft prompts increase the number of trainable parameters but substantially increase the learning ability of the model too. The MLP or fully connected layers can be dropped later as we only care about the soft prompts, which will be appended to the input sequences during inference and will guide the model.
![[attachments/532771a5359559f38952cb901250b876_MD5.png]]
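In PEFT terms this corresponds to the PrefixTuningConfig; a minimal sketch (the get_peft_model workflow is the same as in the earlier examples, only the config changes):

```python
from peft import PrefixTuningConfig, TaskType

peft_config = PrefixTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    num_virtual_tokens=20,   # learnable prefix tokens prepended to the activations of every layer
)
```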
Prompt tuning was one of the first methods built on the idea of fine-tuning only soft prompts. It is a very simple and easy-to-implement idea: prepend a specific prompt to the input and use virtual tokens, i.e. new trainable tokens, for that specific prompt. These virtual tokens can be fine-tuned during training to learn a better representation of the prompt, which means the model is tuned to understand the prompt better. Here is a comparison of prompt tuning with full fine-tuning from the paper:
![[attachments/7f2111ba84fd5e66aeab59b513433b3c_MD5.png]]
Here you can see that full model tuning requires multiple copies of the model to exist if we want to use the model for multiple tasks. But with Prompt Tuning, you only need to store the learned virtual tokens of the prompt tokens. So for example, if you use a prompt like “Classify this tweet: {tweet}” the goal will be to learn new better embeddings for the prompt. And during inference, only these new embeddings will be used to generate the outputs. This allows the model to tune the prompt to help itself generate better outputs during inference.
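As a sketch, the corresponding PEFT config can initialize the soft prompt from the hard prompt text mentioned above (the base model name here is just an example):

```python
from peft import PromptTuningConfig, PromptTuningInit, TaskType

peft_config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    prompt_tuning_init=PromptTuningInit.TEXT,        # initialise the virtual tokens from text
    prompt_tuning_init_text="Classify this tweet:",  # the hard prompt used as a starting point
    num_virtual_tokens=8,
    tokenizer_name_or_path="bigscience/bloomz-560m", # example base model's tokenizer
)
```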
Efficiency of Prompt Tuning
The biggest advantage of prompt tuning is the small size of the learned parameters; the resulting files can be a few KBs. Since we decide the dimension and the number of parameters used for the new tokens, we can tightly control how many parameters we are going to learn. In the paper, the authors show that the method performs really well even with a very small number of trainable tokens, and performance only improves as bigger models are used. You can read the paper here.
![[attachments/0af0efc72fccd0d8b4a0b910bc65d23b_MD5.png]]
Another big advantage is that we can use the same model, without any changes, for multiple tasks, as the only thing being updated is the embeddings of the prompt tokens. Meaning you can use the same model for a tweet classification task and for a language generation task without any changes to the model itself, given that the model is big and sophisticated enough to perform those tasks. But a big limitation is that the model itself doesn’t learn anything new; this is purely a prompt-optimization technique. This means that if the model has never been trained on a sentiment classification dataset, prompt tuning might not be of any help. It is very important to note that this method optimizes the prompts, not the model. So if you cannot handcraft a hard prompt that does the task relatively well, there is little use in trying to optimize for a soft prompt using prompt-optimization techniques.
Hard Prompt & Soft Prompt
Hard Prompts can be seen as the idea of a defined prompt which is static, or at best a template. A generative AI application can also have multiple prompt templates at its disposal to make use of.
Hard prompts are manually handcrafted text prompts with discrete input tokens. ~ HuggingFace
Prompt templating allows prompts to be stored, re-used, shared, and programmatically composed, so that templated prompts can be incorporated into programs for storage and re-use.
Soft prompts are created during the process of prompt tuning.
Unlike hard prompts, soft prompts cannot be viewed or edited as text. A soft prompt consists of an embedding, a string of numbers, that derives knowledge from the larger model.
So for sure, a disadvantage is the lack of interpretability of soft prompts. The AI discovers prompts relevant for a specific task but can’t explain why it chose those embeddings. Like deep learning models themselves, soft prompts are opaque.
Soft prompts act as a substitute for additional training data.
![[attachments/14a3d5248a9c288142b533827f9686c8_MD5.png]]
The comparison between the Hard Prompting and Soft Prompting for T5
Now that we have explored various PEFT techniques, the question becomes whether to use an additive technique like Adapters or LoRA, or a prompt-based technique like P-Tuning or Prefix Tuning.
On comparing LoRA vs P-Tuning and Prefix Tuning, one can say for sure LoRA is the best strategy in terms of getting the most out of the model. But it might not be the most efficient based on your needs. If you want to train the model on a much different task than what it has been trained on, LoRA is without a doubt the best strategy for tuning the model efficiently. But if your task is more or less already understood by the model, but the challenge is to properly prompt the model, then you should use Prompt Tuning techniques. Prompt Tuning doesn’t modify many parameters in the model and mainly focuses on the passed prompt instead.
One important point to note is that LoRA decomposes the weight-update matrix into smaller low-rank matrices and uses them to update the weights of the model. Even though the trainable parameters are few, LoRA updates all the parameters in the targeted parts of the neural network. In prompt-tuning techniques, by contrast, a few trainable parameters are added to the model; this usually helps the model adjust to and understand the task better, but it does not help the model learn new properties well.
PEFT, Parameter Efficient Fine Tuning, is proposed as an alternative to full Finetuning. For most of the tasks, it has already been shown in papers that PEFT techniques like LoRA are comparable to full finetuning, if not better. But, if the new task you want the model to adapt to is completely different from the tasks the model has been trained on, PEFT might not be enough for you. The limited number of trainable parameters can result in major issues in such scenarios.
If we are trying to build a code generation model using a text-based model like LLaMA or Alpaca, we should probably consider fine-tuning the whole model instead of tuning the model using LoRA. This is because the task is too different from what the model already knows and has been trained on. Another good example of such a task is training a model, which only understands English, to generate text in the Nepali language.
When using a large language model (LLM) for inference, we can often configure various parameters to fine-tune its output and performance. Here’s a breakdown of some key parameters:
1. Top-k Sampling: at each step, the next token is sampled only from the k most probable tokens.
2. Temperature: scales the logits before the softmax; lower values make the output more deterministic, higher values make it more random and creative.
3. Top-P (Nucleus) Sampling: the next token is sampled from the smallest set of tokens whose cumulative probability exceeds p.
4. Maximum Length: the maximum number of tokens the model is allowed to generate.
5. Context Prompting: the context or system text prepended to the user input to steer the model’s behavior.
6. Repetition Penalty: penalizes tokens that have already appeared, discouraging repetitive output.
7. Sampling: whether to sample from the probability distribution at all, as opposed to greedy decoding that always picks the most likely token.
A minimal generation call using these parameters is sketched below.
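As a rough illustration of how these parameters map onto the Hugging Face generate() API (the model id and parameter values are arbitrary examples):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2"  # small example model; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Retrieval augmented generation is"   # 5. context prompting is simply what goes in here
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(
    **inputs,
    do_sample=True,          # 7. sample from the distribution instead of greedy decoding
    top_k=50,                # 1. consider only the 50 most likely tokens
    temperature=0.7,         # 2. values < 1.0 sharpen the distribution
    top_p=0.9,               # 3. nucleus sampling: smallest set with cumulative probability >= 0.9
    max_new_tokens=64,       # 4. cap on the generated length
    repetition_penalty=1.1,  # 6. discourage repeating tokens
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```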
![[attachments/404b28a6ea5fc609eaffc999013c5b0b_MD5.png]]
Credits : Abhinav Kimothi
![[attachments/4ff28ee95194357c3f656ac3dc24b6aa_MD5.png]]
Credits : Abhinav Kimothi
Prompt Engineering, also known as In-Context Prompting, refers to methods for communicating with an LLM to steer its behavior toward desired outcomes without updating the model weights. It is an empirical science: the effect of prompt-engineering methods can vary a lot between models, so heavy experimentation and heuristics are required.
What is a Prompt?
The natural language instruction in which we interact with an LLM is called a Prompt. The construction of prompts is called Prompt Engineering.
![[attachments/107c0648e15b44dee5732c10a584607a_MD5.png]]
The process by which an LLM performs inference and completes the instruction given in the prompt is called In-Context Learning.
The ability of the LLM to respond to the instruction in the prompt without any example is called Zero-Shot Learning.
When a single example is provided, it’s called One-Shot Learning.
If more than one example is provided, it’s called Few-Shot Learning.
The context window, i.e. the maximum number of tokens that an LLM can take as input and reason over, is critical for zero-/one-/few-shot learning.
![[attachments/2ddd41752b19a0bc073bf51a94160966_MD5.png]]
Credits: Abhinav Kimothi
![[attachments/d4a7e1e73eff8c11f3af13cc9c6056e1_MD5.png]]
Chain-of-thought (CoT) prompting (Wei et al. 2022) generates a sequence of short sentences to describe reasoning logics step by step, known as reasoning chains or rationales, to eventually lead to the final answer. The benefit of CoT is more pronounced for complicated reasoning tasks while using large models (e.g. with more than 50B parameters). Simple tasks only benefit slightly from CoT prompting.
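For illustration, a one-shot chain-of-thought prompt might look like the sketch below; the worked example in the prompt nudges the model to produce its own reasoning steps (the arithmetic problems follow the style commonly used in the CoT literature):

```python
cot_prompt = """Q: A cafeteria had 23 apples. They used 20 for lunch and bought 6 more. How many apples do they have?
A: They started with 23 apples, used 20, leaving 23 - 20 = 3. They bought 6 more, so 3 + 6 = 9. The answer is 9.

Q: Roger has 5 tennis balls. He buys 2 cans of tennis balls, each with 3 balls. How many tennis balls does he have now?
A:"""
```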
Gao et al. (2022) present a method that uses LLMs to read natural language problems and generate programs as the intermediate reasoning steps. Coined program-aided language models (PAL), it differs from chain-of-thought prompting in that, instead of using free-form text to obtain the solution, it offloads the solution step to a programmatic runtime such as a Python interpreter.
![[attachments/9f1d82b7d654950c5f36fc0edb47b1cd_MD5.png]]
ReAct is inspired by the synergies between “acting” and “reasoning” which allow humans to learn new tasks and make decisions or reasoning.
CoT’s lack of access to the external world or inability to update its knowledge can lead to issues like fact hallucination and error propagation.
ReAct is a general paradigm that combines reasoning and acting with LLMs. ReAct prompts LLMs to generate verbal reasoning traces and actions for a task. This allows the system to perform dynamic reasoning to create, maintain, and adjust plans for acting, while also enabling interaction with external environments (e.g., Wikipedia) to incorporate additional information into the reasoning. The figure below shows an example of ReAct and the different steps involved in performing question answering.
![[attachments/4d4376865e9c41ca7dab3aa1ef51a7f6_MD5.png]]
![[attachments/b319fc5416f2829ba79036dae7c2ebc0_MD5.png]]
![[attachments/8635514f4dc78eaec56055df1bfc9f01_MD5.png]]
Model compression methods: (a) pruning, (b) quantization, and (c) knowledge distillation
Model quantization is a technique used to reduce the size of large neural networks, including large language models (LLMs), by reducing the precision of their weights. LLM quantization is enabled by empirical results showing that, while some operations related to neural network training and inference must use high precision, in many cases significantly lower precision (float16, for example) can be used instead. This reduces the overall size of the model, allowing it to run on less powerful hardware with an acceptable loss of capability and accuracy.
![[attachments/9d07a95db720f4f3dae1d6c611145141_MD5.png]]
Trend of model sizes | AWS reinvent
Precision Trade-off
![[attachments/d46a267882ba3468be33c85914943d30_MD5.png]]
Tensors | Source
Generally, using high precision in neural networks is associated with better accuracy and more stable training. Using high precision is also more computationally expensive, as it requires more, and more expensive, hardware. Research, mostly done by Google and NVIDIA, on the possibility of using lower precision for some neural network operations showed that lower precision can indeed be leveraged for some training and inference operations.
Aside from the research, both companies developed hardware and frameworks to support lower-precision operations. For example, the NVIDIA T4 accelerators are lower-precision GPUs with Tensor Core technology that is significantly more efficient than that of the K80. Google’s TPUs introduced the concept of bfloat16, a special primitive data type optimized for neural networks. The fundamental idea behind lower precision is that neural networks don’t always need to use the full range that 64-bit floats offer in order to perform well.
![[attachments/3587b15abf17c66126a5b71a279ff50f_MD5.png]]
The bfloat16 numerical format | Google
As neural networks became increasingly large, the importance of leveraging lower precision had a significant impact on the ability to use them. With LLMs, this became even more crucial.
For reference, an A100 GPU by Nvidia has 80GB of memory in its most advanced version. In the table below we can see that the LLama2–70B model requires 138 GB of memory approximately, meaning that to host it, we will need multiple A100s. Distributing models over multiple GPUs means paying for more GPUs as well as overhead infrastructure. A quantized version, on the other hand, requires around 40 GB of memory, therefore it can fit easily into one A100, reducing the cost of inference significantly. This example doesn’t even mention the fact that within the single A100, using quantized models would result in faster execution of most of the individual computation operations.
![[attachments/eba8c094a5a793e009beaf7cf329bd02_MD5.png]]
Example of 4-bit quantization using llama.cpp, size may vary slightly depending on the method
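A quick back-of-the-envelope check of why quantization makes the difference (weights only; the KV cache and runtime overhead push real numbers, such as the ~138 GB quoted above, somewhat higher):

```python
# Approximate memory for the weights of a 70B-parameter model at different precisions.
n_params = 70e9
for name, bytes_per_param in [("fp16", 2), ("int8", 1), ("int4", 0.5)]:
    print(f"{name}: ~{n_params * bytes_per_param / 1e9:.0f} GB")
# fp16: ~140 GB    int8: ~70 GB    int4: ~35 GB
```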
How does quantization shrink models?
Quantization significantly decreases the model’s size by reducing the number of bits required for each model weight. A typical scenario would be the reduction of the weights from FP16 (16-bit Floating-point) to INT4 (4-bit Integer). This allows for models to run on cheaper hardware and/or with higher speed. By reducing the precision of the weights, the overall quality of the LLM can also suffer some impact.
Studies show that this impact varies depending on the techniques used, and that larger models suffer less from the change in precision. Larger models (over ~70B) can maintain their capabilities even when converted to 4-bit, with some techniques, such as NF4, suggesting no impact on their performance. Therefore, 4-bit appears to be the best compromise between performance and size/speed for these larger models, while 6- or 8-bit might be better for smaller models.
Types of LLM Quantization
It’s possible to divide the techniques for obtaining quantized models into two categories: Post-Training Quantization (PTQ), which quantizes an already-trained model, and Quantization-Aware Training (QAT), which accounts for the reduced precision during training:
![[attachments/0801095df62f4116105e465070f47ec8_MD5.jpg]]
This post will only focus on PTQ strategies and the key distinctions between them.
Larger Quantized Model vs Smaller Non-Quantized
Acknowledging that reducing the precision will reduce the accuracy of the model, should you prefer a smaller full-precision model or a larger quantized model with a comparable inference cost? Although the ideal choice might vary due to diverse factors, recent research by Meta offers some insightful guidelines.
While we would expect reduced precision to lower accuracy, Meta researchers demonstrated that in some cases the quantized model not only shows superior performance but also allows for reduced latency and enhanced throughput. The same trend can be observed when comparing an 8-bit 13B model with a 16-bit 7B model. In essence, when comparing models with similar inference costs, the larger quantized models can outperform their smaller, non-quantized counterparts. This advantage becomes even more pronounced with larger networks, since they exhibit a smaller quality loss when quantized.
Where to find already Quantized models?
Fortunately, it is possible to find many versions of models already quantized using GPTQ (some compatible with ExLlama), NF4, or GGML on the Hugging Face Hub. A quick glance reveals that a substantial chunk of these models has been quantized by TheBloke, an influential and respected figure in the LLM community. This user has published several models with different types of quantization methods, so one can choose the best fit for each particular use case.
To easily experiment with these models open up a Google Colab and make sure you change your runtime to a GPU (a free one is available for use). Start by installing the transformers library maintained by Hugging Face and all necessary libraries. Since we will be using a model quantized using Auto-GPTQ the respective libraries will also be required:
```
!pip install transformers
!pip install accelerate
!pip install optimum
!pip install auto-gptq
```
You might need to restart the runtime so that the installs are available. Then simply load the already quantized model, in this case we are loading a Llama-2–7B-Chat model previously quantized using Auto-GPTQ, as shown below:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "TheBloke/Llama-2-7b-Chat-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")
```
Quantizing a LLM
As highlighted earlier, a plethora of quantized models already reside on the Hugging Face Hub, eliminating the need to compress a model yourself in many scenarios. However, in some cases you may want to use models which are not yet quantized, or you may want to compress the model yourself. This can be achieved by using a dataset tailored to your specific domain.
To demonstrate how to easily quantize a model using AutoGPTQ along with the Transformers library, we employed a streamlined variant of the AutoGPTQ interface found in Optimum — Hugging Face’s solution for refining training and inference:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_id = "facebook/opt-125m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
quantization_config = GPTQConfig(bits=4, dataset="c4", tokenizer=tokenizer)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", quantization_config=quantization_config)
```
Model compression can be time-consuming. For instance, a 175B model demands at least 4 GPU hours, especially with expansive datasets like “c4”. Notably, the number of bits in the quantization process or the dataset can be easily modified by the parameters of GPTQConfig. Changing the dataset will impact how the quantization is done so, if possible, use a dataset that resembles data seen in inference to maximize performance.
Quantization Techniques
![[attachments/cdf666aff3d3fdf656ca277239b3d739_MD5.png]]
Several state-of-the-art methods have emerged in the arena of model quantization. Let’s delve into some prominent ones:
Many quantization libraries support several different quantization strategies (e.g. 4-bit, 5-bit, and 8-bit quantization), each of which offers different trade-offs between efficiency and performance.
![[attachments/2beddc09f9a31d9cbbe020ebe30843bc_MD5.png]]
Knowledge Distillation (KD; Hinton et al. 2015, Gou et al. 2020) is a straightforward way to build a smaller, cheaper model (the “student model”) that speeds up inference by transferring skills from a pre-trained, expensive model (the “teacher model”) into the student. There is not much restriction on how the student architecture should be constructed, except that its output space must match the teacher’s so that a proper learning objective can be constructed.
![[attachments/f7fdd3d5d533592401a8358921a43745_MD5.png]]
The teacher model is already fine-tuned on the training data. So, the probability distribution likely closely matches the ground truth data and won’t have many variations in tokens.
So, when the temperature T is greater than 1, the probability distribution becomes broader.
T > 1: the teacher’s output becomes soft labels and the student’s output becomes soft predictions.
T = 1: the teacher’s output stays as hard labels and the student’s output stays as hard predictions.
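As a sketch of how the temperature enters the training objective (a generic knowledge-distillation loss combining soft and hard targets, not any specific paper's exact formulation):

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T: float = 2.0, alpha: float = 0.5):
    # Soft targets: KL divergence between temperature-softened distributions, scaled by T^2.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: standard cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```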
Distillation is not as effective for generative decoder models. It’s effective for encoder-only models, such as BERT, which have a lot of representational redundancy.
Network pruning reduces model size by trimming unimportant model weights or connections while preserving model capacity. It may or may not require re-training, and it can be unstructured or structured.
A routine workflow to construct a pruned network has three steps: train a dense network until convergence, prune it to remove unwanted structure, and (optionally) retrain the pruned network so that the remaining weights recover the original performance.
The idea of discovering a sparse structure within a dense model via network pruning while the sparse network can still maintain similar performance is motivated by Lottery Ticket Hypothesis (LTH): A randomly initialized, dense, feed-forward network contains a pool of subnetworks and among them, only a subset (a sparse network) are “winning tickets” which can achieve the optimal performance when trained in isolation.
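For a concrete feel, PyTorch ships basic pruning utilities; below is a minimal unstructured-pruning sketch on a single linear layer (illustrative only, not a full pruning pipeline):

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(512, 512)

# Unstructured pruning: zero out the 30% of weights with the smallest L1 magnitude.
prune.l1_unstructured(layer, name="weight", amount=0.3)

# Make the pruning permanent (drops the mask and re-parametrization).
prune.remove(layer, "weight")

sparsity = (layer.weight == 0).float().mean().item()
print(f"sparsity: {sparsity:.0%}")  # roughly 30% of the weights are now zero
```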
We will continue with the Evaluation of LLMs in our next blog.
In this blog, we explored the text generation part of the Retrieval-Augmented Generation (RAG) application, emphasizing the use of Large Language Models (LLMs). We covered language modeling, pre-training challenges, distributed training methods, and fine-tuning for LLMs. Parameter-Efficient Fine-Tuning (PEFT) techniques, including Adapters, LoRA, and QLoRA, were discussed, along with prompting strategies, model compression methods like pruning and quantization, and various quantization techniques (GPTQ, NF4, GGML). The blog concludes with insights into distillation and pruning for model size reduction.
[
![[attachments/879bef54ff23acea5c2b4d68c6900ea8_MD5.jpg]]
![[attachments/047cdfb91d96eb4a7bede13a11078c40_MD5.jpg]]
![[attachments/2d2d2d6acfc0125c866bcd331c896c36_MD5.jpg]]
6 stories·399 saves
](https://medium.com/@MediumStaff/list/ai-regulation-dfa78dfd2438?source=read_next_recirc-----ea8bd982bdee--------------------------------)
[
![[attachments/6afbff81cb58958dafde7bc3b43429a4_MD5.jpg]]
![[attachments/c04cda04ecb41fe3562067279a042d54_MD5.jpg]]
![[attachments/87957e4848ddaf4a5adb6d28c81e1ace_MD5.png]]
52 stories·902 saves
](https://tomsmith585.medium.com/list/generative-ai-recommended-reading-508b0743c247?source=read_next_recirc-----ea8bd982bdee--------------------------------)
![[attachments/4def8c4cb2042d44a45e266d8814da68_MD5.png]]
[
![[attachments/14bf4e47015840e576d18bddf396fead_MD5.jpg]]
](https://medium.com/@dassum?source=read_next_recirc-----ea8bd982bdee----0---------------------f985b5e4_e973_45fe_b240_596a819923ba-------)
[
Suman Das
](https://medium.com/@dassum?source=read_next_recirc-----ea8bd982bdee----0---------------------f985b5e4_e973_45fe_b240_596a819923ba-------)
[
](https://medium.com/@dassum/fine-tune-large-language-model-llm-on-a-custom-dataset-with-qlora-fb60abdeba07?source=read_next_recirc-----ea8bd982bdee----0---------------------f985b5e4_e973_45fe_b240_596a819923ba-------)
15 min read·Jan 25, 2024
[
](https://medium.com/m/signin?actionUrl=https%3A%2F%2Fmedium.com%2F_%2Fvote%2Fp%2Ffb60abdeba07&operation=register&redirect=https%3A%2F%2Fdassum.medium.com%2Ffine-tune-large-language-model-llm-on-a-custom-dataset-with-qlora-fb60abdeba07&user=Suman+Das&userId=54106b197f0a&source=-----fb60abdeba07----0-----------------clap_footer----f985b5e4_e973_45fe_b240_596a819923ba-------)
931
[
10
](https://medium.com/@dassum/fine-tune-large-language-model-llm-on-a-custom-dataset-with-qlora-fb60abdeba07?responsesOpen=true&sortBy=REVERSE_CHRON&source=read_next_recirc-----ea8bd982bdee----0---------------------f985b5e4_e973_45fe_b240_596a819923ba-------)
![[attachments/28ceea0e3a27706b5389f30a570de8b5_MD5.jpg]]
[
![[attachments/1434b11e01176d725dc4ee6b69fa9a1b_MD5.png]]
](https://medium.com/@shujuanhuang?source=read_next_recirc-----ea8bd982bdee----1---------------------f985b5e4_e973_45fe_b240_596a819923ba-------)
[
Jane Huang
](https://medium.com/@shujuanhuang?source=read_next_recirc-----ea8bd982bdee----1---------------------f985b5e4_e973_45fe_b240_596a819923ba-------)
in
[
Data Science at Microsoft
](https://medium.com/data-science-at-microsoft?source=read_next_recirc-----ea8bd982bdee----1---------------------f985b5e4_e973_45fe_b240_596a819923ba-------)
[
](https://medium.com/data-science-at-microsoft/evaluating-llm-systems-metrics-challenges-and-best-practices-664ac25be7e5?source=read_next_recirc-----ea8bd982bdee----1---------------------f985b5e4_e973_45fe_b240_596a819923ba-------)
11 min read·Mar 5, 2024
[
](https://medium.com/m/signin?actionUrl=https%3A%2F%2Fmedium.com%2F_%2Fvote%2Fdata-science-at-microsoft%2F664ac25be7e5&operation=register&redirect=https%3A%2F%2Fmedium.com%2Fdata-science-at-microsoft%2Fevaluating-llm-systems-metrics-challenges-and-best-practices-664ac25be7e5&user=Jane+Huang&userId=a62b1d2f2c49&source=-----664ac25be7e5----1-----------------clap_footer----f985b5e4_e973_45fe_b240_596a819923ba-------)
613
[
8
](https://medium.com/data-science-at-microsoft/evaluating-llm-systems-metrics-challenges-and-best-practices-664ac25be7e5?responsesOpen=true&sortBy=REVERSE_CHRON&source=read_next_recirc-----ea8bd982bdee----1---------------------f985b5e4_e973_45fe_b240_596a819923ba-------)
![[attachments/7593362edbf0a0689b3ccda3b4639d82_MD5.png]]
[
![[attachments/bfb9af9381483d93d9872b927327046e_MD5.jpg]]
](https://medium.com/@rohanbalkondekar?source=read_next_recirc-----ea8bd982bdee----2---------------------f985b5e4_e973_45fe_b240_596a819923ba-------)
[
Rohan Balkondekar
](https://medium.com/@rohanbalkondekar?source=read_next_recirc-----ea8bd982bdee----2---------------------f985b5e4_e973_45fe_b240_596a819923ba-------)
[
](https://medium.com/@rohanbalkondekar/build-your-own-devin-8d8794266315?source=read_next_recirc-----ea8bd982bdee----2---------------------f985b5e4_e973_45fe_b240_596a819923ba-------)
4 min read·Mar 16, 2024
[
](https://medium.com/m/signin?actionUrl=https%3A%2F%2Fmedium.com%2F_%2Fvote%2Fp%2F8d8794266315&operation=register&redirect=https%3A%2F%2Fmedium.com%2F%40rohanbalkondekar%2Fbuild-your-own-devin-8d8794266315&user=Rohan+Balkondekar&userId=82d3834a9510&source=-----8d8794266315----2-----------------clap_footer----f985b5e4_e973_45fe_b240_596a819923ba-------)
114
[
3
](https://medium.com/@rohanbalkondekar/build-your-own-devin-8d8794266315?responsesOpen=true&sortBy=REVERSE_CHRON&source=read_next_recirc-----ea8bd982bdee----2---------------------f985b5e4_e973_45fe_b240_596a819923ba-------)
![[attachments/1cb85277b4a8b2626533ee9c4eb13342_MD5.png]]
[
![[attachments/4f8a67b9852cafdbcfec9980c8723792_MD5.jpg]]
](https://medium.com/@vtiya?source=read_next_recirc-----ea8bd982bdee----3---------------------f985b5e4_e973_45fe_b240_596a819923ba-------)
[
Tiya Vaj
](https://medium.com/@vtiya?source=read_next_recirc-----ea8bd982bdee----3---------------------f985b5e4_e973_45fe_b240_596a819923ba-------)
[
](https://medium.com/@vtiya/key-components-of-llms-484b4c145a1b?source=read_next_recirc-----ea8bd982bdee----3---------------------f985b5e4_e973_45fe_b240_596a819923ba-------)
2 min read·4 days ago
[
](https://medium.com/m/signin?actionUrl=https%3A%2F%2Fmedium.com%2F_%2Fvote%2Fp%2F484b4c145a1b&operation=register&redirect=https%3A%2F%2Fvtiya.medium.com%2Fkey-components-of-llms-484b4c145a1b&user=Tiya+Vaj&userId=3bf562e3af45&source=-----484b4c145a1b----3-----------------clap_footer----f985b5e4_e973_45fe_b240_596a819923ba-------)
1
[
See more recommendations
](https://medium.com/?source=post_page-----ea8bd982bdee--------------------------------)
[
Help
](https://help.medium.com/hc/en-us?source=post_page-----ea8bd982bdee--------------------------------)
[
Status
](https://medium.statuspage.io/?source=post_page-----ea8bd982bdee--------------------------------)
[
About
](https://medium.com/about?autoplay=1&source=post_page-----ea8bd982bdee--------------------------------)
[
Careers
](https://medium.com/jobs-at-medium/work-at-medium-959d1a85284e?source=post_page-----ea8bd982bdee--------------------------------)
[
Blog
](https://blog.medium.com/?source=post_page-----ea8bd982bdee--------------------------------)
[
Privacy
](https://policy.medium.com/medium-privacy-policy-f03bf92035c9?source=post_page-----ea8bd982bdee--------------------------------)
[
Terms
](https://policy.medium.com/medium-terms-of-service-9db0094a1e0f?source=post_page-----ea8bd982bdee--------------------------------)
[
Text to speech
](https://speechify.com/medium?source=post_page-----ea8bd982bdee--------------------------------)
[
Teams
](https://medium.com/business?source=post_page-----ea8bd982bdee--------------------------------)