docs/tutorials/posttraining/full_finetuning.md

limitations under the License.
-->
(full-finetuning)=
# Full fine-tuning on single-host TPUs

Full Fine-Tuning (FFT) is a common post-training technique used to adapt a pre-trained Large Language Model (LLM) to a specific downstream task or dataset. In this process, all the parameters (weights) of the original model are "unfrozen" and updated during training on the new task-specific data. This allows the entire model to adjust and specialize, potentially leading to the best performance on the new task. In MaxText, you start full fine-tuning from an existing model by passing its checkpoint to the training script through the `load_parameters_path` parameter (the path to the checkpoint directory).

This tutorial provides step-by-step instructions for setting up the environment, converting the checkpoint, and then training the model on a Hugging Face dataset using FFT.

In this tutorial we use a single-host TPU VM such as `v6e-8` or `v5p-8`. Let's get started!

The high-level steps involve:

- Converting the model checkpoint to a MaxText-formatted checkpoint.
- Preparing the dataset so that data can be fed into the training script. MaxText provides sample pipelines to load the data via tf.data or Pygrain from a disk or GCS bucket, or it can read data directly from a Hugging Face dataset.
- Running the training script with the checkpoint (see the sketch after this list).

Note: Training parameters may require adjustment to align the model with the specific TPU or GPU topology and achieve optimal performance.
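
For reference, the final training step launches MaxText's trainer with `load_parameters_path` pointing at your checkpoint. The sketch below is illustrative only: the script path, model name, and flag values are assumptions that vary across MaxText versions, and it reuses environment variables defined later in this tutorial.

```sh
# Illustrative sketch only -- script path, model name, and flag values are assumptions.
python3 -m MaxText.train src/MaxText/configs/base.yml \
  run_name=fft_run \
  base_output_directory=${BASE_OUTPUT_DIRECTORY} \
  load_parameters_path=${MODEL_CKPT_PATH} \
  model_name=llama2-7b \
  per_device_batch_size=1 \
  steps=100
```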
## Meta's PyTorch checkpoint to MaxText (Orbax) checkpoint

The conversion scripts for Llama work with Meta's original checkpoints, not with Hugging Face checkpoints.

### Prerequisites

- Download the Meta-format checkpoints.

  Option 1: Download the checkpoint from Meta (https://llama.meta.com/llama-downloads/) to a local directory.

  Option 2: Download the checkpoint from a GCS bucket to a local directory with `gcloud storage cp -r <GCS path for META format checkpoint> <local/path>`.

- Install the CPU build of Torch, because a TPU or GPU is not required by this conversion script (see the command below).
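
For example, the CPU-only build can be installed with pip (the same command is used later in this tutorial for the Hugging Face conversion step):

```sh
# CPU-only PyTorch is enough; the conversion runs on the host, not on a TPU/GPU.
python3 -m pip install torch --index-url https://download.pytorch.org/whl/cpu
```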

Set the environment variables used by the conversion:

```bash
export BASE_OUTPUT_DIRECTORY=<output directory to store run logs> # e.g., gs://my-bucket/my-output-directory
export CONVERTED_CHECKPOINT_PATH=<GCS path for saving converted checkpoint> # e.g., gs://my-bucket/my-model-checkpoint
export LOCAL_META_CHECKPOINT_PATH=<local path for META checkpoint> # e.g., /local/meta-ckpt
```

### Running the weight conversion script

Note: The conversion scripts do not use accelerators, but they need large host memory to perform the conversion.

- The base model checkpoints should be in the format `{name}.{chkpt_idx}.pth`, for example `mistral-7b.00.pth`.
- For a large model (e.g., a 70B model), this script requires a large-memory VM.
- The script loads and saves the weights in a single pass.

Using llama-7b as an example:
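
A sketch of the conversion call; the module path and flag names below are assumptions based on MaxText's Llama/Mistral checkpoint utility and should be checked against your MaxText checkout:

```sh
# Hypothetical invocation -- verify the script location and flag names in the MaxText repository.
python3 -m MaxText.llama_or_mistral_ckpt \
  --base-model-path ${LOCAL_META_CHECKPOINT_PATH} \
  --maxtext-model-path ${CONVERTED_CHECKPOINT_PATH} \
  --model-size llama2-7b
```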

## Hugging Face checkpoint to MaxText checkpoint

This section explains how to prepare your model checkpoint for use with MaxText. You have two options: using an existing MaxText checkpoint or converting a Hugging Face checkpoint.

### Option 1: Using an existing MaxText checkpoint

If you already have a MaxText-compatible model checkpoint, simply set the following environment variable and move on to the next section.

```sh
export MODEL_CKPT_PATH=<gcs path for MaxText checkpoint> # e.g., gs://my-bucket/my-model-checkpoint/0/items
```

### Option 2: Converting a Hugging Face checkpoint

If your model checkpoint is from Hugging Face, you need to run a conversion script to make it MaxText-compatible.

1. **Set the Output Path:** First, define where the converted MaxText checkpoint will be saved, for example a folder in your GCS bucket.
2. **Run the Conversion Script:** Execute the following command, which downloads the specified Hugging Face model and converts its weights into the MaxText format. The conversion script only supports official versions of models from Hugging Face. To see the specific models and versions currently supported for conversion, please refer to the `HF_IDS` dictionary in the MaxText utility file [here](https://github.com/AI-Hypercomputer/maxtext/blob/main/src/MaxText/utils/ckpt_conversion/utils/utils.py).

```sh
python3 -m pip install torch --index-url https://download.pytorch.org/whl/cpu # Ensure torch is installed for the conversion script
```

## MaxText checkpoint to Hugging Face

After fine-tuning or pre-training, MaxText also provides scripts to convert MaxText-format weights back to [Hugging Face](https://github.com/AI-Hypercomputer/maxtext/blob/main/src/MaxText/utils/ckpt_scripts/llama_mistral_mixtral_orbax_to_hf.py).

Use the `to_huggingface.py` script to convert a MaxText checkpoint into the Hugging Face format. This is useful for sharing your models or integrating them with the Hugging Face ecosystem.

### Sample for converting MaxText format weights to Hugging Face format

Set up the environment variables:

```bash
export BASE_OUTPUT_DIRECTORY=<output directory to store run logs> # e.g., gs://my-bucket/my-output-directory
export PATH_TO_CHECKPOINT=<GCS path for saving converted checkpoint>/0/items # e.g., ${CONVERTED_CHECKPOINT_PATH}/0/items
export HF_MODEL_PATH=<local path for hf> # e.g., /local/convert_ckp
```
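
As a rough sketch, the conversion can be launched with the base config plus key=value overrides built from the variables above. The module path and flag names here are assumptions; verify them against the linked script before use.

```sh
# Hypothetical invocation -- confirm the flag names against the conversion script before use.
python3 -m MaxText.utils.ckpt_scripts.llama_mistral_mixtral_orbax_to_hf src/MaxText/configs/base.yml \
  base_output_directory=${BASE_OUTPUT_DIRECTORY} \
  load_parameters_path=${PATH_TO_CHECKPOINT} \
  run_name=convert_to_hf \
  model_name=llama2-7b \
  hf_model_path=${HF_MODEL_PATH}
```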

## Dataset setup

MaxText provides examples to work with [Common Crawl](https://commoncrawl.org/). The dataset is available in TFRecords format in a cloud bucket. MaxText provides scripts to copy the dataset to a Google Cloud Storage Bucket.

### Common Crawl (c4) dataset setup

Run these steps once per project prior to any local development or cluster experiments.

1. Create two GCS buckets in your project, one for downloading and retrieving the dataset and the other for storing the logs (see the example after this list).
2. Download the dataset into your GCS bucket.
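
For example, the two buckets can be created with `gcloud storage`; the bucket names and location below are placeholders:

```sh
# Placeholder bucket names and location -- substitute your own.
gcloud storage buckets create gs://my-maxtext-dataset --location=us-central1
gcloud storage buckets create gs://my-maxtext-logs --location=us-central1
```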

MaxText assumes these GCS buckets are created in the same project and that it has permissions to read and write from them.

```sh
export PROJECT=<Google Cloud Project ID>
export DATASET_GCS_BUCKET=<GCS path for dataset> # e.g., gs://my-bucket/my-dataset
```
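
As a sketch, the copy can then be driven by MaxText's dataset download helper; the script name and arguments below are assumptions, so check the repository for the exact path.

```sh
# Hypothetical helper invocation -- confirm the script path in the MaxText repository.
bash download_dataset.sh ${PROJECT} ${DATASET_GCS_BUCKET}
```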