@@ -41,8 +41,8 @@ MaxText checkpoints are in their own format. You can see the format in the scrip
 
 The conversion scripts for LLama work with Meta’s original checkpoints and not with HuggingFace Checkpoint.
 
- #### Pre-requist
- - Download the Meta format checkpoints
+ #### Prerequisite
+ - Download the Meta format checkpoints.
 
 Option 1: Download the checkpoint from Meta (https://llama.meta.com/llama-downloads/) to your local directory.
 
@@ -52,15 +52,15 @@ The conversion scripts for LLama work with Meta’s original checkpoints and not
 
 ``` python3 -m pip install torch --index-url https://download.pytorch.org/whl/cpu ```
 
- - Setup Environment Variables
+ - Set up environment variables.
 
 ``` bash
 export CONVERTED_CHECKPOINT_PATH=<GCS path for saving converted checkpoint> # e.g., gs://my-bucket/my-model-checkpoint
 export LOCAL_META_CHECKPOINT_PATH=<local path for META checkpoint> # e.g., /local/meta-ckpt
 ```
 #### Running the weight conversion script
 
- Using 11ama -7b as an example:
+ Using llama-7b as an example:
 
 ``` bash
 python3 -m MaxText.utils.ckpt_scripts.llama_or_mistral_ckpt \
@@ -92,7 +92,7 @@ Post finetuning or pre-training, MaxText also provides scripts to convert MaxTex
 ```
 - Running the conversion script
 
- The following example is executing a v6e-8 TPU VM with llama2-7b .
+ Below is an example for Llama2-7b on a v6e-8 TPU VM.
 
 ``` bash
 python3 -m MaxText.utils.ckpt_scripts.llama_mistral_mixtral_orbax_to_hf \