Update README.md

This is a replica of Alpaca by Stanford's tatsu-lab.
Trained using the original instructions, with a minor modification: FSDP mode.

# Other versions:

13B: https://huggingface.co/chavinlo/alpaca-13b

13B -> GPT4: https://huggingface.co/chavinlo/gpt4-x-alpaca
## Compute Used

Trained on 4xA100s for 6 hours.

Donated by redmond.ai

NO LORA HAS BEEN USED. This is a natively finetuned model, hence "alpaca-native".

If you are interested in more llama-based models, you can check out my profile or search for other models at https://huggingface.co/models?other=llama

This MIGHT be a quantized version of this model, but be careful: https://boards.4channel.org/g/thread/92173062#p92182396
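
If you would rather quantize the model yourself instead of trusting a third-party upload, a minimal sketch using the Hugging Face `transformers` integration with `bitsandbytes` int8 loading is shown below. This is an illustration only: it assumes `transformers`, `accelerate`, and `bitsandbytes` are installed, and it is not necessarily the method behind the file linked above.

```python
# Minimal sketch: load this model with 8-bit weights via bitsandbytes.
# Illustrative only; NOT necessarily how the quantized file linked above was produced.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "chavinlo/alpaca-native"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # quantize linear weights to int8 at load time
    device_map="auto",                                          # let accelerate place layers across available devices
)
```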

CONFIGURATION (default except fsdp):

```shell
torchrun --nproc_per_node=4 --master_port=3045 train.py \
    --model_name_or_path /workspace/llama-7b-hf \
    --data_path ./alpaca_data.json \
    --bf16 True \
    --output_dir /workspace/output \
    --num_train_epochs 3 \
    --per_device_train_batch_size 4 \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 8 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 200 \
    --save_total_limit 1 \
    --learning_rate 2e-5 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --fsdp "shard_grad_op auto_wrap" \
    --fsdp_transformer_layer_cls_to_wrap 'LLaMADecoderLayer' \
    --tf32 True --report_to="wandb"
```
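
With `--nproc_per_node=4`, `--per_device_train_batch_size 4`, and `--gradient_accumulation_steps 8`, the effective global batch size is 4 × 4 × 8 = 128, matching the original Stanford Alpaca recipe. For running the finished model, a minimal inference sketch with `transformers` follows; the prompt template is the standard Stanford Alpaca instruction format, which is an assumption here since this card does not restate it, and the example instruction is made up.

```python
# Minimal inference sketch; the prompt template below is the standard Stanford Alpaca
# instruction format and is assumed, not something this card specifies.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "chavinlo/alpaca-native"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # fp16 weights so the 7B model fits on a single modern GPU
    device_map="auto",
)

# Assumed Alpaca template for an instruction with no extra input; the instruction is a made-up example.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nGive three tips for staying healthy.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Strip the prompt tokens and decode only the generated continuation.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```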

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_chavinlo__alpaca-native)