---
library_name: peft
tags:
- code
- instruct
- mistral
datasets:
- cognitivecomputations/dolphin-coder
base_model: mistralai/Mistral-7B-v0.1
license: apache-2.0
---
### Finetuning Overview:
**Model Used:** mistralai/Mistral-7B-v0.1

**Dataset:** cognitivecomputations/dolphin-coder
#### Dataset Insights:
The [Dolphin-Coder](https://huggingface.co/datasets/cognitivecomputations/dolphin-coder) dataset is a high-quality collection of 100,000+ coding questions and responses, well suited to supervised fine-tuning (SFT) and to teaching language models to perform better on coding tasks.
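For reference, a minimal sketch of loading this dataset with the Hugging Face `datasets` library (the `train` split name is an assumption):

```python
from datasets import load_dataset

# Load the Dolphin-Coder dataset from the Hugging Face Hub
ds = load_dataset("cognitivecomputations/dolphin-coder", split="train")
print(ds[0])  # inspect one coding question/response record
```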
#### Finetuning Details:
Using [MonsterAPI](https://monsterapi.ai)'s [no-code LLM finetuner](https://monsterapi.ai/finetuning), this finetuning:
- Was completed cost-effectively.
- Ran for a total of 7 hrs 36 min over 0.5 epochs on a single A6000 48GB GPU.
- Cost `$15.2` for the entire run.
#### Hyperparameters & Additional Details:
- **Epochs:** 0.5
- **Cost for full run:** $15.2
- **Model Path:** mistralai/Mistral-7B-v0.1
- **Learning Rate:** 0.0002
- **Data Split:** 100% train
- **Gradient Accumulation Steps:** 128
- **LoRA r:** 32
- **LoRA alpha:** 64
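
For illustration, a hedged sketch of an equivalent `peft` LoRA configuration; `target_modules` and `lora_dropout` are assumptions (typical Mistral attention projections), as this card does not report them:

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=32,                                 # LoRA r from the run above
    lora_alpha=64,                        # LoRA alpha from the run above
    target_modules=["q_proj", "v_proj"],  # assumption: common Mistral attention targets
    lora_dropout=0.05,                    # assumption: not reported in this card
    bias="none",
    task_type="CAUSAL_LM",
)
```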

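#### Usage Sketch:

A minimal inference sketch, assuming the LoRA adapter weights are hosted alongside this card; `<this-adapter-repo>` is a placeholder for the actual adapter repo id:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model and tokenizer
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

# Attach the finetuned LoRA adapter (placeholder repo id)
model = PeftModel.from_pretrained(base, "<this-adapter-repo>")

prompt = "Write a Python function that reverses a linked list."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```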