Model Information

This model is a fine-tuned version of the meta-llama/Llama-3.2-1B-Instruct large language model.

Fine-tuning was performed with PEFT (Parameter-Efficient Fine-Tuning) using LoRA (Low-Rank Adaptation) on the chat subset of nvidia/Llama-Nemotron-Post-Training-Dataset.

LoRA Configuration:

from peft import LoraConfig

lora_config = LoraConfig(
    task_type="CAUSAL_LM",
    r=32,                  # rank of the low-rank update matrices
    lora_alpha=32,         # scaling factor (alpha / r = 1.0 here)
    lora_dropout=0.1,
    target_modules=["q_proj", "k_proj", "v_proj"],  # attention projections
    modules_to_save=["lm_head", "embed_tokens"],    # trained in full and saved with the adapter
)
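
As a minimal sketch of how this configuration attaches to the base model for training (the actual training script is not part of this card; the standard PEFT workflow is assumed):

from peft import get_peft_model
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B-Instruct")
model = get_peft_model(base_model, lora_config)  # lora_config as defined above
model.print_trainable_parameters()  # only the LoRA weights and modules_to_save are trainable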

Use with Transformers

pip install transformers torch

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("suwesh/llamatron-1B-peft").to("cuda")
tokenizer = AutoTokenizer.from_pretrained("suwesh/llamatron-1B-peft")

input_text = "Hello, how are you?"
inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=512)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
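
Since the base model is instruction-tuned, wrapping the prompt in the tokenizer's chat template should generally give better results. A sketch reusing the model and tokenizer loaded above (the system message is only an example):

messages = [
    {"role": "system", "content": "detailed thinking on"},
    {"role": "user", "content": "Hello, how are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to("cuda")
outputs = model.generate(inputs, max_new_tokens=512)
# Slice off the prompt tokens so only the assistant's reply is decoded.
response = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)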

Or with the pipeline API

import torch
from transformers import AutoTokenizer, pipeline

pipe = pipeline(
    "text-generation",
    model="suwesh/llamatron-1B-peft",
    tokenizer=AutoTokenizer.from_pretrained("suwesh/llamatron-1B-peft"),
    torch_dtype=torch.bfloat16,
    device="cuda",
)
def to_model(input_text, system_message):
    messages = [
        {"role": "system", "content": system_message},
        {"role": "user", "content": input_text},
    ]
    outputs = pipe(
        messages,
        max_new_tokens=512,
        temperature=0.6,
        top_p=0.95,
    )
    # The pipeline returns the full chat history; the last turn is the assistant's reply.
    return outputs[0]["generated_text"][-1]["content"]

response = to_model("Write a joke about windows.", "detailed thinking on")
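
The system message "detailed thinking on" follows the convention used in the Nemotron post-training data, where it switches the model into long-form reasoning; "detailed thinking off" should produce more direct answers.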

Load an adapter checkpoint for further fine-tuning

from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("suwesh/llamatron-1B-peft")
# Load the LoRA adapter weights from a specific training checkpoint on top of the base model.
model = PeftModel.from_pretrained(base_model, "suwesh/llamatron-1B-peft", subfolder="checkpoint-11000")
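
If a standalone model is needed instead (e.g. for export or quantization), the adapter can be folded into the base weights with PEFT's merge_and_unload. A minimal sketch (the output directory name is illustrative):

merged_model = model.merge_and_unload()  # folds the LoRA deltas into the base weights
merged_model.save_pretrained("llamatron-1B-merged")
tokenizer.save_pretrained("llamatron-1B-merged")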

Training details

Initial training and validation losses: 1.69 | 1.67
Checkpoint 11000 training and validation losses: 1.06 | 1.09
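
Assuming these are mean token-level cross-entropy losses, the drop corresponds to validation perplexity falling from about exp(1.67) ≈ 5.3 to exp(1.09) ≈ 3.0.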

Evaluation details

We use the nvidia/Llama-3.1-Nemotron-Nano LLM as a judge to evaluate responses from the base Llama 3.2 1B Instruct model and our PEFT model. For each prompt, the judge is shown both responses together with the ground-truth answer; its preference counts are:

base: 122
peft: 388
tie: 29
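
Across the 539 judged prompts, the judge preferred the PEFT model's response about 72% of the time (388/539).

The system prompt given to the judge: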

system_message = """You are an expert evaluator comparing two AI responses to a user instruction. Use the following criteria:
- Clarity
- Factual correctness (compared to the reference answer)
- Instruction-following
- Depth of reasoning
Below is the reference answer, which is the ideal or expected response:
"""
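
As a minimal sketch of how a judge prompt could be assembled from this system message (the actual evaluation script is not part of this card; the function name, field layout, and final instruction are illustrative):

def build_judge_prompt(instruction, reference, response_a, response_b):
    # Present the reference answer and both candidate responses in one user turn.
    user_message = (
        f"Instruction:\n{instruction}\n\n"
        f"Reference answer:\n{reference}\n\n"
        f"Response A:\n{response_a}\n\n"
        f"Response B:\n{response_b}\n\n"
        "Which response is better? Answer 'A', 'B', or 'tie'."
    )
    return [
        {"role": "system", "content": system_message},
        {"role": "user", "content": user_message},
    ]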
