QWEN_Finetuned_HRV

Model description

This model fine-tunes Qwen2.5-0.5B-Instruct with LoRA adapters to generate personalized Heart Rate Variability (HRV) feedback. Training is parameter-efficient (PEFT): only the small set of LoRA weights is updated while the base model stays frozen, which keeps fine-tuning fast and memory-efficient.

At runtime:

1. The base model (Qwen2.5-0.5B-Instruct) is loaded with transformers.
2. The LoRA adapter weights are attached on top of the frozen base with PeftModel.from_pretrained() (they are not merged into the base weights unless you explicitly merge them).
3. The model reads structured HRV data (e.g., RMSSD, StressScore, device type).
4. It interprets the physiological patterns and generates human-readable health insights in natural language.

The adapter specializes the base model in interpreting HRV metrics and generating concise wellness summaries that reflect stress balance, recovery capacity, and overall autonomic performance.

CODE:

```python
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForCausalLM, TrainingArguments
from peft import LoraConfig, get_peft_model
from trl import SFTTrainer
import os
import torch

BASE_MODEL = "Qwen/Qwen2.5-0.5B-Instruct"

data_path = "/kaggle/input/hrv-data/output.jsonl"
assert os.path.exists(data_path)

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL, use_fast=True)
tokenizer.pad_token = tokenizer.eos_token  # Qwen ships no dedicated pad token
tokenizer.padding_side = "right"

# ==== model ====
model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL,
    device_map="auto",
    torch_dtype=torch.float16,  # safe on all Kaggle GPUs
)

# disable the KV cache during training; gradient checkpointing is left off
# below because it can interfere with LoRA gradients
model.config.use_cache = False

# ==== LoRA ====
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
    bias="none",
)
model = get_peft_model(model, peft_config)

# force LoRA params to be trainable (get_peft_model normally handles this)
for n, p in model.named_parameters():
    if "lora" in n:
        p.requires_grad = True

# simple sanity check
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"Trainable params: {trainable:,} / {total:,}")
```
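With r=16 on all seven projection modules, the sanity check should report on the order of 9M trainable parameters out of roughly 494M total for Qwen2.5-0.5B, i.e. about 2% (exact figures depend on the model revision and library versions).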

```python
# ==== data ====
dataset = load_dataset("json", data_files=data_path, split="train")

# messages -> chat prompt (prompt only; not used during training below)
def build_prompt_from_messages(msgs):
    sys_msg = next((m["content"] for m in msgs if m.get("role") == "system"), "")
    usr_msg = next((m["content"] for m in msgs if m.get("role") == "user"), "")
    return (
        f"<|im_start|>system\n{sys_msg.strip()}<|im_end|>\n"
        f"<|im_start|>user\n{usr_msg.strip()}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

# full training text: system + user + assistant answer, terminated with EOS
def formatting_func(samples):
    texts = []
    for msgs in samples["messages"]:
        sys_msg = next((m["content"] for m in msgs if m.get("role") == "system"), "")
        usr_msg = next((m["content"] for m in msgs if m.get("role") == "user"), "")
        ans_msg = next((m["content"] for m in msgs if m.get("role") == "assistant"), "")
        text = (
            f"<|im_start|>system\n{sys_msg.strip()}<|im_end|>\n"
            f"<|im_start|>user\n{usr_msg.strip()}<|im_end|>\n"
            f"<|im_start|>assistant\n{ans_msg.strip()}{tokenizer.eos_token}"
        )
        texts.append(text)
    return texts
```
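For reference, each line of output.jsonl is expected to carry a messages list with system, user, and assistant turns. A hypothetical record, shown as a Python dict (the assistant text is illustrative, not taken from the actual dataset):

```python
# Hypothetical training record; only the "messages" structure matters
# to formatting_func above.
record = {
    "messages": [
        {"role": "system",
         "content": "You are a health insights assistant. Analyze HRV data."},
        {"role": "user",
         "content": "- baseline: 0.0\n- RMSSD: 110.01\n- StressScore: 53.28\n"
                    "- device: other\n- RMSSD_zone: high\n- StressScore_zone: moderate"},
        {"role": "assistant",
         "content": "Your HRV profile shows strong recovery capacity with a moderate stress load..."},
    ]
}
```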

```python
# ==== training ====
train_args = TrainingArguments(
    output_dir="/kaggle/working/qwen-hrv-lora",
    num_train_epochs=2,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,   # effective batch size of 16
    learning_rate=2e-4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    logging_steps=10,
    save_steps=200,
    save_total_limit=2,
    fp16=True,                    # fp16 instead of bf16 for older Kaggle GPUs
    optim="adamw_torch",          # simple and stable without bitsandbytes
    gradient_checkpointing=False, # off to avoid gradient issues with LoRA
    max_grad_norm=0.3,
    report_to="none",
)

# note: tokenizer= and max_seq_length= match older TRL releases; newer
# versions move these settings into SFTConfig / processing_class
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    formatting_func=formatting_func,
    max_seq_length=1024,  # trim if OOM
    packing=False,        # safer for small datasets
    args=train_args,
)

trainer.train()

# ==== save ====
adapter_dir = "/kaggle/working/qwen-hrv-lora/adapter"
trainer.model.save_pretrained(adapter_dir)
tokenizer.save_pretrained("/kaggle/working/qwen-hrv-lora")
print(f"Saved LoRA adapter to: {adapter_dir}")
```
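If a standalone checkpoint is preferred over a separate adapter, PEFT's merge_and_unload() folds the LoRA deltas into the base weights. A minimal sketch (merged_dir is a hypothetical output path):

```python
# Merge the LoRA weights into the base model and save a plain checkpoint
# that loads with AutoModelForCausalLM alone (no peft dependency).
merged = trainer.model.merge_and_unload()
merged_dir = "/kaggle/working/qwen-hrv-merged"  # hypothetical path
merged.save_pretrained(merged_dir)
tokenizer.save_pretrained(merged_dir)
```

The trade-off: the merged checkpoint is the full model (roughly 1 GB in fp16), while the adapter alone is only a few tens of megabytes.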

INFERENCE:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch, os

BASE_MODEL = "Qwen/Qwen2.5-0.5B-Instruct"
ADAPTER_PATH = "/kaggle/input/qwen-hrv/qwen-hrv-lora/adapter"

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL, use_fast=True)
base = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL,
    device_map="auto",
    torch_dtype=torch.float16,
)

# ensure the adapter path is a local directory, not a Hub repo id
assert os.path.exists(os.path.join(ADAPTER_PATH, "adapter_config.json")), \
    "adapter_config.json not found"

model = PeftModel.from_pretrained(base, ADAPTER_PATH, is_trainable=False)
model.eval()

# sample HRV input
system_text = "You are a health insights assistant. Analyze HRV data."
user_text = (
    "- baseline: 0.0\n- RMSSD: 110.01\n- StressScore: 53.28\n"
    "- device: other\n- RMSSD_zone: high\n- StressScore_zone: moderate"
)

prompt = (
    f"<|im_start|>system\n{system_text}<|im_end|>\n"
    f"<|im_start|>user\n{user_text}<|im_end|>\n"
    f"<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(
        **inputs,
        max_new_tokens=400,
        temperature=0.3,
        top_p=0.9,
        pad_token_id=tokenizer.eos_token_id,
    )

# decode only the newly generated tokens; skip_special_tokens=True strips
# the <|im_start|> markers, so splitting the full text on them would fail
gen_tokens = output[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(gen_tokens, skip_special_tokens=True).strip())
```
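Since Qwen2.5-Instruct uses the ChatML format, the same prompt can also be built with the tokenizer's built-in chat template instead of hand-writing the special tokens (a sketch, assuming the default template shipped with the tokenizer):

```python
messages = [
    {"role": "system", "content": system_text},
    {"role": "user", "content": user_text},
]
# tokenize=False returns the formatted string; add_generation_prompt=True
# appends the opening "<|im_start|>assistant\n" so the model answers next
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
```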

RESULT:

Your HRV (heart rate variability) profile shows some promising signs, but it also points to a mix of stress and recovery needs. Here’s an analysis based on the provided data:

  • Baseline: At this point, your heart rhythm is stable, indicating that long-term trends have been well managed.
  • RMSSD: High values reflect strong parasympathetic activity, which supports effective recovery and adaptability.
  • StressScore: Moderate levels suggest your system is handling some stress load effectively, though not entirely balanced.
  • device: Mentioning "other" indicates that while HRV readings come from multiple sources, they might not be directly linked to your personal lifestyle habits or environmental conditions.

Zone breakdown:

  • High: This suggests your autonomic balance is currently favoring relaxation and recovery.
  • Moderate: While not perfect, this reflects a healthy state with manageable stress load.

Overall assessment:

  • Recovery potential is strong, as evidenced by the high RMSSD and moderate stress score.
  • Stress regulation appears balanced, though the mixed zone signals that your body may still be under some strain.

Suggestions:

  • Maintain current positive habits such as regular exercise, good sleep, and hydration to sustain recovery capacity.
  • Incorporate relaxation practices like mindfulness, deep breathing, or meditation to reinforce resilience.
  • Monitor future HRV readings closely to ensure they remain in line with your overall recovery patterns.
  • Be aware that stress can sometimes fluctuate between high and moderate levels, so adjust routines accordingly if needed.
