# OLMo-2-7B Fine-tuned on AICB Benchmark
This model is a fine-tuned version of allenai/OLMo-2-1124-7B-Instruct on the AICB (Analog Integrated Circuit Benchmark) dataset.
## Model Details
- Base Model: allenai/OLMo-2-1124-7B-Instruct
- Training Data: AICB Benchmark (520 QA samples on analog circuits)
- Task: Multiple-choice question answering for analog integrated circuits
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("vikrahul77/OLMo-2-7B-AICB-SFT")
tokenizer = AutoTokenizer.from_pretrained("vikrahul77/OLMo-2-7B-AICB-SFT")

# Example usage
messages = [
    {"role": "user", "content": "Your question about analog circuits here..."}
]
# add_generation_prompt=True appends the assistant turn marker so the
# model generates a reply instead of continuing the user message
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
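Since the AICB task is multiple-choice, the user message usually packs the question stem and answer options into a single prompt. The exact prompt template used during fine-tuning is not shown here, so the `format_mcq` helper and the closing instruction below are illustrative assumptions, not the dataset's actual format:

```python
# Hypothetical helper for building an AICB-style multiple-choice prompt;
# the real dataset's template may differ.
def format_mcq(question: str, options: list[str]) -> str:
    lines = [question]
    # Label options A, B, C, D in order
    for label, option in zip("ABCD", options):
        lines.append(f"{label}. {option}")
    lines.append("Answer with the letter of the correct option.")
    return "\n".join(lines)

prompt = format_mcq(
    "Which single-transistor amplifier configuration has the highest input impedance?",
    ["Common source", "Common gate", "Source follower", "Cascode"],
)
print(prompt)
```

The resulting string can then be passed as the `content` of the user message in the snippet above.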
## Training Details
- Framework: Oumi
- Training Type: Supervised Fine-Tuning (SFT)
- Hardware: NVIDIA H100 PCIe
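Oumi drives training runs from a YAML configuration file. The actual config for this run is not published on the card, so the outline below is a hypothetical sketch: the field names follow Oumi's general config layout, but every value (and the dataset reference) is an assumption:

```yaml
# Hypothetical Oumi SFT config sketch -- values are illustrative, not the
# settings actually used to train vikrahul77/OLMo-2-7B-AICB-SFT.
model:
  model_name: "allenai/OLMo-2-1124-7B-Instruct"

data:
  train:
    datasets:
      - dataset_name: "text_sft"   # placeholder for the AICB QA data

training:
  trainer_type: "TRL_SFT"
  num_train_epochs: 3              # assumed
  learning_rate: 2.0e-5            # assumed
  per_device_train_batch_size: 4   # assumed
```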
## Model Tree

Lineage of vikrahul77/OLMo-2-7B-AICB-SFT:

- Base model: allenai/OLMo-2-1124-7B
- Finetuned: allenai/OLMo-2-1124-7B-SFT
- Finetuned: allenai/OLMo-2-1124-7B-DPO
- Finetuned: allenai/OLMo-2-1124-7B-Instruct