This code demonstrates how to generate responses using MedCEG.

```python
import transformers
import torch
# 1. Load Model & Tokenizer
model_id = "XXX/MedCEG"
tokenizer = transformers.AutoTokenizer.from_pretrained(model_id)
model = transformers.AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
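# Optional (an assumption, not part of the original example): on GPUs with
# limited memory, and with the bitsandbytes package installed, the model can
# instead be loaded in 4-bit, e.g.:
#   quant_config = transformers.BitsAndBytesConfig(load_in_4bit=True)
#   model = transformers.AutoModelForCausalLM.from_pretrained(
#       model_id, quantization_config=quant_config, device_map="auto"
#   )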
# 2. Define Input
question = "A 78-year-old Caucasian woman presented with..."
suffix = "\nPut your final answer in \\boxed{}."
messages = [{"role": "user", "content": question + suffix}]
# 3. Generate
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=8196, do_sample=False)
decoded_response = tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(decoded_response)
```
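Because the prompt asks the model to place its final answer in `\boxed{}`, the answer can be recovered from the decoded response with a small helper. This is a minimal sketch (the `extract_boxed_answer` name and the regex are illustrative, not part of the MedCEG release), and it only handles non-nested braces:

```python
import re

def extract_boxed_answer(response: str) -> str | None:
    # Grab the contents of the last \boxed{...} in the text.
    # Note: this simple pattern does not handle nested braces.
    matches = re.findall(r"\\boxed\{([^{}]*)\}", response)
    return matches[-1] if matches else None

final_answer = extract_boxed_answer(decoded_response)
print(final_answer)
```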
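For long reasoning traces, it can be more convenient to watch tokens arrive as they are generated rather than waiting for `generate` to finish. A minimal streaming sketch using the Transformers built-in `TextStreamer`, reusing `tokenizer`, `model`, and `input_ids` from above:

```python
from transformers import TextStreamer

# Print tokens to stdout as they are generated, skipping the echoed prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
_ = model.generate(input_ids, max_new_tokens=8196, do_sample=False, streamer=streamer)
```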