Medical LLM SpiderCore 14B

Model Description

Medical LLM SpiderCore 14B is a large language model specialized for the Korean medical domain. Built on the Qwen3 architecture, it is optimized for medical question answering and clinical reasoning tasks.

Model Details

  • Model Architecture: Qwen3ForCausalLM
  • Base Model: Qwen/Qwen3-14B
  • Parameters: 14B
  • Precision: BF16
  • Languages: Korean, English
  • Specialization: Medical, Healthcare

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "edwinkim/medical_llm_spidercore_14B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Load in the checkpoint's native precision (BF16) and place layers automatically
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
)

# Example usage
prompt = "The patient presents with the following symptoms: headache, fever, cough. What are the possible diagnoses?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
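
Because the model is fine-tuned from Qwen/Qwen3-14B, it presumably inherits the Qwen3 chat template for instruction-style prompting. The sketch below is an assumption-based variant of the example above: the system prompt, message contents, and generation settings are illustrative, not settings documented for this checkpoint.

# Chat-style prompting (assumes the Qwen3 chat template is inherited from the base model)
messages = [
    {"role": "system", "content": "You are a careful medical assistant. Answer for educational purposes only."},
    {"role": "user", "content": "A patient presents with headache, fever, and cough. Which differential diagnoses should be considered?"},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
# Decode only the newly generated tokens, dropping the prompt
response = tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
print(response)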

Limitations and Warnings

โš ๏ธ Not for Medical Diagnosis

This model should only be used for educational and research purposes. Do not use this model for actual medical diagnosis or treatment decisions. Always consult with medical professionals.
