# Model Card for amit-s-agrahari-coder-lora-v1
This is a LoRA fine-tuned version of the deepseek-ai/deepseek-coder-1.3b-base
model, created to generate C programming solutions for algorithmic problems.
It was trained using PEFT (Parameter-Efficient Fine-Tuning) on a curated set of C programming tasks.
## Model Details

### Model Description
This is a parameter-efficient fine-tuned model based on DeepSeek Coder.
It focuses on generating high-quality, compilable C code for algorithmic and structured programming problems.
- Developed by: BlackIIIWhite
- Funded by: N/A
- Shared by: BlackIIIWhite
- Model type: Causal Language Model (LoRA fine-tuned)
- Language(s): C (code generation)
- License: MIT
- Finetuned from model: deepseek-ai/deepseek-coder-1.3b-base
### Model Sources
- Repository: BlackIIIWhite/amit-s-agrahari-coder-lora-v1
- Paper: N/A
- Demo: Coming soon
## Uses

### Direct Use
This model can be used directly for:
- Generating C programming solutions for algorithmic challenges (see the prompt sketch after this list)
- Code completion and function generation
- Educational purposes, as a demonstration of LoRA fine-tuning
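Because the underlying model is a base, completion-style code model rather than an instruction-tuned chat model, prompts that look like the start of a C source file often steer it better than conversational requests; how much the LoRA adapter changes this depends on the format of its training data, which is not published here. The two prompt styles below are illustrative assumptions, usable with the loading code in the Get Started section:

```python
# Instruction-style prompt: works well if the fine-tuning data paired
# natural-language task statements with C solutions (an assumption here).
instruction_prompt = "Write a C program to check whether a number is prime."

# Completion-style prompt: often more reliable with base code models,
# since the model continues the file rather than answering a question.
completion_prompt = (
    "// C program to check whether a number is prime\n"
    "#include <stdio.h>\n"
)
```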
### Downstream Use
- Further fine-tuning for other programming languages (a minimal setup sketch follows this list)
- Integration into code-assistant applications
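A minimal sketch of how further LoRA fine-tuning could be set up with PEFT. The hyperparameters and target modules below are illustrative assumptions, not the settings used to train this adapter; pair the resulting model with your own dataset and a standard transformers training loop.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "deepseek-ai/deepseek-coder-1.3b-base"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# Typical LoRA settings for a ~1.3B decoder; adjust to your task.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # attention projections (assumption)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```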
### Out-of-Scope Use
- Deploying in production environments without human code review
- Security-critical or safety-critical applications
- Generating sensitive or proprietary code without verification
## Bias, Risks, and Limitations
The model may:
- Produce incorrect or unoptimized code
- Miss edge cases
- Reflect biases present in its training data
### Recommendations
Always review and test generated code before relying on it; this model is intended for educational and research use. A lightweight first check is to verify that the output actually compiles, as sketched below.
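A minimal sketch of such a check, assuming gcc is on the PATH: write the generated code to a temporary file and attempt to compile it. This verifies compilation only, not correctness; functional tests against known inputs are still needed.

```python
import subprocess
import tempfile

# Stand-in for model output; in practice, use the decoded generation.
generated_code = """
#include <stdio.h>

int main(void) {
    printf("hello\\n");
    return 0;
}
"""

# Write the candidate program to a temporary .c file.
with tempfile.NamedTemporaryFile(suffix=".c", mode="w", delete=False) as f:
    f.write(generated_code)
    source_path = f.name

# Compile with warnings treated as errors to catch sloppy output early.
result = subprocess.run(
    ["gcc", "-Wall", "-Werror", source_path, "-o", source_path + ".out"],
    capture_output=True,
    text=True,
)
if result.returncode != 0:
    print("Compilation failed:\n" + result.stderr)
else:
    print("Compiled OK; run the binary against test cases next.")
```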
## How to Get Started with the Model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base_model = "deepseek-ai/deepseek-coder-1.3b-base"
adapter_model = "BlackIIIWhite/amit-s-agrahari-coder-lora-v1"

# Load the base model and tokenizer, then attach the LoRA adapter.
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)
model = PeftModel.from_pretrained(model, adapter_model)

# Generate a C solution from a natural-language prompt.
prompt = "Write a C program to calculate factorial of a number."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=150)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
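For deployment without the PEFT wrapper, the adapter weights can be folded into the base model with peft's merge_and_unload (a sketch; the output directory name is illustrative):

```python
# Merge the LoRA deltas into the base weights for standalone inference.
# After merging, the adapter can no longer be detached or swapped.
merged = model.merge_and_unload()
merged.save_pretrained("coder-lora-v1-merged")   # illustrative path
tokenizer.save_pretrained("coder-lora-v1-merged")
```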