---
license: mit
language:
- en
base_model:
- deepseek-ai/deepseek-coder-6.7b-instruct
tags:
- code
- Festi
---
|
# Festi Coder LoRA 2025-06 |
|
|
|
This is a LoRA fine-tuned version of `deepseek-coder-6.7b-instruct`, optimized for generating and understanding code built on the [Festi Framework](https://festi.io). The model is designed to assist with plugin generation, trait and service scaffolding, and other automation tasks relevant to the Festi ecosystem. |
|
|
|
--- |
|
|
|
## Model Details |
|
|
|
### Model Description |
|
|
|
- **Developed by:** Festi |
|
- **Model type:** Causal Language Model with LoRA fine-tuning |
|
- **Base model:** [`deepseek-coder-6.7b-instruct`](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct) |
|
- **Language(s):** English, PHP (Festi-specific syntax and DSL) |
|
- **License:** MIT (see the metadata above); the base model is distributed under its own license terms
|
- **Fine-tuned with:** PEFT + LoRA |
|
|
|
--- |
|
|
|
## Uses |
|
|
|
### Direct Use |
|
|
|
This model is intended for developers using the Festi Framework who want to: |
|
|
|
- Generate new plugins (e.g., SubscribePlugin) |
|
- Scaffold services, traits, and CLI commands
|
- Complete and explain Festi-specific PHP code |
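Requests like the ones above can be phrased as plain English instructions. A minimal prompt-building helper is sketched below; the wording, the plugin name, and the helper itself are illustrative assumptions, not a required format for this model:

```python
def build_festi_prompt(task: str) -> str:
    """Wrap a Festi task description in a short instruction.

    The phrasing here is illustrative; the model accepts free-form
    English instructions about Festi code.
    """
    return (
        "You are working in a Festi Framework PHP codebase. "
        f"{task} Return only the PHP code."
    )

# Example: a plugin-generation request.
prompt = build_festi_prompt(
    "Create a SubscribePlugin that collects emails for a newsletter."
)
print(prompt)
```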
|
|
|
### Out-of-Scope Use |
|
|
|
- General NLP tasks (e.g., chat, summarization) |
|
- Non-Festi PHP applications |
|
- High-stakes decision making |
|
|
|
--- |
|
|
|
## Bias, Risks, and Limitations |
|
|
|
This model is domain-specific and not suitable for general-purpose programming. Generated code may require manual review, especially in production settings. It inherits any limitations and biases from its base model (`deepseek-coder-6.7b-instruct`). |
|
|
|
### Recommendations |
|
|
|
- Always review generated code. |
|
- Do not expose model outputs directly to end-users without validation. |
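One lightweight validation step is to extract any generated PHP and syntax-check it with `php -l` before use. The sketch below assumes the model wraps PHP in ```php fenced blocks, which is not guaranteed; the helper names are hypothetical:

```python
import re
import shutil
import subprocess
import tempfile

def extract_php(text):
    """Return the first ```php fenced block in text, or None if absent."""
    match = re.search(r"```php\s*\n(.*?)```", text, re.DOTALL)
    return match.group(1) if match else None

def lint_php(code):
    """Syntax-check with `php -l` when a PHP CLI is available."""
    if shutil.which("php") is None:
        return True  # no PHP CLI here; treat as unverified rather than failing
    with tempfile.NamedTemporaryFile("w", suffix=".php", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run(["php", "-l", path], capture_output=True)
    return result.returncode == 0

# Example model output containing a fenced PHP block.
output = "Here is the plugin:\n```php\n<?php\necho 'subscribed';\n```"
code = extract_php(output)
```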
|
|
|
--- |
|
|
|
## How to Get Started with the Model |
|
|
|
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

peft_model_id = "Festi/festi-coder-lora-2025-06"
base_model = "deepseek-ai/deepseek-coder-6.7b-instruct"

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)
model = PeftModel.from_pretrained(model, peft_model_id)

# Build the prompt with the tokenizer's chat template rather than
# hard-coding special tokens.
messages = [
    {
        "role": "user",
        "content": "Create a plugin to collect emails for a newsletter subscription.",
    }
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```