---
license: mit
datasets:
- flwrlabs/code-alpaca-20k
language:
- en
metrics:
- accuracy
base_model:
- Qwen/Qwen2.5-Coder-0.5B-Instruct
pipeline_tag: text-generation
library_name: peft
tags:
- text-generation-inference
- code
---

# Model Card for FlowerTune-Qwen2.5-Coder-0.5B-Instruct-PEFT

![Training Loss](./train_loss.png)

## Evaluation Results (Accuracy)

- **MBPP**: 25.60 %
- **HumanEval**: 37.81 %
- **MultiPL-E (JS)**: 41.00 %
- **MultiPL-E (C++)**: 32.92 %
- **Average**: 34.34 %

## Model Details

This PEFT adapter has been trained using [Flower](https://flower.ai/), a friendly federated AI framework.

The adapter and benchmark results have been submitted to the [FlowerTune LLM Code Leaderboard](https://flower.ai/benchmarks/llm-leaderboard/code/).

Please check the following GitHub project for details on how to reproduce training and evaluation steps:

[https://github.com/ethicalabs-ai/FlowerTune-Qwen2.5-Coder-0.5B-Instruct/](https://github.com/ethicalabs-ai/FlowerTune-Qwen2.5-Coder-0.5B-Instruct/)

## How to Get Started with the Model

Load the adapter on top of the base model:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-Coder-0.5B-Instruct")
model = PeftModel.from_pretrained(base_model, "ethicalabs/FlowerTune-Qwen2.5-Coder-0.5B-Instruct")
```

## Communication Budget

8766.51 MB

## Virtual Machine Details

For this experiment, I used [CUDO Compute](https://www.cudocompute.com/?via=flowertune-llm) as the GPU compute provider.

| **Component** | **Specification**    |
|---------------|----------------------|
| **GPU**       | 1 × RTX A4000 16 GB  |
| **vCPUs**     | 4                    |
| **CPU**       | AMD EPYC (Milan)     |
| **Memory**    | 16 GB                |

## Cost Breakdown

### Compute Costs

| **Component** | **Details**   | **Cost/hr** |
|---------------|---------------|-------------|
| vCPUs         | 4 cores       | $0.0088/hr  |
| Memory        | 16 GB         | $0.056/hr   |
| GPU           | 1 × RTX A4000  | $0.25/hr    |

### Storage Costs

| **Component**    | **Details** | **Cost/hr** |
|------------------|-------------|-------------|
| Boot Disk Size   | 70 GB       | $0.0077/hr  |

### Network Costs

| **Component**         | **Details** | **Cost/hr** |
|-----------------------|-------------|-------------|
| Public IPv4 Address   | N/A         | $0.005/hr   |

### Total Cost

| **Total Cost/hr** |
|-------------------|
| **$0.3275/hr**    |

### Simulation Details

| **Parameter**      | **Value**              |
|--------------------|------------------------|
| **Runtime**        | 1924.52 seconds (00:32:04) |
| **Simulation Cost**| **$0.18**              |
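The figures above can be cross-checked with a little arithmetic, summing the per-hour rates from the tables and prorating by the runtime:

```python
# Hourly rates (USD/hr) from the cost tables above.
vcpu, memory, gpu = 0.0088, 0.056, 0.25
boot_disk, ipv4 = 0.0077, 0.005

total_per_hour = vcpu + memory + gpu + boot_disk + ipv4  # $0.3275/hr

runtime_seconds = 1924.52  # 00:32:04
simulation_cost = total_per_hour * runtime_seconds / 3600  # ~$0.18

print(f"${total_per_hour:.4f}/hr, simulation cost ${simulation_cost:.2f}")
```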

### Framework versions

- PEFT 0.14.0
- Flower 1.13.1