amolinao87 committed
Commit 8000553 (verified) · Parent: 7d07619

Model save

Files changed (1)
  1. README.md +10 -7
README.md CHANGED
@@ -19,7 +19,7 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [bigcode/starcoder2-3b](https://huggingface.co/bigcode/starcoder2-3b) on the None dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.4791
+ - Loss: 0.4398
 
  ## Model description
 
@@ -38,21 +38,24 @@ More information needed
  ### Training hyperparameters
 
  The following hyperparameters were used during training:
- - learning_rate: 3e-05
- - train_batch_size: 2
- - eval_batch_size: 2
+ - learning_rate: 1e-05
+ - train_batch_size: 1
+ - eval_batch_size: 1
  - seed: 42
+ - gradient_accumulation_steps: 2
+ - total_train_batch_size: 2
  - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
  - lr_scheduler_type: linear
- - num_epochs: 2
+ - num_epochs: 3
  - mixed_precision_training: Native AMP
 
  ### Training results
 
  | Training Loss | Epoch | Step | Validation Loss |
  |:-------------:|:-----:|:----:|:---------------:|
- | 1.9751 | 1.0 | 60 | 0.4862 |
- | 1.4621 | 2.0 | 120 | 0.4791 |
+ | 2.574 | 1.0 | 60 | 0.4433 |
+ | 2.5626 | 2.0 | 120 | 0.4411 |
+ | 3.1912 | 3.0 | 180 | 0.4398 |
 
 
  ### Framework versions
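
For reference, the updated hyperparameters map onto a 🤗 Trainer configuration roughly as follows. This is a minimal sketch, not the actual training script (which is not part of this commit); the output directory is an assumption, dataset handling and evaluation/logging settings are omitted, and fp16 assumes a CUDA device for Native AMP.

```python
# Hypothetical reconstruction of the TrainingArguments implied by the updated
# model card; the real training script is not included in this commit.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="starcoder2-3b-finetuned",  # assumed output path
    learning_rate=1e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=2,         # total train batch size = 1 * 2 = 2
    num_train_epochs=3,
    lr_scheduler_type="linear",
    optim="adamw_torch",                   # AdamW with default betas=(0.9, 0.999), eps=1e-8
    seed=42,
    fp16=True,                             # "Native AMP" mixed precision; assumes a CUDA device
)
```

The 60 optimizer steps per epoch in the results table are consistent with roughly 120 training examples at a total batch size of 2, but the dataset itself is not identified in the card.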