---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: casarf/comment_model_test
results: []
---
# casarf/comment_model_test
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results after the final epoch (19):
- Train Loss: 0.2065
- Validation Loss: 0.6270
- Train Accuracy: 0.7349
## Model description
More information needed
## Intended uses & limitations
More information needed
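Pending fuller documentation, here is a minimal, hedged loading sketch. It assumes the model was pushed to the Hub under this repo id with TensorFlow weights and a sequence-classification head (consistent with the DistilBERT fine-tune described above); the example text is illustrative only.

```python
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

# Load the fine-tuned checkpoint from the Hub (assumed repo id from this card)
repo_id = "casarf/comment_model_test"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = TFAutoModelForSequenceClassification.from_pretrained(repo_id)

# Score a single example comment (placeholder text)
inputs = tokenizer("An example comment to score.", return_tensors="tf")
logits = model(**inputs).logits  # shape: (1, num_labels)
```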
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: Adam (`beta_1=0.9`, `beta_2=0.999`, `epsilon=1e-08`, `amsgrad=False`, `jit_compile=True`, no weight decay or gradient clipping)
- learning rate schedule: PolynomialDecay from 2e-05 to 0.0 over 205 steps (`power=1.0`, `cycle=False`)
- training_precision: float32
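With `power=1.0` and `cycle=False`, the schedule above is a plain linear ramp from 2e-05 down to 0 over 205 optimizer steps. A pure-Python sketch of the decay formula (mirroring Keras's PolynomialDecay config, not the actual Keras class):

```python
def polynomial_decay(step, initial_lr=2e-05, decay_steps=205,
                     end_lr=0.0, power=1.0):
    """Learning rate at a given optimizer step (cycle=False)."""
    step = min(step, decay_steps)  # clamp: lr stays at end_lr afterwards
    fraction = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * fraction ** power + end_lr

lr_start = polynomial_decay(0)    # 2e-05 (initial learning rate)
lr_end = polynomial_decay(205)    # 0.0 (fully decayed)
```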
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.2042 | 0.6270 | 0.7349 | 0 |
| 0.2066 | 0.6270 | 0.7349 | 1 |
| 0.2124 | 0.6270 | 0.7349 | 2 |
| 0.2138 | 0.6270 | 0.7349 | 3 |
| 0.2062 | 0.6270 | 0.7349 | 4 |
| 0.2135 | 0.6270 | 0.7349 | 5 |
| 0.2113 | 0.6270 | 0.7349 | 6 |
| 0.2019 | 0.6270 | 0.7349 | 7 |
| 0.2055 | 0.6270 | 0.7349 | 8 |
| 0.2129 | 0.6270 | 0.7349 | 9 |
| 0.2129 | 0.6270 | 0.7349 | 10 |
| 0.2058 | 0.6270 | 0.7349 | 11 |
| 0.2016 | 0.6270 | 0.7349 | 12 |
| 0.2053 | 0.6270 | 0.7349 | 13 |
| 0.2114 | 0.6270 | 0.7349 | 14 |
| 0.2037 | 0.6270 | 0.7349 | 15 |
| 0.2063 | 0.6270 | 0.7349 | 16 |
| 0.2006 | 0.6270 | 0.7349 | 17 |
| 0.2114 | 0.6270 | 0.7349 | 18 |
| 0.2065 | 0.6270 | 0.7349 | 19 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.12.0
- Datasets 2.11.0
- Tokenizers 0.13.3