| Column | Type | Range / classes |
| --- | --- | --- |
| repo_id | string | lengths 4–110 |
| author | string | lengths 2–27, nullable |
| model_type | string | lengths 2–29, nullable |
| files_per_repo | int64 | 2–15.4k |
| downloads_30d | int64 | 0–19.9M |
| library | string | lengths 2–37, nullable |
| likes | int64 | 0–4.34k |
| pipeline | string | lengths 5–30, nullable |
| pytorch | bool | 2 classes |
| tensorflow | bool | 2 classes |
| jax | bool | 2 classes |
| license | string | lengths 2–30 |
| languages | string | lengths 4–1.63k, nullable |
| datasets | string | lengths 2–2.58k, nullable |
| co2 | string | 29 classes |
| prs_count | int64 | 0–125 |
| prs_open | int64 | 0–120 |
| prs_merged | int64 | 0–15 |
| prs_closed | int64 | 0–28 |
| discussions_count | int64 | 0–218 |
| discussions_open | int64 | 0–148 |
| discussions_closed | int64 | 0–70 |
| tags | string | lengths 2–513 |
| has_model_index | bool | 2 classes |
| has_metadata | bool | 1 class |
| has_text | bool | 1 class |
| text_length | int64 | 401–598k |
| is_nc | bool | 1 class |
| readme | string | lengths 0–598k |
| hash | string | length 32 |
Rocketknight1/t5-small-finetuned-xsum
|
Rocketknight1
|
t5
| 17 | 57 |
transformers
| 0 |
text2text-generation
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,540 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Rocketknight1/t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.7172
- Validation Loss: 2.3977
- Train Rouge1: 28.7469
- Train Rouge2: 7.9005
- Train Rougel: 22.5917
- Train Rougelsum: 22.6162
- Train Gen Len: 18.875
- Epoch: 0
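For reference, here is a minimal usage sketch (not part of the generated card) that loads this checkpoint with the `transformers` summarization pipeline; the input text is illustrative, and `framework="tf"` is requested because the repo lists TensorFlow weights only.
```python
from transformers import pipeline

# Minimal sketch; the example article is illustrative, not from the card.
summarizer = pipeline(
    "summarization",
    model="Rocketknight1/t5-small-finetuned-xsum",
    framework="tf",  # the repo lists TensorFlow weights only
)
article = (
    "The tower is 324 metres tall, about the same height as an 81-storey building, "
    "and the tallest structure in Paris."
)
print(summarizer(article, max_length=30, min_length=5))
```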
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Train Gen Len | Epoch |
|:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-------------:|:-----:|
| 2.7172 | 2.3977 | 28.7469 | 7.9005 | 22.5917 | 22.6162 | 18.875 | 0 |
### Framework versions
- Transformers 4.16.0.dev0
- TensorFlow 2.8.0-rc0
- Datasets 1.17.0
- Tokenizers 0.11.0
|
d22e271ee72fb8695ef49c6fdbef0d80
|
sd-concepts-library/minecraft-concept-art
|
sd-concepts-library
| null | 9 | 0 | null | 10 | null | false | false | false |
mit
| null | null | null | 1 | 0 | 1 | 0 | 3 | 3 | 0 |
[]
| false | true | true | 1,068 | false |
### minecraft-concept-art on Stable Diffusion
This is the `<concept>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:




|
af1899c249e7ccf1d7876d477c93defa
|
fathyshalab/all-roberta-large-v1-meta-5-16-5
|
fathyshalab
|
roberta
| 11 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,507 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-meta-5-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4797
- Accuracy: 0.28
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7721 | 1.0 | 1 | 2.6529 | 0.1889 |
| 2.2569 | 2.0 | 2 | 2.5866 | 0.2333 |
| 1.9837 | 3.0 | 3 | 2.5340 | 0.2644 |
| 1.6425 | 4.0 | 4 | 2.4980 | 0.2756 |
| 1.4612 | 5.0 | 5 | 2.4797 | 0.28 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
9038d3f29e215d9c0a84d787510b65fb
|
lilykaw/distilbert-base-uncased-finetuned-stsb
|
lilykaw
|
distilbert
| 13 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,571 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-stsb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5634
- Pearson: 0.8680
- Spearmanr: 0.8652
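As a usage sketch (not part of the generated card): STS-B is a regression task, so the model returns a single similarity logit per sentence pair; the sentences below are illustrative.
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "lilykaw/distilbert-base-uncased-finetuned-stsb"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

# Encode a sentence pair; STS-B scores are roughly on a 0-5 similarity scale.
inputs = tokenizer("A man is playing a guitar.", "Someone plays an instrument.", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)
```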
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|
| No log | 1.0 | 360 | 0.6646 | 0.8516 | 0.8494 |
| 1.0238 | 2.0 | 720 | 0.5617 | 0.8666 | 0.8637 |
| 0.3952 | 3.0 | 1080 | 0.6533 | 0.8649 | 0.8646 |
| 0.3952 | 4.0 | 1440 | 0.5889 | 0.8651 | 0.8625 |
| 0.2488 | 5.0 | 1800 | 0.5634 | 0.8680 | 0.8652 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
fff453a833641c7f90dac93cd69ca546
|
Kaludi/CSGO-Minimap-Layout-Generation
|
Kaludi
| null | 4 | 0 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
|
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'art', 'artistic', 'diffusers', 'cs:go', 'topview', 'map generator', 'layout', 'layout generator', 'map', 'csgo', 'improved layout', 'radar']
| false | true | true | 2,862 | false |
# CSGO Minimap Layout Generation

This is an improved version of my previous AI model, trained on top-down radar images of many CS:GO maps, and it can now produce custom map layouts in seconds. Unlike my previous model, it does not produce red or green boxes. The tag for this model is **"radar-topview"**. If you'd like a map layout similar to a specific map, add the map name before "radar-topview"; for example, for a generation similar to dust2, write **"dust2-radar-topview"**.
**Try the following prompt to get the best results:**
"fps radar-topview game map, flat shading, soft shadows, global illumination"
"fps radar topview map, polygonal, gradient background, pastel colors, soft shadows, global illumination, straight lines, insanely detailed"
**Map Radar Topviews this AI was trained on:**
- de_dust2
- de_inferno
- de_nuke
- de_mirage
- de_cache
- de_train
- de_cobblestone
- de_castle
- de_overpass
**Have fun generating map layouts!**
### CompVis
[Download csgoMiniMapLayoutsV2.ckpt (2.9GB)](https://huggingface.co/Kaludi/CSGO-Minimap-Layout-Generation/blob/main/csgoMiniMapLayoutsV2.ckpt)
### 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion Pipeline](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
```python
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler
import torch
prompt = (
"fps radar-topview game map, flat shading, soft shadows, global illumination")
model_id = "Kaludi/CSGO-Improved-Radar-Top-View-Map-Layouts"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("./result.jpg")
```
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights to the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
|
c35dc79f1bc6e8535c132d9d5a803570
|
malay-patel/bert-ww-finetuned-squad
|
malay-patel
|
bert
| 8 | 12 |
transformers
| 0 |
question-answering
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,755 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# malay-patel/bert-ww-finetuned-squad
This model is a fine-tuned version of [bert-large-cased-whole-word-masking-finetuned-squad](https://huggingface.co/bert-large-cased-whole-word-masking-finetuned-squad) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1766
- Train End Logits Accuracy: 0.9455
- Train Start Logits Accuracy: 0.9312
- Epoch: 2
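A minimal usage sketch (not from the original card); the question/context pair is illustrative, and `framework="tf"` is requested since the repo lists TensorFlow weights only.
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="malay-patel/bert-ww-finetuned-squad",
    framework="tf",  # the repo lists TensorFlow weights only
)
result = qa(
    question="What was the model fine-tuned from?",
    context="This checkpoint was fine-tuned from a whole-word-masking BERT model on SQuAD-style data.",
)
print(result["answer"], result["score"])
```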
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 16638, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:-----:|
| 0.5635 | 0.8374 | 0.7992 | 0 |
| 0.3369 | 0.8987 | 0.8695 | 1 |
| 0.1766 | 0.9455 | 0.9312 | 2 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.2
|
688a9cd20c5cff890a246aa62d1e77a1
|
din0s/bart-pt-asqa-ob
|
din0s
|
bart
| 11 | 1 |
transformers
| 0 |
text2text-generation
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,666 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-pt-asqa-ob
This model is a fine-tuned version of [vblagoje/bart_lfqa](https://huggingface.co/vblagoje/bart_lfqa) on the [ASQA](https://huggingface.co/datasets/din0s/asqa) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6901
- Rougelsum: 20.7527
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:---------:|
| No log | 1.0 | 355 | 1.6295 | 17.7502 |
| 1.6407 | 2.0 | 710 | 1.6144 | 18.5897 |
| 1.4645 | 3.0 | 1065 | 1.6222 | 19.3778 |
| 1.4645 | 4.0 | 1420 | 1.6522 | 19.6941 |
| 1.3678 | 5.0 | 1775 | 1.6528 | 20.3110 |
| 1.2671 | 6.0 | 2130 | 1.6879 | 20.6112 |
| 1.2671 | 7.0 | 2485 | 1.6901 | 20.7527 |
### Framework versions
- Transformers 4.23.0.dev0
- Pytorch 1.12.1+cu102
- Datasets 2.4.0
- Tokenizers 0.12.1
|
a50e35347134211d9bd0a2a3612ea460
|
victorialslocum/en_reciparse_model
|
victorialslocum
| null | 17 | 0 |
spacy
| 1 |
token-classification
| false | false | false |
mit
|
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['spacy', 'token-classification']
| false | true | true | 682 | false |
| Feature | Description |
| --- | --- |
| **Name** | `en_reciparse_model` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.3.1,<3.4.0` |
| **Default Pipeline** | `tok2vec`, `ner` |
| **Components** | `tok2vec`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
### Label Scheme
<details>
<summary>View label scheme (1 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `INGREDIENT` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 87.97 |
| `ENTS_P` | 88.54 |
| `ENTS_R` | 87.40 |
| `TOK2VEC_LOSS` | 37557.71 |
| `NER_LOSS` | 19408.65 |
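A minimal usage sketch, assuming the packaged pipeline has been installed locally (for example from the wheel distributed with this repo) so that `spacy.load` can resolve it; the recipe text is illustrative.
```python
import spacy

nlp = spacy.load("en_reciparse_model")
doc = nlp("Add 2 cups of flour, a pinch of salt, and one beaten egg.")
# Entities are tagged with the single INGREDIENT label from the scheme above.
print([(ent.text, ent.label_) for ent in doc.ents])
```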
|
4132923bcfd606c7bf907e815c2794c6
|
atrevidoantonio/atrevidoantonio1
|
atrevidoantonio
| null | 19 | 3 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['text-to-image', 'stable-diffusion']
| false | true | true | 547 | false |
### atrevidoantonio1 Dreambooth model trained by atrevidoantonio with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
.JPG)
|
6b281ecfc7463a797088ca8549287e2e
|
Galuh/wav2vec2-large-xlsr-indonesian
|
Galuh
|
wav2vec2
| 10 | 17 |
transformers
| 1 |
automatic-speech-recognition
| true | false | true |
apache-2.0
|
['id']
|
['common_voice']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
| true | true | true | 3,564 | false |
# Wav2Vec2-Large-XLSR-Indonesian
This is the model for Wav2Vec2-Large-XLSR-Indonesian, a fine-tuned
[facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
model on the [Indonesian Common Voice dataset](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "id", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("Galuh/wav2vec2-large-xlsr-indonesian")
model = Wav2Vec2ForCTC.from_pretrained("Galuh/wav2vec2-large-xlsr-indonesian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Indonesian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "id", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("Galuh/wav2vec2-large-xlsr-indonesian")
model = Wav2Vec2ForCTC.from_pretrained("Galuh/wav2vec2-large-xlsr-indonesian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\'\”\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the preprocessed speech arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 18.32 %
## Training
The Common Voice `train`, `validation`, and ... datasets were used for training as well as ... and ... # TODO
The script used for training can be found [here](https://github.com/galuhsahid/wav2vec2-indonesian)
(will be available soon)
|
c63205a35019efa5adc37820a4fd9adc
|
Laurie/QA-distilbert
|
Laurie
|
distilbert
| 12 | 6 |
transformers
| 0 |
question-answering
| true | false | false |
apache-2.0
| null |
['squad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,247 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# QA-distilbert
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5374
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 2.0457 |
| 2.5775 | 2.0 | 500 | 1.6041 |
| 2.5775 | 3.0 | 750 | 1.5374 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
b9f4473ce48b0e85a08b0732c91e7d5a
|
sd-concepts-library/egorey
|
sd-concepts-library
| null | 9 | 0 | null | 1 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 983 | false |
### egorey on Stable Diffusion
This is the `<gorey>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:




|
4a95ca226c23146a101b89302f943109
|
Helsinki-NLP/opus-mt-lu-en
|
Helsinki-NLP
|
marian
| 10 | 21 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 768 | false |
### opus-mt-lu-en
* source languages: lu
* target languages: en
* OPUS readme: [lu-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lu-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lu-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lu-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lu-en/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lu.en | 35.7 | 0.517 |
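A hedged usage sketch (not part of the original card) using the standard Marian classes from `transformers`; the source sentence is a placeholder for real `lu` input.
```python
from transformers import MarianMTModel, MarianTokenizer

name = "Helsinki-NLP/opus-mt-lu-en"
tokenizer = MarianTokenizer.from_pretrained(name)
model = MarianMTModel.from_pretrained(name)

# Placeholder source text; replace with a real sentence in the source language (lu).
batch = tokenizer(["<source sentence in lu>"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```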
|
4fccd7c0c1791292dad81713619a3c2b
|
cansen88/PromptGenerator_5_topic
|
cansen88
|
gpt2
| 9 | 2 |
transformers
| 0 |
text-generation
| false | true | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,660 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# PromptGenerator_5_topic
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 10.6848
- Validation Loss: 10.6672
- Epoch: 4
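A hedged usage sketch (not part of the generated card); the prompt is illustrative since the expected input format is not documented, and the TF model class is used because the repo lists TensorFlow weights only.
```python
from transformers import AutoTokenizer, TFAutoModelForCausalLM

name = "cansen88/PromptGenerator_5_topic"
tokenizer = AutoTokenizer.from_pretrained(name)
model = TFAutoModelForCausalLM.from_pretrained(name)

# Illustrative prompt; adjust to the format the model was trained on.
inputs = tokenizer("A prompt about", return_tensors="tf")
outputs = model.generate(**inputs, max_length=40, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```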
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': -999, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 10.6864 | 10.6743 | 0 |
| 10.7045 | 10.6736 | 1 |
| 10.7114 | 10.6722 | 2 |
| 10.7082 | 10.6701 | 3 |
| 10.6848 | 10.6672 | 4 |
### Framework versions
- Transformers 4.21.1
- TensorFlow 2.8.2
- Datasets 2.4.0
- Tokenizers 0.12.1
|
433f773202ece90975fedac31e78cf2b
|
gokuls/mobilebert_sa_GLUE_Experiment_logit_kd_mrpc_128
|
gokuls
|
mobilebert
| 17 | 2 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,542 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_sa_GLUE_Experiment_logit_kd_mrpc_128
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5213
- Accuracy: 0.6740
- F1: 0.7787
- Combined Score: 0.7264
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.6368 | 1.0 | 29 | 0.5564 | 0.6838 | 0.8122 | 0.7480 |
| 0.6099 | 2.0 | 58 | 0.5557 | 0.6838 | 0.8122 | 0.7480 |
| 0.611 | 3.0 | 87 | 0.5555 | 0.6838 | 0.8122 | 0.7480 |
| 0.6101 | 4.0 | 116 | 0.5568 | 0.6838 | 0.8122 | 0.7480 |
| 0.608 | 5.0 | 145 | 0.5540 | 0.6838 | 0.8122 | 0.7480 |
| 0.6037 | 6.0 | 174 | 0.5492 | 0.6838 | 0.8122 | 0.7480 |
| 0.5761 | 7.0 | 203 | 0.6065 | 0.6103 | 0.6851 | 0.6477 |
| 0.4782 | 8.0 | 232 | 0.5341 | 0.6863 | 0.7801 | 0.7332 |
| 0.4111 | 9.0 | 261 | 0.5213 | 0.6740 | 0.7787 | 0.7264 |
| 0.3526 | 10.0 | 290 | 0.5792 | 0.6863 | 0.7867 | 0.7365 |
| 0.3188 | 11.0 | 319 | 0.5760 | 0.6936 | 0.7764 | 0.7350 |
| 0.2918 | 12.0 | 348 | 0.6406 | 0.6912 | 0.7879 | 0.7395 |
| 0.2568 | 13.0 | 377 | 0.5908 | 0.6765 | 0.7537 | 0.7151 |
| 0.2472 | 14.0 | 406 | 0.5966 | 0.6863 | 0.7664 | 0.7263 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
|
1727529bd7b09a7b42603eb5fafcd210
|
michelecafagna26/vinvl-base-finetuned-hl-scenes-image-captioning
|
michelecafagna26
|
bert
| 7 | 2 |
pytorch
| 0 |
image-to-text
| true | false | false |
apache-2.0
| null |
['hl-scenes']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['pytorch', 'image-to-text']
| false | true | true | 3,732 | false |
# Model Card: VinVL for Captioning 🖼️
[Microsoft's VinVL](https://github.com/microsoft/Oscar) base fine-tuned on [HL-scenes]() dataset for **scene description generation** downstream task.
# Model fine-tuning 🏋️
The model has been finetuned for 10 epochs on the scenes captions of the [HL]() dataset (available on 🤗 HUB: [michelecafagna26/hl](https://huggingface.co/datasets/michelecafagna26/hl))
# Test set metrics 📈
Obtained with beam size 5 and max length 20
| Bleu-1 | Bleu-2 | Bleu-3 | Bleu-4 | METEOR | ROUGE-L | CIDEr | SPICE |
|--------|--------|--------|--------|--------|---------|-------|-------|
| 0.68 | 0.55 | 0.45 | 0.36 | 0.36 | 0.63 | 1.42 | 0.40 |
# Usage and Installation:
More info about how to install and use this model can be found here: [michelecafagna26/VinVL](https://github.com/michelecafagna26/VinVL)
# Feature extraction ⛏️
This model has a separate Visualbackbone used to extract features.
More info about:
- the model: [michelecafagna26/vinvl_vg_x152c4](https://huggingface.co/michelecafagna26/vinvl_vg_x152c4)
- the usage: [michelecafagna26/vinvl-visualbackbone](https://github.com/michelecafagna26/vinvl-visualbackbone)
# Quick start: 🚀
```python
import torch

from transformers.pytorch_transformers import BertConfig, BertTokenizer
from oscar.modeling.modeling_bert import BertForImageCaptioning
from oscar.wrappers import OscarTensorizer

ckpt = "path/to/the/checkpoint"
device = "cuda" if torch.cuda.is_available() else "cpu"
# original code
config = BertConfig.from_pretrained(ckpt)
tokenizer = BertTokenizer.from_pretrained(ckpt)
model = BertForImageCaptioning.from_pretrained(ckpt, config=config).to(device)
# This takes care of the preprocessing
tensorizer = OscarTensorizer(tokenizer=tokenizer, device=device)
# numpy-arrays with shape (1, num_boxes, feat_size)
# feat_size is 2054 by default in VinVL
visual_features = torch.from_numpy(feat_obj).to(device).unsqueeze(0)
# labels are usually extracted by the features extractor
labels = [['boat', 'boat', 'boat', 'bottom', 'bush', 'coat', 'deck', 'deck', 'deck', 'dock', 'hair', 'jacket']]
inputs = tensorizer.encode(visual_features, labels=labels)
outputs = model(**inputs)
pred = tensorizer.decode(outputs)
# the output looks like this:
# pred = {0: [{'caption': 'in a library', 'conf': 0.7070220112800598}]}
```
# Citations 🧾
VinVL model finetuned on scenes descriptions:
```BibTeX
@inproceedings{cafagna-etal-2022-understanding,
title = "Understanding Cross-modal Interactions in {V}{\&}{L} Models that Generate Scene Descriptions",
author = "Cafagna, Michele and
Deemter, Kees van and
Gatt, Albert",
booktitle = "Proceedings of the Workshop on Unimodal and Multimodal Induction of Linguistic Structures (UM-IoS)",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates (Hybrid)",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.umios-1.6",
pages = "56--72",
abstract = "Image captioning models tend to describe images in an object-centric way, emphasising visible objects. But image descriptions can also abstract away from objects and describe the type of scene depicted. In this paper, we explore the potential of a state of the art Vision and Language model, VinVL, to caption images at the scene level using (1) a novel dataset which pairs images with both object-centric and scene descriptions. Through (2) an in-depth analysis of the effect of the fine-tuning, we show (3) that a small amount of curated data suffices to generate scene descriptions without losing the capability to identify object-level concepts in the scene; the model acquires a more holistic view of the image compared to when object-centric descriptions are generated. We discuss the parallels between these results and insights from computational and cognitive science research on scene perception.",
}
```
Please consider citing the original project and the VinVL paper
```BibTeX
@misc{han2021image,
title={Image Scene Graph Generation (SGG) Benchmark},
author={Xiaotian Han and Jianwei Yang and Houdong Hu and Lei Zhang and Jianfeng Gao and Pengchuan Zhang},
year={2021},
eprint={2107.12604},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
@inproceedings{zhang2021vinvl,
title={Vinvl: Revisiting visual representations in vision-language models},
author={Zhang, Pengchuan and Li, Xiujun and Hu, Xiaowei and Yang, Jianwei and Zhang, Lei and Wang, Lijuan and Choi, Yejin and Gao, Jianfeng},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={5579--5588},
year={2021}
}
```
|
7fc92156e3ffaa858280464c323623be
|
Buntopsih/novgoranstefanovski
|
Buntopsih
| null | 24 | 4 |
diffusers
| 2 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 3 | 2 | 1 | 0 | 0 | 0 | 0 |
['text-to-image']
| false | true | true | 2,202 | false |
### novgoranstefanovski on Stable Diffusion via Dreambooth trained on the [fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
#### Model by Buntopsih
This is the Stable Diffusion model fine-tuned on the novgoranstefanovski concept, taught to Stable Diffusion with DreamBooth.
It can be used by modifying the `instance_prompt(s)`: **Robert, retro, Greg, Kim**
You can also train your own concepts and upload them to the library by using [the fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb).
You can run your new concept via A1111 Colab :[Fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
Sample pictures of this concept:
Kim
Greg
retro
Robert





|
3a5af9f94fc3ee5b28a9663f52434c62
|
TransQuest/microtransquest-en_cs-it-smt
|
TransQuest
|
xlm-roberta
| 12 | 25 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
|
['en-cs']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['Quality Estimation', 'microtransquest']
| false | true | true | 5,279 | false |
# TransQuest: Translation Quality Estimation with Cross-lingual Transformers
The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows, as QE systems have numerous potential uses. They can be employed to select the best translation when several translation engines are available or can inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. The quality estimation can be done at different levels: document level, sentence level and word level.
With TransQuest, we have opensourced our research in translation quality estimation which also won the sentence-level direct assessment quality estimation shared task in [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest).
## Features
- Sentence-level translation quality estimation on both aspects: predicting post editing efforts and direct assessment.
- Word-level translation quality estimation capable of predicting quality of source words, target words and target gaps.
- Outperforms current state-of-the-art quality estimation methods such as DeepQuest and OpenKiwi in all the language pairs tested.
- Pre-trained quality estimation models for fifteen language pairs are available in [HuggingFace.](https://huggingface.co/TransQuest)
## Installation
### From pip
```bash
pip install transquest
```
### From Source
```bash
git clone https://github.com/TharinduDR/TransQuest.git
cd TransQuest
pip install -r requirements.txt
```
## Using Pre-trained Models
```python
from transquest.algo.word_level.microtransquest.run_model import MicroTransQuestModel
import torch
model = MicroTransQuestModel("xlmroberta", "TransQuest/microtransquest-en_cs-it-smt", labels=["OK", "BAD"], use_cuda=torch.cuda.is_available())
source_tags, target_tags = model.predict([["if not , you may not be protected against the diseases . ", "ja tā nav , Jūs varat nepasargāt no slimībām . "]])
```
## Documentation
For more details follow the documentation.
1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip.
2. **Architectures** - Checkout the architectures implemented in TransQuest
1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures; MonoTransQuest and SiameseTransQuest to perform sentence level quality estimation.
2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word level quality estimation.
3. **Examples** - We have provided several examples on how to use TransQuest in recent WMT quality estimation shared tasks.
1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/)
2. [Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/)
4. **Pre-trained Models** - We have provided pretrained quality estimation models for fifteen language pairs covering both sentence-level and word-level
1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/)
2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/)
5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest
## Citations
If you are using the word-level architecture, please consider citing this paper which is accepted to [ACL 2021](https://2021.aclweb.org/).
```bash
@InProceedings{ranasinghe2021,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers},
booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics},
year = {2021}
}
```
If you are using the sentence-level architectures, please consider citing these papers which were presented in [COLING 2020](https://coling2020.org/) and in [WMT 2020](http://www.statmt.org/wmt20/) at EMNLP 2020.
```bash
@InProceedings{transquest:2020a,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers},
booktitle = {Proceedings of the 28th International Conference on Computational Linguistics},
year = {2020}
}
```
```bash
@InProceedings{transquest:2020b,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest at WMT2020: Sentence-Level Direct Assessment},
booktitle = {Proceedings of the Fifth Conference on Machine Translation},
year = {2020}
}
```
|
3a52e3637150656d7cdbdba244001f9f
|
bigmorning/whisper3_0005
|
bigmorning
|
whisper
| 7 | 6 |
transformers
| 0 |
automatic-speech-recognition
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,666 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper3_0005
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.1592
- Train Accuracy: 0.0175
- Validation Loss: 2.8062
- Validation Accuracy: 0.0199
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 5.0832 | 0.0116 | 4.4298 | 0.0124 | 0 |
| 4.3130 | 0.0131 | 4.0733 | 0.0141 | 1 |
| 3.9211 | 0.0146 | 3.6762 | 0.0157 | 2 |
| 3.5505 | 0.0159 | 3.3453 | 0.0171 | 3 |
| 3.1592 | 0.0175 | 2.8062 | 0.0199 | 4 |
### Framework versions
- Transformers 4.25.0.dev0
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.2
|
d7b5bbd9e44a9aa32a20e3c4586cd78c
|
IIIT-L/xlm-roberta-large-finetuned-TRAC-DS
|
IIIT-L
|
xlm-roberta
| 9 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,553 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-large-finetuned-TRAC-DS
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0992
- Accuracy: 0.3342
- Precision: 0.1114
- Recall: 0.3333
- F1: 0.1670
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.1187640010910775e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 43
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.1358 | 0.25 | 612 | 1.1003 | 0.4436 | 0.1479 | 0.3333 | 0.2049 |
| 1.1199 | 0.5 | 1224 | 1.1130 | 0.4436 | 0.1479 | 0.3333 | 0.2049 |
| 1.1221 | 0.75 | 1836 | 1.0992 | 0.3342 | 0.1114 | 0.3333 | 0.1670 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.10.1+cu111
- Datasets 2.3.2
- Tokenizers 0.12.1
|
84e6d183a4835383ef011feb92b8f9e5
|
corgito/finetuning-sentiment-model-3000-samples
|
corgito
|
distilbert
| 13 | 13 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['imdb']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,053 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3105
- Accuracy: 0.87
- F1: 0.8713
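A minimal usage sketch (not from the generated card); the review text is illustrative, and the emitted label names depend on the saved config (they may appear as `LABEL_0`/`LABEL_1`).
```python
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="corgito/finetuning-sentiment-model-3000-samples",
)
print(classifier("This movie was a pleasant surprise; I enjoyed every minute of it."))
```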
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.0
- Tokenizers 0.12.1
|
f4085a13da58234da67dd50fab56902a
|
Helsinki-NLP/opus-mt-nso-fi
|
Helsinki-NLP
|
marian
| 10 | 7 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 | false |
### opus-mt-nso-fi
* source languages: nso
* target languages: fi
* OPUS readme: [nso-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nso-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/nso-fi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-fi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-fi/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.nso.fi | 27.8 | 0.523 |
|
07e018d5d7947ac8997cd932114efa7e
|
NbAiLab/nb-gpt-j-6B
|
NbAiLab
|
gptj
| 10 | 304 |
transformers
| 8 |
text-generation
| true | false | false |
apache-2.0
|
['no', 'nb', 'nn']
|
['NbAiLab/NCC', 'mc4', 'oscar']
| null | 0 | 0 | 0 | 0 | 2 | 1 | 1 |
['pytorch', 'causal-lm']
| false | true | true | 7,744 | false |
- **Release ✨v1✨** (January 18th, 2023) *[Full-precision](https://huggingface.co/NbAiLab/nb-gpt-j-6B/tree/v1), [sharded](https://huggingface.co/NbAiLab/nb-gpt-j-6B/tree/v1-sharded), [half-precision](https://huggingface.co/NbAiLab/nb-gpt-j-6B/tree/v1-float16), and [mesh-transformers-jax](https://huggingface.co/NbAiLab/nb-gpt-j-6B/tree/v1-mesh) weights*
<details><summary>All checkpoints</summary>
- **Release v1beta5** (December 18th, 2022) *[Full-precision](https://huggingface.co/NbAiLab/nb-gpt-j-6B/tree/v1beta5), [sharded](https://huggingface.co/NbAiLab/nb-gpt-j-6B/tree/v1beta5-sharded), and [half-precision](https://huggingface.co/NbAiLab/nb-gpt-j-6B/tree/v1beta5-float16) weights*
- **Release v1beta4** (October 28th, 2022) *[Full-precision](https://huggingface.co/NbAiLab/nb-gpt-j-6B/tree/v1beta4), [sharded](https://huggingface.co/NbAiLab/nb-gpt-j-6B/tree/v1beta4-sharded), and [half-precision](https://huggingface.co/NbAiLab/nb-gpt-j-6B/tree/v1beta4-float16) weights*
- **Release v1beta3** (August 8th, 2022) *[Full-precision](https://huggingface.co/NbAiLab/nb-gpt-j-6B/tree/v1beta3), [sharded](https://huggingface.co/NbAiLab/nb-gpt-j-6B/tree/v1beta3-sharded), and [half-precision](https://huggingface.co/NbAiLab/nb-gpt-j-6B/tree/v1beta3-float16) weights*
- **Release v1beta2** (June 18th, 2022) *[Full-precision](https://huggingface.co/NbAiLab/nb-gpt-j-6B/tree/v1beta2), [sharded](https://huggingface.co/NbAiLab/nb-gpt-j-6B/tree/sharded), and [half-precision](https://huggingface.co/NbAiLab/nb-gpt-j-6B/tree/v1beta2-float16) weights*
- **Release v1beta1** (April 28th, 2022) *[Half-precision](https://huggingface.co/NbAiLab/nb-gpt-j-6B/tree/v1beta1-float16) weights*
</details>
# NB-GPT-J-6B
## Demo: https://ai.nb.no/demo/nb-gpt-j-6B/ (Be patient, it runs on CPU 😅)
## Model Description
NB-GPT-J-6B is a Norwegian finetuned version of GPT-J 6B, a transformer model trained using Ben Wang's [Mesh Transformer JAX](https://github.com/kingoflolz/mesh-transformer-jax/). "GPT-J" refers to the class of model, while "6B" represents the number of trainable parameters (6 billion parameters).
<figure>
| Hyperparameter | Value |
|----------------------|------------|
| \\(n_{parameters}\\) | 6053381344 |
| \\(n_{layers}\\) | 28* |
| \\(d_{model}\\) | 4096 |
| \\(d_{ff}\\) | 16384 |
| \\(n_{heads}\\) | 16 |
| \\(d_{head}\\) | 256 |
| \\(n_{ctx}\\) | 2048 |
| \\(n_{vocab}\\) | 50257/50400† (same tokenizer as GPT-2/3) |
| Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) |
| RoPE Dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) |
<figcaption><p><strong>*</strong> Each layer consists of one feedforward block and one self attention block.</p>
<p><strong>†</strong> Although the embedding matrix has a size of 50400, only 50257 entries are used by the GPT-2 tokenizer.</p></figcaption></figure>
The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model
dimension is split into 16 heads, each with a dimension of 256. Rotary Position Embedding (RoPE) is applied to 64
dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as
GPT-2/GPT-3.
## Training data
NB-GPT-J-6B was finetuned on [NCC](https://huggingface.co/datasets/NbAiLab/NCC), the Norwegian Colossal Corpus, plus other Internet sources like Wikipedia, mC4, and OSCAR.
## Training procedure
This model was finetuned for 130 billion tokens over 1,000,000 steps on a TPU v3-8 VM. It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token correctly.
## Intended Use and Limitations
NB-GPT-J-6B learns an inner representation of the Norwegian language that can be used to extract features useful for downstream tasks. The model is, however, best at what it was pretrained for, which is generating text from a prompt.
### How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("NbAiLab/nb-gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("NbAiLab/nb-gpt-j-6B")
```
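A short, hedged generation sketch with the objects loaded above; the Norwegian prompt and the sampling settings are illustrative.
```python
import torch

prompt = "Norge er et land med"  # illustrative Norwegian prompt
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.95, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```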
### Limitations and Biases
As the original GPT-J model, the core functionality of NB-GPT-J-6B is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting NB-GPT-J-6B it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon NB-GPT-J-6B to produce factually accurate output.
The original GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See [Sections 5 and 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed analysis of the biases in the Pile. A fine-grained analysis of the bias contained in the corpus used for fine-tuning is still pending.
As with all language models, it is hard to predict in advance how NB-GPT-J-6B will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
## Evaluation results
We still have to find proper datasets to evaluate the model, so help is welcome!
## Citation and Related Information
### BibTeX entry
To cite this model or the corpus used:
```bibtex
@inproceedings{kummervold2021operationalizing,
title={Operationalizing a National Digital Library: The Case for a Norwegian Transformer Model},
author={Kummervold, Per E and De la Rosa, Javier and Wetjen, Freddy and Brygfjeld, Svein Arne},
booktitle={Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa)},
pages={20--29},
year={2021},
url={https://aclanthology.org/2021.nodalida-main.3/}
}
```
If you use this model, we would love to hear about it! Reach out on twitter, GitHub, Discord, or shoot us an email.
## Disclaimer
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (The National Library of Norway) be liable for any results arising from the use made by third parties of these models.
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/), as well as the Cloud TPU team for providing early access to the [Cloud TPU VM](https://cloud.google.com/blog/products/compute/introducing-cloud-tpu-vms) Alpha. Special thanks to [Stella Biderman](https://www.stellabiderman.com) for her general openness, and to [Ben Wang](https://github.com/kingoflolz/mesh-transformer-jax) for the main codebase.
|
f6aecb191bc636ebc8a480e50559688d
|
cross-encoder/nli-MiniLM2-L6-H768
|
cross-encoder
|
roberta
| 10 | 3,083 |
transformers
| 2 |
zero-shot-classification
| true | false | false |
apache-2.0
|
['en']
|
['multi_nli', 'snli']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['MiniLMv2']
| false | true | true | 2,421 | false |
# Cross-Encoder for Natural Language Inference
This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class.
## Training Data
The model was trained on the [SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral.
## Performance
For evaluation results, see [SBERT.net - Pretrained Cross-Encoder](https://www.sbert.net/docs/pretrained_cross-encoders.html#nli).
## Usage
Pre-trained models can be used like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('cross-encoder/nli-MiniLM2-L6-H768')
scores = model.predict([('A man is eating pizza', 'A man eats something'), ('A black race car starts up in front of a crowd of people.', 'A man is driving down a lonely road.')])
#Convert scores to labels
label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[score_max] for score_max in scores.argmax(axis=1)]
```
## Usage with Transformers AutoModel
You can use the model also directly with Transformers library (without SentenceTransformers library):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/nli-MiniLM2-L6-H768')
tokenizer = AutoTokenizer.from_pretrained('cross-encoder/nli-MiniLM2-L6-H768')
features = tokenizer(['A man is eating pizza', 'A black race car starts up in front of a crowd of people.'], ['A man eats something', 'A man is driving down a lonely road.'], padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
    scores = model(**features).logits
label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)]
print(labels)
```
## Zero-Shot Classification
This model can also be used for zero-shot-classification:
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model='cross-encoder/nli-MiniLM2-L6-H768')
sent = "Apple just announced the newest iPhone X"
candidate_labels = ["technology", "sports", "politics"]
res = classifier(sent, candidate_labels)
print(res)
```
|
dc99091b60d9b82f4c732b4645a15b84
|
jonatasgrosman/exp_w2v2t_ru_unispeech-ml_s569
|
jonatasgrosman
|
unispeech
| 10 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['ru']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'ru']
| false | true | true | 500 | false |
# exp_w2v2t_ru_unispeech-ml_s569
Fine-tuned [microsoft/unispeech-large-multi-lingual-1500h-cv](https://huggingface.co/microsoft/unispeech-large-multi-lingual-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (ru)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
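A minimal sketch using the HuggingSound tool mentioned above (the audio path is a placeholder):
```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_ru_unispeech-ml_s569")
audio_paths = ["/path/to/file.mp3"]  # placeholder; speech should be sampled at 16 kHz
transcriptions = model.transcribe(audio_paths)
print(transcriptions)
```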
|
58a7ed42bb6432a0eaa9f9e3e6c4d73e
|
bert-large-cased
| null |
bert
| 10 | 160,722 |
transformers
| 4 |
fill-mask
| true | true | true |
apache-2.0
|
['en']
|
['bookcorpus', 'wikipedia']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 9,138 | false |
# BERT large model (cased)
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is cased: it makes a difference
between english and English.
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
This model has the following configuration:
- 24-layer
- 1024 hidden dimension
- 16 attention heads
- 336M parameters.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-large-cased')
>>> unmasker("Hello I'm a [MASK] model.")
[
{
"sequence":"[CLS] Hello I'm a male model. [SEP]",
"score":0.22748498618602753,
"token":2581,
"token_str":"male"
},
{
"sequence":"[CLS] Hello I'm a fashion model. [SEP]",
"score":0.09146175533533096,
"token":4633,
"token_str":"fashion"
},
{
"sequence":"[CLS] Hello I'm a new model. [SEP]",
"score":0.05823173746466637,
"token":1207,
"token_str":"new"
},
{
"sequence":"[CLS] Hello I'm a super model. [SEP]",
"score":0.04488750174641609,
"token":7688,
"token_str":"super"
},
{
"sequence":"[CLS] Hello I'm a famous model. [SEP]",
"score":0.03271442651748657,
"token":2505,
"token_str":"famous"
}
]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-large-cased')
model = BertModel.from_pretrained("bert-large-cased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('bert-large-cased')
model = TFBertModel.from_pretrained("bert-large-cased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-large-cased')
>>> unmasker("The man worked as a [MASK].")
[
{
"sequence":"[CLS] The man worked as a doctor. [SEP]",
"score":0.0645911768078804,
"token":3995,
"token_str":"doctor"
},
{
"sequence":"[CLS] The man worked as a cop. [SEP]",
"score":0.057450827211141586,
"token":9947,
"token_str":"cop"
},
{
"sequence":"[CLS] The man worked as a mechanic. [SEP]",
"score":0.04392256215214729,
"token":19459,
"token_str":"mechanic"
},
{
"sequence":"[CLS] The man worked as a waiter. [SEP]",
"score":0.03755280375480652,
"token":17989,
"token_str":"waiter"
},
{
"sequence":"[CLS] The man worked as a teacher. [SEP]",
"score":0.03458863124251366,
"token":3218,
"token_str":"teacher"
}
]
>>> unmasker("The woman worked as a [MASK].")
[
{
"sequence":"[CLS] The woman worked as a nurse. [SEP]",
"score":0.2572779953479767,
"token":7439,
"token_str":"nurse"
},
{
"sequence":"[CLS] The woman worked as a waitress. [SEP]",
"score":0.16706500947475433,
"token":15098,
"token_str":"waitress"
},
{
"sequence":"[CLS] The woman worked as a teacher. [SEP]",
"score":0.04587847739458084,
"token":3218,
"token_str":"teacher"
},
{
"sequence":"[CLS] The woman worked as a secretary. [SEP]",
"score":0.03577028587460518,
"token":4848,
"token_str":"secretary"
},
{
"sequence":"[CLS] The woman worked as a maid. [SEP]",
"score":0.03298963978886604,
"token":13487,
"token_str":"maid"
}
]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are tokenized using WordPiece and a vocabulary size of 30,000 (this is the cased model, so no lowercasing is applied). The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a short illustrative sketch follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
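The replacement rule above can be illustrated with a short, hedged sketch (this is not the original training code; `mask_id` and `vocab_size` stand in for tokenizer-specific values):
```python
import random

def mask_tokens(token_ids, mask_id, vocab_size, mlm_prob=0.15):
    """Toy sketch of the MLM corruption described above (not the original training code)."""
    corrupted = list(token_ids)
    labels = [-100] * len(token_ids)              # -100 = position ignored by the loss
    for i, tok in enumerate(token_ids):
        if random.random() < mlm_prob:            # 15% of tokens are selected for prediction
            labels[i] = tok                       # the model must recover the original token
            r = random.random()
            if r < 0.8:                           # 80%: replace with [MASK]
                corrupted[i] = mask_id
            elif r < 0.9:                         # 10%: replace with a random token
                corrupted[i] = random.randrange(vocab_size)
            # remaining 10%: leave the token unchanged
    return corrupted, labels
```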
### Pretraining
The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
## Evaluation results
When fine-tuned on downstream tasks, this model achieves the following results:
Model | SQUAD 1.1 F1/EM | Multi NLI Accuracy
---------------------------------------- | :-------------: | :----------------:
BERT-Large, Cased (Original) | 91.5/84.8 | 86.09
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
author = {Jacob Devlin and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
Understanding},
journal = {CoRR},
volume = {abs/1810.04805},
year = {2018},
url = {http://arxiv.org/abs/1810.04805},
archivePrefix = {arXiv},
eprint = {1810.04805},
timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
dd87832eb9442c7db1900807ff653c1a
|
weirdguitarist/wav2vec2-base-stac-local
|
weirdguitarist
|
wav2vec2
| 10 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 3,380 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-stac-local
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9746
- Wer: 0.7828
- Cer: 0.3202
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 2.0603 | 1.0 | 2369 | 2.1282 | 0.9517 | 0.5485 |
| 1.6155 | 2.0 | 4738 | 1.6196 | 0.9060 | 0.4565 |
| 1.3462 | 3.0 | 7107 | 1.4331 | 0.8379 | 0.3983 |
| 1.1819 | 4.0 | 9476 | 1.3872 | 0.8233 | 0.3717 |
| 1.0189 | 5.0 | 11845 | 1.4066 | 0.8328 | 0.3660 |
| 0.9026 | 6.0 | 14214 | 1.3502 | 0.8198 | 0.3508 |
| 0.777 | 7.0 | 16583 | 1.3016 | 0.7922 | 0.3433 |
| 0.7109 | 8.0 | 18952 | 1.2662 | 0.8302 | 0.3510 |
| 0.6766 | 9.0 | 21321 | 1.4321 | 0.8103 | 0.3368 |
| 0.6078 | 10.0 | 23690 | 1.3592 | 0.7871 | 0.3360 |
| 0.5958 | 11.0 | 26059 | 1.4389 | 0.7819 | 0.3397 |
| 0.5094 | 12.0 | 28428 | 1.3391 | 0.8017 | 0.3239 |
| 0.4567 | 13.0 | 30797 | 1.4718 | 0.8026 | 0.3347 |
| 0.4448 | 14.0 | 33166 | 1.7450 | 0.8043 | 0.3424 |
| 0.3976 | 15.0 | 35535 | 1.4581 | 0.7888 | 0.3283 |
| 0.3449 | 16.0 | 37904 | 1.5688 | 0.8078 | 0.3397 |
| 0.3046 | 17.0 | 40273 | 1.8630 | 0.8060 | 0.3448 |
| 0.2983 | 18.0 | 42642 | 1.8400 | 0.8190 | 0.3425 |
| 0.2728 | 19.0 | 45011 | 1.6726 | 0.8034 | 0.3280 |
| 0.2579 | 20.0 | 47380 | 1.6661 | 0.8138 | 0.3249 |
| 0.2169 | 21.0 | 49749 | 1.7389 | 0.8138 | 0.3277 |
| 0.2498 | 22.0 | 52118 | 1.7205 | 0.7948 | 0.3207 |
| 0.1831 | 23.0 | 54487 | 1.8641 | 0.8103 | 0.3229 |
| 0.1927 | 24.0 | 56856 | 1.8724 | 0.7784 | 0.3251 |
| 0.1649 | 25.0 | 59225 | 1.9187 | 0.7974 | 0.3277 |
| 0.1594 | 26.0 | 61594 | 1.9022 | 0.7828 | 0.3220 |
| 0.1338 | 27.0 | 63963 | 1.9303 | 0.7862 | 0.3212 |
| 0.1441 | 28.0 | 66332 | 1.9528 | 0.7845 | 0.3207 |
| 0.129 | 29.0 | 68701 | 1.9676 | 0.7819 | 0.3212 |
| 0.1169 | 30.0 | 71070 | 1.9746 | 0.7828 | 0.3202 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.8.1+cu102
- Datasets 1.18.3
- Tokenizers 0.12.1
|
7b8c026b263e95c62138fb1fe6ebf8ad
|
0x7f/ddpm-butterflies-128
|
0x7f
| null | 13 | 2 |
diffusers
| 0 | null | false | false | false |
apache-2.0
|
['en']
|
['huggan/smithsonian_butterflies_subset']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,226 | false |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# Suggested snippet (not from the original card): unconditional sampling with DDPMPipeline
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("0x7f/ddpm-butterflies-128")
image = pipeline().images[0]  # PIL image of a generated butterfly
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/0x7f/ddpm-butterflies-128/tensorboard?#scalars)
|
b946c15d2732f516fe5d8289a930630e
|
obokkkk/wav2vec2-base-960h-finetuned_common_voice3
|
obokkkk
|
wav2vec2
| 12 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,100 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-960h-finetuned_common_voice3
This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
5fc52a59bad8ae661671ddc64197d090
|
sd-concepts-library/ivan-grohar
|
sd-concepts-library
| null | 9 | 0 | null | 0 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,038 | false |
### ivan grohar on Stable Diffusion
This is the `<ivan-grohar>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:




|
8464d8225b3da0f4eaf1ab709cd48c05
|
muhtasham/small-mlm-glue-qnli-target-glue-mrpc
|
muhtasham
|
bert
| 10 | 4 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,722 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# small-mlm-glue-qnli-target-glue-mrpc
This model is a fine-tuned version of [muhtasham/small-mlm-glue-qnli](https://huggingface.co/muhtasham/small-mlm-glue-qnli) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9217
- Accuracy: 0.7770
- F1: 0.8455
## Model description
More information needed
## Intended uses & limitations
More information needed
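The card leaves usage undocumented; below is a minimal, hedged sketch for scoring an MRPC-style sentence pair (the example sentences are made up, and label names depend on the model's exported config):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_id = "muhtasham/small-mlm-glue-qnli-target-glue-mrpc"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("The company posted record profits.",
                   "Record profits were reported by the company.",
                   return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # per-label probabilities; label order follows the model's config
```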
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.3905 | 4.35 | 500 | 0.7540 | 0.7892 | 0.8608 |
| 0.0675 | 8.7 | 1000 | 1.4012 | 0.7892 | 0.8608 |
| 0.0274 | 13.04 | 1500 | 1.5409 | 0.7794 | 0.8454 |
| 0.0189 | 17.39 | 2000 | 1.5464 | 0.7917 | 0.8609 |
| 0.0119 | 21.74 | 2500 | 1.7553 | 0.7794 | 0.8505 |
| 0.0179 | 26.09 | 3000 | 1.7660 | 0.7745 | 0.8492 |
| 0.0128 | 30.43 | 3500 | 1.9217 | 0.7770 | 0.8455 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
|
143a1d805103e40dadc5cf8a554510da
|
jakka/segformer-b0-finetuned-warehouse-part-1-V2
|
jakka
|
segformer
| 7 | 8 |
transformers
| 0 |
image-segmentation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['vision', 'image-segmentation', 'generated_from_trainer']
| true | true | true | 20,573 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-finetuned-warehouse-part-1-V2
This model is a fine-tuned version of [nvidia/mit-b5](https://huggingface.co/nvidia/mit-b5) on the jakka/warehouse_part1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2737
- Mean Iou: 0.7224
- Mean Accuracy: 0.8119
- Overall Accuracy: 0.9668
- Per Category Iou: [0.0, 0.9392313580983768, 0.9322932027111482, 0.9772249946988713, 0.8749950826812657, 0.9591121585348171, 0.9803780030124933, 0.8554852055380204, 0.9661475962866876, 0.5609089467958914, 0.0, 0.8095003013989066, 0.7113799121381718, 0.8927260044840537, 0.6133653057361015, 0.8420100377966416, 0.33841086205511367, 0.553361761785151, 0.8141592920353983, 0.8270316181708587]
- Per Category Accuracy: [nan, 0.9727824725573769, 0.9676994291705018, 0.9882968957337019, 0.9679484011220059, 0.9772700079950366, 0.9882492205666621, 0.9252107983136135, 0.9825945071781523, 0.6062795795494159, 0.0, 0.894776445179671, 0.7968855332344613, 0.9522349792248335, 0.6544510171692397, 0.9276157710790738, 0.42203029817249116, 0.5863404454740788, 0.8963814834175524, 0.9193914381006046]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:-------------:|:----------------:|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| 0.7008 | 1.0 | 787 | 0.2473 | 0.5595 | 0.6448 | 0.9325 | [0.0, 0.8572456184869756, 0.8403481284744914, 0.9524827531570127, 0.7992052152702355, 0.9196710216877864, 0.9471503664300267, 0.6193304552041781, 0.9133086982125345, 0.17558267725303728, 0.0, 0.6344520667741999, 0.3360920970752956, 0.7642426437536942, 0.510575871022846, 0.6056988833269157, 0.021209386281588447, 0.27355691497341356, 0.6138181818181818, 0.40645271873846317] | [nan, 0.9155298033269351, 0.9463379226245591, 0.978836265135544, 0.9240214201112357, 0.9448111967681583, 0.9643622308798924, 0.6930912552699579, 0.9497575640760723, 0.18632531152693993, 0.0, 0.7500919033177098, 0.36409599568558715, 0.8900647437729461, 0.5728964730263244, 0.6549871668851026, 0.02166159025328631, 0.2902301645548354, 0.7353197421153511, 0.4694729147312794] |
| 0.1321 | 2.0 | 1574 | 0.2331 | 0.6221 | 0.7115 | 0.9457 | [0.0, 0.8970560279823083, 0.8791120244598839, 0.9603620467193393, 0.8160602187615088, 0.934767875213888, 0.9616837752836253, 0.7419391385825133, 0.9351874201394574, 0.26717521084051926, 0.0, 0.6985475965645938, 0.43481867741170893, 0.8134984418163408, 0.5459611126448698, 0.7401712453141447, 0.13175924760380514, 0.355121624272543, 0.7060811650388926, 0.6229231428877693] | [nan, 0.951233770160613, 0.9409053657605947, 0.9843213861494523, 0.9219686102230917, 0.9665968250506056, 0.9829729958024298, 0.8238168094655243, 0.9620596605954946, 0.29986351309033543, 0.0, 0.8030913978494624, 0.49467439665633006, 0.909599171191769, 0.5931253087796156, 0.8208142201834863, 0.14682189804424495, 0.3841705499014086, 0.8251147122030551, 0.70800907664895] |
| 0.1085 | 3.0 | 2361 | 0.2457 | 0.6542 | 0.7530 | 0.9521 | [0.0, 0.9079405116712079, 0.8959028018194484, 0.9654330936322201, 0.8358564096747072, 0.942169826126924, 0.967131589172387, 0.7785683188874377, 0.942506044201895, 0.3544242514524058, 0.0, 0.7247706422018348, 0.5044915351836923, 0.8273089178892802, 0.5630444261421442, 0.7399785788281565, 0.21738423517169614, 0.46725284186024263, 0.7218755768875762, 0.7280122150607375] | [nan, 0.9545620491089126, 0.9497321958018098, 0.9837544714508515, 0.9402501375924134, 0.9686463320401577, 0.9809467909731419, 0.8694886440908473, 0.9735407105395524, 0.3936199755387097, 0.0, 0.8558151824280856, 0.5906026695429419, 0.9157369138435157, 0.6097401660523865, 0.8630406290956749, 0.2679143956396281, 0.5182902566913956, 0.8517163268862171, 0.8205229733639949] |
| 0.8409 | 4.0 | 3148 | 0.2533 | 0.6749 | 0.7760 | 0.9559 | [0.0, 0.912375840411698, 0.904072054206276, 0.9676067299522242, 0.900289256120933, 0.9448264254043457, 0.9706472863960092, 0.7942658684379895, 0.9498265874428659, 0.5556284571729604, 0.0, 0.743214707471828, 0.529188361408882, 0.7269154778675782, 0.5697874335729916, 0.7702618169892564, 0.2288491765188273, 0.5089612784265519, 0.757448678510892, 0.7646070737475812] | [nan, 0.9601569621727435, 0.9525397945710891, 0.9830820784511696, 0.9462795897530819, 0.9732812778343284, 0.9810361205428978, 0.8895280837753298, 0.9743959070958451, 0.6854951638729194, 0.0, 0.8531327543424317, 0.5823783200755023, 0.9177828280607646, 0.6184135395216047, 0.8657506006989952, 0.26841535748637385, 0.5491586570344761, 0.8759801359121798, 0.8665306184609293] |
| 0.0655 | 5.0 | 3935 | 0.2164 | 0.6815 | 0.7909 | 0.9577 | [0.0, 0.9195724102825147, 0.8817887152896982, 0.9692666162636345, 0.90446655617651, 0.9477266300807918, 0.972197851990263, 0.8006212298550464, 0.9526181996158507, 0.48675750740382695, 0.0, 0.7544064333927534, 0.589975775752682, 0.8568833610473964, 0.5739430151581254, 0.7804109001873066, 0.2738491187715644, 0.46180522107696753, 0.7493122891746226, 0.754828899421902] | [nan, 0.9629768162749704, 0.9511904548979574, 0.9855793956741679, 0.9532853326979632, 0.9705567416728694, 0.9856702233410021, 0.9070277437780497, 0.9761803883026475, 0.7497090051817757, 0.0, 0.8653903593419723, 0.689564513954429, 0.9349779882164135, 0.6119830537374903, 0.9072670926168632, 0.3530779095864059, 0.5086786980626564, 0.8741215078120462, 0.8391483788434887] |
| 0.0568 | 6.0 | 4722 | 0.2803 | 0.6876 | 0.7839 | 0.9591 | [0.0, 0.9166100071412383, 0.913602419181271, 0.9710201737288663, 0.8563050555469198, 0.9497657746314072, 0.9730697054916811, 0.8143688646719719, 0.9549812903957364, 0.460486150973965, 0.0, 0.7634781269254467, 0.6136748147716002, 0.8542174198928293, 0.5922937831600485, 0.8066394260877113, 0.28399126278134795, 0.5207639813581891, 0.7629174644376197, 0.7438457521999924] | [nan, 0.9601927982852421, 0.9660710264704008, 0.982455068550298, 0.957830657460364, 0.9688535013815731, 0.9819961506837456, 0.893842649258806, 0.9749506995826178, 0.5071640856263331, 0.0, 0.8540977391783844, 0.7091141971147364, 0.9317785850902456, 0.653052819349169, 0.8880378986456968, 0.35953029817249116, 0.553305686470427, 0.862098507289307, 0.8895268263710157] |
| 0.8994 | 7.0 | 5509 | 0.2743 | 0.6868 | 0.7764 | 0.9606 | [0.0, 0.92180556388016, 0.9171201062365498, 0.9721111956032598, 0.8587950800137758, 0.9513526631552707, 0.9756092701000854, 0.819792597945916, 0.9576544961199075, 0.4512109977539036, 0.0, 0.7723053199691596, 0.61351217088922, 0.8696959538394335, 0.5947007494875557, 0.8068989910272162, 0.2400942828140323, 0.49048112386556714, 0.772383338067815, 0.7496112574696395] | [nan, 0.9644998510561574, 0.9609472275076806, 0.9854828942497743, 0.9565172529563908, 0.9753485051500238, 0.9840922427646661, 0.8947674418604651, 0.974328764760461, 0.49258184783186704, 0.0, 0.8630410807830162, 0.6660374814615073, 0.9410600831006661, 0.6446391486645419, 0.8876351572739187, 0.2796369028534787, 0.5232773027508334, 0.8685891851077423, 0.8883389427836073] |
| 0.0757 | 8.0 | 6296 | 0.2245 | 0.7038 | 0.8009 | 0.9625 | [0.0, 0.9246349181813107, 0.9204571437331909, 0.9735757462990084, 0.8677796689121399, 0.9529629595462734, 0.9762280475446855, 0.8249549577060494, 0.9591099123245741, 0.6276133447390932, 0.0, 0.7755030368136181, 0.6490189248809939, 0.8729206918730364, 0.598100700980074, 0.8000277974172574, 0.27374031814774713, 0.5049971433066432, 0.7770387696167466, 0.7981819415236415] | [nan, 0.964623037692871, 0.9637122903759715, 0.9863849456780516, 0.9537638293913148, 0.974798022498043, 0.985726579790157, 0.9184958520331837, 0.980103295010109, 0.7586190597174544, 0.0, 0.8624896608767576, 0.7536739921801268, 0.9379994558884956, 0.6446181625809385, 0.9037175076452599, 0.32931227957678744, 0.5392729877180727, 0.863477957832375, 0.8959383518876689] |
| 0.0638 | 9.0 | 7083 | 0.2660 | 0.7091 | 0.8064 | 0.9632 | [0.0, 0.9247942993361187, 0.9227547653133065, 0.9737952169757659, 0.8675395458562903, 0.954005651357167, 0.9771936329793919, 0.832432130071599, 0.960664758331238, 0.6439555818513429, 0.0, 0.7800093558353167, 0.6503190735050816, 0.8771838558892437, 0.6000063410406786, 0.8135397086825815, 0.29345229389108285, 0.5278915956856804, 0.7979207701237885, 0.7849771726504039] | [nan, 0.9696983271254734, 0.9626331855239437, 0.9865491477141318, 0.9580933383611586, 0.9736782563602464, 0.9877136372491695, 0.9107507139942881, 0.9774734570720269, 0.778129006717992, 0.0, 0.8715651135005974, 0.7419441822839423, 0.9522322311869326, 0.6453719127503574, 0.9070076998689384, 0.36183472266752165, 0.5638987382066087, 0.8882354649474357, 0.8850494190030915] |
| 0.1028 | 10.0 | 7870 | 0.2753 | 0.7045 | 0.7986 | 0.9632 | [0.0, 0.9310677916035094, 0.9231154731835156, 0.9742966471140867, 0.8659672807905657, 0.9548025101399095, 0.9761885400996432, 0.8359586760218701, 0.9606324687638941, 0.536304571449891, 0.0, 0.7861687315154533, 0.6648749707875672, 0.8782393648813203, 0.6028230645967004, 0.8034017821150734, 0.2798240884275797, 0.5292981433685788, 0.7976529535864979, 0.7897882016975595] | [nan, 0.9671696414372969, 0.9640722977320454, 0.9864307028133905, 0.9566418983913256, 0.9766712626661613, 0.984078186494131, 0.917516659866721, 0.9804665003157427, 0.5945275248601157, 0.0, 0.8886304108078301, 0.7671565322906836, 0.945889759711566, 0.6500072139662386, 0.9114992900830057, 0.33277893555626803, 0.5621391244374099, 0.8784050647615729, 0.9097665351872439] |
| 0.098 | 11.0 | 8657 | 0.2029 | 0.7052 | 0.8014 | 0.9640 | [0.0, 0.9288737885707921, 0.9265083379180753, 0.9747097980123621, 0.8738478537660755, 0.9558379241305062, 0.9781696214462526, 0.8391837240652649, 0.9626716931455067, 0.507780252899168, 0.0, 0.7878061172645057, 0.6769843155893536, 0.8815102118136605, 0.6056046400027283, 0.8269347543218291, 0.3132485690006253, 0.5154277002618235, 0.7927511930865472, 0.7569567975718071] | [nan, 0.9711631282238503, 0.964815472153087, 0.9853689377873769, 0.9652020663968313, 0.9754185940822899, 0.9867780413729902, 0.9206854345165238, 0.9811350296034029, 0.5495104787677182, 0.0, 0.8906350519253745, 0.7681677227989753, 0.9430888220810342, 0.65217140383783, 0.9110078090869376, 0.3914916639948702, 0.5500605696196935, 0.8924609397688331, 0.9267167202229566] |
| 0.0734 | 12.0 | 9444 | 0.2171 | 0.7126 | 0.8001 | 0.9648 | [0.0, 0.9309643707918894, 0.9277494647914695, 0.9750904306170505, 0.8777832954332417, 0.9566409475731096, 0.9780693213049435, 0.8436550838167809, 0.9635515941347027, 0.527304314900299, 0.0, 0.7909202018197202, 0.6909584834347133, 0.8836639196984207, 0.6084447805077513, 0.8287813112544289, 0.31069205419260343, 0.5403587067765045, 0.7955642033577429, 0.8211277996631356] | [nan, 0.9680901815771025, 0.9655377799057193, 0.9852963747008175, 0.9662340833391586, 0.9756774116913669, 0.9890014280908129, 0.9132224942200462, 0.9813789993824062, 0.5595195188097869, 0.0, 0.8697959746346843, 0.7887285964675745, 0.9477302580957196, 0.6557731404362482, 0.9149260048055919, 0.374058191728118, 0.5695666398450833, 0.8786809548701865, 0.8983598068927706] |
| 0.0839 | 13.0 | 10231 | 0.2606 | 0.7139 | 0.8056 | 0.9651 | [0.0, 0.932934590872574, 0.928599894716927, 0.9759876131918817, 0.8695983139625728, 0.9571779321732448, 0.979228463067019, 0.8446447574729073, 0.9630766038435438, 0.47072541703248466, 0.0, 0.7968195631480623, 0.6967972782731112, 0.8867456411969523, 0.6076684496270689, 0.8274634197517912, 0.3560522933191209, 0.5582305522639651, 0.8036840005319856, 0.8219356251968073] | [nan, 0.970161956830923, 0.9673467595439784, 0.9869340313021197, 0.9654732145230638, 0.9756083312329464, 0.9874815117348184, 0.9121141030871753, 0.9832381474966617, 0.50686275089071, 0.0, 0.8991361088135281, 0.8007954698665228, 0.9482970409127882, 0.6487891466970965, 0.9152673110528615, 0.4551538954793203, 0.5915043371384613, 0.8774612301794738, 0.914289630385453] |
| 0.0797 | 14.0 | 11018 | 0.2504 | 0.7153 | 0.8044 | 0.9655 | [0.0, 0.9353593794015038, 0.9288667661318105, 0.9762064564453578, 0.8718886319160292, 0.9576685946960725, 0.9788546612617008, 0.8472608735210976, 0.9642969355331718, 0.5361721760842425, 0.0, 0.8004189668257286, 0.696640611014977, 0.8853084044449696, 0.6099045788314064, 0.8344863725117123, 0.3254310344827586, 0.5323734971095841, 0.8050435956126539, 0.8204823185898129] | [nan, 0.9668112803123117, 0.9681903691382433, 0.9879581433175818, 0.9650443397090228, 0.9762644155033261, 0.9866578405548627, 0.9181626546987625, 0.9814820281384267, 0.5836381147080894, 0.0, 0.8844717856814631, 0.7870432789537549, 0.9470982093785038, 0.6547561898016377, 0.9131239078200087, 0.39335524206476435, 0.5610603662472479, 0.8835162920369403, 0.9243561823249014] |
| 0.0606 | 15.0 | 11805 | 0.2363 | 0.7209 | 0.8122 | 0.9661 | [0.0, 0.9354450021238048, 0.9300759788666999, 0.9766100423179009, 0.8739351769905989, 0.9580569741305669, 0.9795622398211299, 0.8496875639431477, 0.9646763306438436, 0.6043151650835981, 0.0, 0.8018012422360249, 0.7004677380666826, 0.889289794511031, 0.610767874342205, 0.8325289843013258, 0.33953698039089414, 0.5566040090865972, 0.7993623498974272, 0.8161583186067531] | [nan, 0.966786642984969, 0.965287953144928, 0.9879603875367537, 0.9664012618135025, 0.9766460508200225, 0.9889968302453108, 0.9177070583435333, 0.9825186826442273, 0.650711681743251, 0.0, 0.8897849462365591, 0.7874477551570715, 0.9497445698771078, 0.655411130494091, 0.9220183486238532, 0.42261141391471624, 0.5914689680174724, 0.8883080676075972, 0.9213864733563804] |
| 0.0532 | 16.0 | 12592 | 0.2531 | 0.7201 | 0.8074 | 0.9662 | [0.0, 0.9383203952011292, 0.9288414046194093, 0.9769141389017822, 0.8756205335515858, 0.9582358666094781, 0.979632260873732, 0.8522102747909199, 0.9655114623669192, 0.6115704722763623, 0.0, 0.8053745416448402, 0.7045095417527653, 0.8906375387790608, 0.6007837805741991, 0.8399368744136342, 0.33049747893639037, 0.5151462046865611, 0.8091001625973271, 0.8195206947575124] | [nan, 0.9678438083036752, 0.9684728717259394, 0.9879746009248427, 0.9684402878462824, 0.9766889829923047, 0.9883229174617107, 0.9215762273901809, 0.9820408723178519, 0.6655775287006565, 0.0, 0.8831104677878872, 0.7814480248078738, 0.9439503319629784, 0.6414396453351872, 0.9228033529925732, 0.40323420968259055, 0.5458428019417647, 0.8887436835685659, 0.9025173994487001] |
| 0.0862 | 17.0 | 13379 | 0.2458 | 0.7201 | 0.8087 | 0.9665 | [0.0, 0.9368370402512427, 0.9309393106006786, 0.9769932787053442, 0.8747985979138234, 0.95879411739136, 0.9800136137207117, 0.8526248910947767, 0.9651962916423883, 0.5741264468224503, 0.0, 0.8066815029500052, 0.7084107667406031, 0.8910943581653369, 0.6137487567405265, 0.843379759286757, 0.32885159559677446, 0.5243792475829478, 0.8126121336965911, 0.8231331714477782] | [nan, 0.9768073159423666, 0.9678409097683983, 0.9877789798203552, 0.9673405331004518, 0.977145821644341, 0.9876622727465598, 0.9216680266557867, 0.9832398839363699, 0.6213226822336585, 0.0, 0.8952934013417885, 0.7966158824322502, 0.946850198957944, 0.6577528276561605, 0.9188715050240279, 0.4028735171529336, 0.5553570954877843, 0.887857931114596, 0.9137413764220337] |
| 0.057 | 18.0 | 14166 | 0.2807 | 0.7169 | 0.8024 | 0.9665 | [0.0, 0.9391255338059006, 0.9316246290236013, 0.9771178536356643, 0.8736374236266327, 0.9587095139235466, 0.9802820999385629, 0.8534991833144867, 0.965491782119557, 0.5173244886677723, 0.0, 0.8079528780010615, 0.7036495460915129, 0.8919428858888571, 0.6128251272343798, 0.8423749359527112, 0.3030539267193167, 0.5387041043962495, 0.8154057368308808, 0.8249477907232359] | [nan, 0.9703254590941974, 0.967385397276143, 0.9883638482723315, 0.9660909281555922, 0.9783173801174915, 0.987878896953218, 0.9238406092751258, 0.9828454227159885, 0.5529433313441302, 0.0, 0.8918872346291701, 0.7785492786841041, 0.9525571866687186, 0.6544903660759959, 0.9202435561380515, 0.3583279897403014, 0.5679750294005819, 0.8882935470755648, 0.9144114645995461] |
| 0.27 | 19.0 | 14953 | 0.2799 | 0.7210 | 0.8089 | 0.9668 | [0.0, 0.9392661644355319, 0.932096490765189, 0.9772444850416163, 0.8748583460799624, 0.959030800837604, 0.9803660417493171, 0.8549763601588193, 0.9661359625948338, 0.5489573339508828, 0.0, 0.8082856800928263, 0.707609022556391, 0.8930480213758131, 0.6125057936760998, 0.8439663143164156, 0.3240623821315535, 0.5560068921314832, 0.813374539715939, 0.8289533147998521] | [nan, 0.9703971313191945, 0.9680462515437895, 0.9881404237858805, 0.9683475421909045, 0.9777759016962746, 0.988822374850258, 0.9210152318781449, 0.9816258632275899, 0.588252672130082, 0.0, 0.8922778237294366, 0.7930430093029527, 0.9508458460659089, 0.6517263239814098, 0.9221548711227611, 0.3959802821417121, 0.5906377936742327, 0.8980803856653308, 0.9218433516592297] |
| 0.0369 | 20.0 | 15740 | 0.2737 | 0.7224 | 0.8119 | 0.9668 | [0.0, 0.9392313580983768, 0.9322932027111482, 0.9772249946988713, 0.8749950826812657, 0.9591121585348171, 0.9803780030124933, 0.8554852055380204, 0.9661475962866876, 0.5609089467958914, 0.0, 0.8095003013989066, 0.7113799121381718, 0.8927260044840537, 0.6133653057361015, 0.8420100377966416, 0.33841086205511367, 0.553361761785151, 0.8141592920353983, 0.8270316181708587] | [nan, 0.9727824725573769, 0.9676994291705018, 0.9882968957337019, 0.9679484011220059, 0.9772700079950366, 0.9882492205666621, 0.9252107983136135, 0.9825945071781523, 0.6062795795494159, 0.0, 0.894776445179671, 0.7968855332344613, 0.9522349792248335, 0.6544510171692397, 0.9276157710790738, 0.42203029817249116, 0.5863404454740788, 0.8963814834175524, 0.9193914381006046] |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
1ac4e2463740d2cf563a0a94cf24c123
|
sayakpaul/distilbert-base-uncased-finetuned-emotion-lr-0.0006-wd-0003
|
sayakpaul
|
distilbert
| 10 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['emotion']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,398 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion-lr-0.0006-wd-0003
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4198
- Accuracy: 0.8875
- F1: 0.8889
## Model description
More information needed
## Intended uses & limitations
More information needed
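Usage is not documented in the card; a minimal, hedged sketch with the text-classification pipeline (the input sentence is made up, and label names depend on the exported `id2label` mapping):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="sayakpaul/distilbert-base-uncased-finetuned-emotion-lr-0.0006-wd-0003",
)
print(classifier("I am thrilled with how this turned out!"))
```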
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0006
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.1911 | 1.0 | 125 | 0.6098 | 0.808 | 0.7921 |
| 0.4819 | 2.0 | 250 | 0.4198 | 0.8875 | 0.8889 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.10.0
- Datasets 2.6.1
- Tokenizers 0.13.1
|
17eff4fc62c21d8f6407b5aa147ede01
|
kowsiknd/bert-base-uncased-sst2
|
kowsiknd
|
bert
| 12 | 11 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['sst2']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,322 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-sst2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the sst2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9312
- Accuracy: 0.876
## Model description
More information needed
## Intended uses & limitations
More information needed
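Usage is not documented in the card; a minimal, hedged sketch with the text-classification pipeline (the review sentence is made up; `top_k=None` is assumed to return scores for both SST-2 classes, and label names depend on the exported config):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="kowsiknd/bert-base-uncased-sst2", top_k=None)
print(classifier("A thoroughly enjoyable film with a clever script."))
```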
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 125 | 1.0209 | 0.836 |
| No log | 2.0 | 250 | 1.0430 | 0.85 |
| No log | 3.0 | 375 | 0.9312 | 0.876 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
d7f7fa5bf779e11c33b9af2d5033938d
|
xrverse/xlm-roberta-base-finetuned-panx-de
|
xrverse
|
xlm-roberta
| 12 | 4 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null |
['xtreme']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,314 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1356
- F1: 0.8600
## Model description
More information needed
## Intended uses & limitations
More information needed
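Usage is not documented in the card; a minimal, hedged sketch for German NER with the token-classification pipeline (the example sentence is made up; entity types follow the PAN-X label set exported with the model):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="xrverse/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```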
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2525 | 1.0 | 525 | 0.1673 | 0.8294 |
| 0.1298 | 2.0 | 1050 | 0.1381 | 0.8510 |
| 0.0839 | 3.0 | 1575 | 0.1356 | 0.8600 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
1332268efc9d1b775c0d60f863cc912a
|
Milos/slovak-gpt-j-405M
|
Milos
|
gptj
| 6 | 6 |
transformers
| 0 |
text-generation
| true | false | false |
gpl-3.0
|
['sk']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Slovak GPT-J', 'pytorch', 'causal-lm']
| false | true | true | 8,206 | false |
# Slovak GPT-J-405M
Slovak GPT-J-405M is the second model released in the Slovak GPT-J series, after its smaller variant [Slovak GPT-J-162M](https://huggingface.co/Milos/slovak-gpt-j-162M). Since then, a larger [Slovak GPT-J-1.4B](https://huggingface.co/Milos/slovak-gpt-j-1.4B) has been released.
## Model Description
The model is based on [GPT-J](https://github.com/kingoflolz/mesh-transformer-jax/) and has over 405M trainable parameters.
<figure>
| Hyperparameter | Value |
|----------------------|----------------------------------------------------------------------------------------------------------------------------------------|
| \\(n_{parameters}\\) | 405,677,136 |
| \\(n_{layers}\\) | 24 |
| \\(d_{model}\\) | 1024 |
| \\(d_{ff}\\) | 16384 |
| \\(n_{heads}\\) | 16 |
| \\(d_{head}\\) | 256 |
| \\(n_{ctx}\\) | 2048 |
| \\(n_{vocab}\\) | 50256 (same tokenizer as GPT-2/3†) |
| Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) |
| RoPE Dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) |
<p><strong>†</strong> ByteLevelBPETokenizer was trained on the same Slovak corpus.</p></figure>
## Training data
Slovak GPT-J models were trained on a privately collected dataset consisting of predominantly Slovak text spanning different categories, e.g. web, news articles or even biblical texts - in total, over 40GB of text data was used to train this model.
The dataset was preprocessed and cleaned in a specific way that involves a few minor caveats, so to achieve the expected performance, refer to the [How to use] section. Please keep in mind that, despite the effort to remove inappropriate content from the corpus, the model might still generate sensitive content or leak sensitive information.
## Training procedure
This model was trained on a bit more than 36.5 billion tokens over 69,001 steps on a TPU v3-8 pod. The cross-entropy validation loss at the last step was `2.821`.
## Intended Use
Same as the original GPT-J, Slovak GPT-J learns an inner representation of the language that can be used to extract features useful for downstream tasks, however, the intended use is text generation from a prompt.
### How to use
This model along with the tokenizer can be easily loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Milos/slovak-gpt-j-405M")
model = AutoModelForCausalLM.from_pretrained("Milos/slovak-gpt-j-405M")
```
When generating a prompt keep in mind these three things, and you should be good to go:
1. Never leave trailing whitespaces. There's a difference between how tokenizer encodes "Mám rád slovenčinu" (no space after `slovenčinu`) and "Mám rád slovenčinu " (trailing space after `slovenčinu`), i.e `[12805, 2872, 46878]` != `[12805, 2872, 46878, 221]`.
2. Always use good ol' US English primary double quotation marks, i.e. `""` instead of `„“`.
3. In case of a new line always enter `\n\n` instead of a single `\n`
To illustrate an example of a basic text generation:
```
>>> prompt = "Tradičné jedlo na Orave sú"
>>> encoded_input = tokenizer(prompt, return_tensors='pt')
>>> output = model.generate(**encoded_input)
>>> tokenizer.decode(output[0])
'Tradičné jedlo na Orave sú bryndzové halušky\n\nNa Orave sa v minulosti varilo viac druhov'
```
### Capabilities, Limitations, and Biases
Despite its small size of 405M parameters, the capability of this particular model is decent: with relative ease it manages to generate interesting and grammatically correct content.
For example, you can try a few of the following prompts. (For the sake of simplicity, I have omitted all the boilerplate code and swapped `\n` for new lines.)
Try generating "How to" articles:
```
PROMPT
Ako napísať pôsobivú esej krok za krokom
OUTPUT
Ako napísať pôsobivú esej krok za krokom
V tomto článku sa dozviete, ako napísať esej, ktorá bude mať v sebe niečo, čo zaujme a bude sa vám páčiť.
V prvom rade si musíte uvedomiť, že esej je vlastne písaný text, ktorý má byť napísaný na počítači.'
```
However, relying on the model to produce factually correct information isn't recommended.
Or try to generate chat conversations:
```
PROMPT
Peter: Ako sa mas?
Lucia: Ale celkom dobre. Co si robil?
Peter:
OUTPUT
Peter: Ako sa mas?
Lucia: Ale celkom dobre. Co si robil?
Peter: No, bol som na chate.
Lucia: A co si tam robil?
Peter: No, bol som tam s kamošmi.
Lucia: A co si tam robil?
Peter: No, bol som tam s kamošmi.
```
Apparently either Peters are more likely to act suspiciously in this particular context or there's a problem with the model. Let's assume that the second explanation may hold some merit. In general, GPT models can (and often will) get into a repeating cycle of generating the same content. This is a common problem beyond the scope of this README; however, see [generate's documentation](https://huggingface.co/docs/transformers/master/en/main_classes/model#transformers.generation_utils.GenerationMixin.generate) on how to introduce a frequency/repetition penalty.
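Although the card does not show it, one hedged way to mitigate that loop is to pass repetition-related arguments to `generate` (the parameter values below are illustrative, not tuned):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Milos/slovak-gpt-j-405M")
model = AutoModelForCausalLM.from_pretrained("Milos/slovak-gpt-j-405M")

prompt = "Peter: Ako sa mas?\n\nLucia: Ale celkom dobre. Co si robil?\n\nPeter:"
encoded_input = tokenizer(prompt, return_tensors="pt")
output = model.generate(
    **encoded_input,
    max_length=64,
    no_repeat_ngram_size=3,   # forbid verbatim 3-gram repeats
    repetition_penalty=1.2,   # mildly penalise already-generated tokens
)
print(tokenizer.decode(output[0]))
```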
Since the dataset contains profanity, politically incorrect language, and (unintentionally) even a bit of text in Czech, the model can generate such content to some extent too. Here's an example of the model output when the prompt is in Czech:
```
>>> prompt = "Věta nesmí být sprostá a musí být zcela"
>>> encoded_input = tokenizer(prompt, return_tensors='pt')
>>> output = model.generate(**encoded_input, max_length=16)
>>> tokenizer.decode(output[0])
'Věta nesmí být sprostá a musí být zcela pravdivá.'
```
## Citation and Related Information
This was done as a moonlighting project during summer of 2021 to better understand transformers. I didn't have much free time to open source it properly, so it all sat on my hard drive until now :)
If you use this model or have any questions about it feel free to hit me up at [twitter](https://twitter.com/miloskondela) or check out my [github](https://github.com/kondela) profile.
### BibTeX entry
To cite this model:
```bibtex
@misc{slovak-gpt-j-405m,
author = {Kondela, Milos},
title = {{Slovak GPT-J-405M}},
howpublished = {\url{https://huggingface.co/Milos/slovak-gpt-j-405M}},
year = 2022,
month = February
}
```
To cite the codebase that trained this model:
```bibtex
@misc{mesh-transformer-jax,
author = {Wang, Ben},
title = {{Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX}},
howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
year = 2021,
month = May
}
```
## Acknowledgements
This project was generously supported by the [TPU Research Cloud (TRC) program](https://sites.research.google/trc/about/). Shoutout also goes to [Ben Wang](https://github.com/kingoflolz) and the great [EleutherAI community](https://www.eleuther.ai/).
|
f5a94a4d190a25c42644b04ed6ff3eff
|
waynedsouza/distilbert-base-uncased-gc-art1e
|
waynedsouza
|
distilbert
| 12 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,262 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-gc-art1e
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0928
- Accuracy: 0.982
- F1: 0.9763
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.0226 | 1.0 | 32 | 0.0928 | 0.982 | 0.9763 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
1cca43cd77099e1c94396683d5a5cdb9
|
wietsedv/xlm-roberta-base-ft-udpos28-tr
|
wietsedv
|
xlm-roberta
| 8 | 507 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
|
['tr']
|
['universal_dependencies']
| null | 2 | 1 | 0 | 1 | 0 | 0 | 0 |
['part-of-speech', 'token-classification']
| true | true | true | 567 | false |
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Turkish
This model is part of our paper called:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-tr")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-tr")
```
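Continuing from the snippet above, a hedged sketch of how to turn the logits into UPOS tags (the example sentence is made up):
```python
import torch

inputs = tokenizer("Ankara Türkiye'nin başkentidir.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_ids = logits.argmax(dim=-1)[0]
tags = [model.config.id2label[int(i)] for i in predicted_ids]
print(list(zip(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]), tags)))
```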
|
be5187052fd4b74d2abaa2ddb74b16b3
|
ericntay/stbl_clinical_bert_ft_rs10
|
ericntay
|
bert
| 12 | 5 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,880 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# stbl_clinical_bert_ft_rs10
This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0846
- F1: 0.9297
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2834 | 1.0 | 101 | 0.0930 | 0.8446 |
| 0.0669 | 2.0 | 202 | 0.0732 | 0.8938 |
| 0.033 | 3.0 | 303 | 0.0676 | 0.9119 |
| 0.0168 | 4.0 | 404 | 0.0703 | 0.9219 |
| 0.0084 | 5.0 | 505 | 0.0742 | 0.9245 |
| 0.006 | 6.0 | 606 | 0.0772 | 0.9252 |
| 0.0033 | 7.0 | 707 | 0.0844 | 0.9239 |
| 0.0023 | 8.0 | 808 | 0.0855 | 0.9272 |
| 0.0019 | 9.0 | 909 | 0.0843 | 0.9296 |
| 0.0013 | 10.0 | 1010 | 0.0878 | 0.9262 |
| 0.0012 | 11.0 | 1111 | 0.0857 | 0.9266 |
| 0.0008 | 12.0 | 1212 | 0.0846 | 0.9297 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
08c85408080129b3995a07efe23105fe
|
stevhliu/my_awesome_wnut_model
|
stevhliu
|
distilbert
| 22 | 363 |
transformers
| 0 |
token-classification
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,836 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# stevhliu/my_awesome_wnut_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1210
- Validation Loss: 0.2698
- Train Precision: 0.5099
- Train Recall: 0.3995
- Train F1: 0.4480
- Train Accuracy: 0.9444
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
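Usage is not documented in the card; a minimal, hedged TensorFlow sketch (the example sentence is made up, and entity labels depend on the exported `id2label` mapping):
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForTokenClassification

model_id = "stevhliu/my_awesome_wnut_model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForTokenClassification.from_pretrained(model_id)

inputs = tokenizer("The Golden State Warriors are an American basketball team.", return_tensors="tf")
logits = model(**inputs).logits
predicted_ids = tf.argmax(logits, axis=-1)[0]
print([model.config.id2label[int(i)] for i in predicted_ids])
```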
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 636, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
| 0.3233 | 0.3099 | 0.4155 | 0.2117 | 0.2805 | 0.9333 | 0 |
| 0.1600 | 0.2743 | 0.5111 | 0.3589 | 0.4216 | 0.9416 | 1 |
| 0.1210 | 0.2698 | 0.5099 | 0.3995 | 0.4480 | 0.9444 | 2 |
### Framework versions
- Transformers 4.22.2
- TensorFlow 2.8.2
- Datasets 2.5.1
- Tokenizers 0.12.1
|
678db3424ff28b36d1b2fc4ed6e2be1c
|
staka/fugumt-en-ja
|
staka
|
marian
| 9 | 6,340 |
transformers
| 11 |
translation
| true | false | false |
cc-by-sa-4.0
|
['en', 'ja']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 1,192 | false |
# FuguMT
This is a translation model using Marian-NMT.
For more details, please see [my repository](https://github.com/s-taka/fugumt).
* source language: en
* target language: ja
### How to use
This model uses transformers and sentencepiece.
```python
!pip install transformers sentencepiece
```
You can use this model directly with a pipeline:
```python
from transformers import pipeline
fugu_translator = pipeline('translation', model='staka/fugumt-en-ja')
fugu_translator('This is a cat.')
```
If you want to translate multiple sentences, we recommend using [pySBD](https://github.com/nipunsadvilkar/pySBD).
```python
!pip install transformers sentencepiece pysbd
import pysbd
seg_en = pysbd.Segmenter(language="en", clean=False)
from transformers import pipeline
fugu_translator = pipeline('translation', model='staka/fugumt-en-ja')
txt = 'This is a cat. It is very cute.'
print(fugu_translator(seg_en.segment(txt)))
```
### Eval results
The results of the evaluation using [tatoeba](https://tatoeba.org/ja) (500 randomly selected sentences) are as follows:
|source |target |BLEU(*1)|
|-------|-------|--------|
|en |ja |32.7 |
(*1) sacrebleu --tokenize ja-mecab
|
24deb6984882d91b782dbb83ed8c67d0
|
sgangireddy/whisper-medium-cv-fi
|
sgangireddy
|
whisper
| 23 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['fi']
|
['mozilla-foundation/common_voice_11_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['whisper-event', 'generated_from_trainer']
| true | true | true | 1,542 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper medium Finnish CV
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the mozilla-foundation/common_voice_11_0 fi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3010
- Wer: 15.7181
## Model description
The model is fine-tuned for 1000 steps/updates on CV11 Finnish train+validation data.
- Zero-shot - 18.8 (CV9 test data; even on CV11 the WER is close to this, only a bit higher)
- Fine-tuned - 15.71 (CV11 test data)
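As a quick reference, here is a minimal transcription sketch using the Transformers `pipeline`; the audio file name is a placeholder and the input should be 16 kHz Finnish speech:
```python
from transformers import pipeline

# Load the fine-tuned checkpoint into an ASR pipeline (chunking enables long-form audio)
asr = pipeline(
    "automatic-speech-recognition",
    model="sgangireddy/whisper-medium-cv-fi",
    chunk_length_s=30,
)

# "sample_fi.wav" is a placeholder for any local Finnish audio file
print(asr("sample_fi.wav")["text"])
```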
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0009 | 19.01 | 1000 | 0.3010 | 15.7181 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
88f2de86ac702ce84d75ef20adbb4f4c
|
Fiacre/ComicsBlend
|
Fiacre
| null | 3 | 0 | null | 8 | null | false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 |
[]
| false | true | true | 4,258 | false |
# How to use:
Download "ComicsBlend.ckpt" and add it to your model folder. Important: add all these keywords to your prompt: ComplexLA style, nvinkpunk, marioalberti artstyle, ghibli style
# Individual components of the blend:
This is an equal-part blend of four models: 25% Complex-Lineart, 25% Inkpunk-Diffusion, 25% Comic-Diffusion, and 25% Ghibli Diffusion.
# Link to the constituent models:
https://huggingface.co/Conflictx/Complex-Lineart
https://huggingface.co/Envvi/Inkpunk-Diffusion
https://huggingface.co/ogkalu/Comic-Diffusion
https://huggingface.co/nitrosocke/Ghibli-Diffusion
# Prompts
Important: use all the prompt keywords from the constituent models at the same time: ComplexLA style, nvinkpunk, marioalberti artstyle, ghibli style
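If you prefer to use the checkpoint outside a web UI, the sketch below is one possible way to load it with diffusers; it assumes a recent diffusers release that provides `StableDiffusionPipeline.from_single_file`, and the checkpoint path and prompt are only placeholders:
```python
import torch
from diffusers import StableDiffusionPipeline

# "./ComicsBlend.ckpt" is a placeholder for wherever the checkpoint was downloaded
pipe = StableDiffusionPipeline.from_single_file(
    "./ComicsBlend.ckpt", torch_dtype=torch.float16
).to("cuda")

# Remember to include all four trigger keywords in the prompt
prompt = "a knight in a misty forest, ComplexLA style, nvinkpunk, marioalberti artstyle, ghibli style"
pipe(prompt).images[0].save("comicsblend_sample.png")
```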
# Sample images:













# License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license [here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
|
65302f2c5b4bd0890e748603d8415158
|
kasrahabib/500-1000-bucket-finetunned
|
kasrahabib
|
bert
| 10 | 5 |
transformers
| 0 |
text-classification
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,724 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# kasrahabib/500-1000-bucket-finetunned
This model is a fine-tuned version of [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0050
- Validation Loss: 0.1358
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2800, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.3514 | 0.1493 | 0 |
| 0.1166 | 0.1159 | 1 |
| 0.0628 | 0.1066 | 2 |
| 0.0282 | 0.1249 | 3 |
| 0.0245 | 0.1338 | 4 |
| 0.0181 | 0.1298 | 5 |
| 0.0103 | 0.1246 | 6 |
| 0.0085 | 0.1303 | 7 |
| 0.0044 | 0.1343 | 8 |
| 0.0050 | 0.1358 | 9 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
5d8b927106ffc7ae6198de2f5d881608
|
codeparrot/codeparrot-small-text-to-code
|
codeparrot
|
gpt2
| 6 | 137 |
transformers
| 0 |
text-generation
| true | false | false |
apache-2.0
|
['code']
|
['codeparrot/codeparrot-clean', 'codeparrot/github-jupyter-text-to-code']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['code', 'gpt2', 'generation']
| false | true | true | 495 | false |
# CodeParrot 🦜 small for text-to-code generation
This model is [CodeParrot-small](https://huggingface.co/codeparrot/codeparrot-small) (from the `megatron` branch) fine-tuned on [github-jupyter-text-to-code](https://huggingface.co/datasets/codeparrot/github-jupyter-text-to-code), a dataset where the samples are a succession of docstrings and their Python code, originally extracted from Jupyter notebooks parsed in this [dataset](https://huggingface.co/datasets/codeparrot/github-jupyter-parsed).
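For illustration, a minimal sketch of prompting the model with a docstring through the Transformers `pipeline` (the docstring text is just an example):
```python
from transformers import pipeline

# Text-generation pipeline over the fine-tuned checkpoint
generator = pipeline("text-generation", model="codeparrot/codeparrot-small-text-to-code")

# The model was trained on docstrings followed by their Python code,
# so a docstring-style prompt is a natural input
prompt = '"""\nReturn the sum of squares of a list of numbers\n"""\n'
print(generator(prompt, max_new_tokens=64)[0]["generated_text"])
```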
|
56e3c156e01574c2477036edaf2c5607
|
admruul/hassan
|
admruul
| null | 27 | 4 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['stable-diffusion', 'text-to-image']
| false | true | true | 2,945 | false |
# HassanBlend1.4
I am Hassan, and I created HassansBlend; the latest version is currently 1.4. I continue to iterate on and improve this model over time. Feel free to check out our Discord or rentry page for more examples with prompts and generated outputs.
I also have some custom-created content, such as enhancement hypernetworks/embeddings, available only to Patreon or KoFi subscribers on my pages below
<b> Links </b><br>
<b>Patreon</b>
<a href="https://www.patreon.com/sd_hassan" target="_blank"><img src="https://i.imgur.com/sR32SqJ.jpg"></img></a>
<b>KoFi</b>
<a href="https://ko-fi.com/sdhassan" target="_blank"><img src="https://i.imgur.com/0P7CTN4.png"></img></a>
<b>Discord</b>
<a href="https://discord.gg/sdmodelers" target="_blank"><img src="https://i.imgur.com/HC1iHwg.png"></img></a>
### Quicklinks:
* [Latest Setup](https://rentry.org/sdhassan#current-setup)
* [HassanBlend Model Finetune Updates](https://rentry.org/sdhassan#hassanblend-finetuning-updates)
* [Latest Patreon Posts](https://rentry.org/sdhassan#patreon-posts)
* [Models](https://rentry.org/sdhassan#merged-models)
* [HassanBlend1.4](https://rentry.org/sdhassan#hassanblend14-downloads)
* [Prompts](https://rentry.org/sdhassan#prompts)
* [Photorealistic Tips](https://rentry.org/sdhassan#tips-for-photorealistic-images)
* [Embeddings](https://rentry.org/sdhassan#embeddings)
* [Hypernetworks](https://rentry.org/sdhassan#hypernetworks)
* [Wildcards](https://rentry.org/sdhassan#wildcards-i-made)
* [MyTools](https://rentry.org/sdhassan#my-tools)
* [Settings I use](https://rentry.org/sdhassan#settings)
Model details and examples with sample prompts: https://rentry.org/sdhassan
# Gradio Demo
We support a [Gradio](https://github.com/gradio-app/gradio) Web UI to run hassanblend1.4:
[](https://huggingface.co/spaces/akhaliq/hassanblend1.4)
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
|
bea8dca456a48b134a02a9db88bfdfce
|
Anjoe/kant-gpt2
|
Anjoe
|
gpt2
| 8 | 4 |
transformers
| 0 |
text-generation
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,248 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kant-gpt2
This model is a fine-tuned version of [dbmdz/german-gpt2](https://huggingface.co/dbmdz/german-gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8022
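As an illustrative sketch only (the German prompt is an arbitrary example), the checkpoint can be loaded for text generation with the Transformers `pipeline`:
```python
from transformers import pipeline

# German GPT-2 fine-tune; generates a continuation of the prompt
generator = pipeline("text-generation", model="Anjoe/kant-gpt2")
print(generator("Die Vernunft ist", max_new_tokens=50, do_sample=True)[0]["generated_text"])
```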
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 22
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.3257 | 1.0 | 1825 | 3.2231 |
| 2.9885 | 2.0 | 3650 | 3.0069 |
| 2.7955 | 3.0 | 5475 | 2.8440 |
| 2.5748 | 4.0 | 7300 | 2.7059 |
| 2.3545 | 5.0 | 9125 | 2.5806 |
| 2.1759 | 6.0 | 10950 | 2.4618 |
| 1.9697 | 7.0 | 12775 | 2.3553 |
| 1.7778 | 8.0 | 14600 | 2.2517 |
| 1.6192 | 9.0 | 16425 | 2.1599 |
| 1.4675 | 10.0 | 18250 | 2.0895 |
| 1.3195 | 11.0 | 20075 | 2.0138 |
| 1.2012 | 12.0 | 21900 | 1.9602 |
| 1.0828 | 13.0 | 23725 | 1.9097 |
| 0.9926 | 14.0 | 25550 | 1.8720 |
| 0.9076 | 15.0 | 27375 | 1.8426 |
| 0.8336 | 16.0 | 29200 | 1.8214 |
| 0.7649 | 17.0 | 31025 | 1.8058 |
| 0.7208 | 18.0 | 32850 | 1.7980 |
| 0.6798 | 19.0 | 34675 | 1.7938 |
| 0.647 | 20.0 | 36500 | 1.7969 |
| 0.6226 | 21.0 | 38325 | 1.7975 |
| 0.601 | 22.0 | 40150 | 1.8022 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Tokenizers 0.12.1
|
0eabe1368f4cf29abd80629e4c57a0df
|
muhtasham/bert-mini-mlm-finetuned-imdb
|
muhtasham
|
bert
| 6 | 6 |
transformers
| 0 |
fill-mask
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 3,014 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-mini-mlm-finetuned-imdb
This model is a fine-tuned version of [google/bert_uncased_L-4_H-256_A-4](https://huggingface.co/google/bert_uncased_L-4_H-256_A-4) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6935
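For reference, a minimal fill-mask sketch (the example sentence is only an illustration):
```python
from transformers import pipeline

# Masked-language-modeling pipeline over the fine-tuned checkpoint
fill_mask = pipeline("fill-mask", model="muhtasham/bert-mini-mlm-finetuned-imdb")
for prediction in fill_mask("This movie was absolutely [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 4))
```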
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.2058 | 0.64 | 500 | 2.9411 |
| 3.1048 | 1.28 | 1000 | 2.9042 |
| 3.0631 | 1.92 | 1500 | 2.8780 |
| 3.0197 | 2.56 | 2000 | 2.8667 |
| 3.0071 | 3.2 | 2500 | 2.8503 |
| 2.9886 | 3.84 | 3000 | 2.8319 |
| 2.9577 | 4.48 | 3500 | 2.8127 |
| 2.9498 | 5.12 | 4000 | 2.8080 |
| 2.9301 | 5.75 | 4500 | 2.7894 |
| 2.9229 | 6.39 | 5000 | 2.7912 |
| 2.9027 | 7.03 | 5500 | 2.7874 |
| 2.8961 | 7.67 | 6000 | 2.7785 |
| 2.8869 | 8.31 | 6500 | 2.7619 |
| 2.8793 | 8.95 | 7000 | 2.7607 |
| 2.8729 | 9.59 | 7500 | 2.7581 |
| 2.8523 | 10.23 | 8000 | 2.7593 |
| 2.8525 | 10.87 | 8500 | 2.7433 |
| 2.8403 | 11.51 | 9000 | 2.7505 |
| 2.8318 | 12.15 | 9500 | 2.7444 |
| 2.8314 | 12.79 | 10000 | 2.7352 |
| 2.8136 | 13.43 | 10500 | 2.7334 |
| 2.8161 | 14.07 | 11000 | 2.7280 |
| 2.7955 | 14.71 | 11500 | 2.7342 |
| 2.7951 | 15.35 | 12000 | 2.7237 |
| 2.7878 | 15.98 | 12500 | 2.7171 |
| 2.7816 | 16.62 | 13000 | 2.7160 |
| 2.7805 | 17.26 | 13500 | 2.7120 |
| 2.7776 | 17.9 | 14000 | 2.7078 |
| 2.7661 | 18.54 | 14500 | 2.7086 |
| 2.7678 | 19.18 | 15000 | 2.7017 |
| 2.7613 | 19.82 | 15500 | 2.7015 |
| 2.7516 | 20.46 | 16000 | 2.6958 |
| 2.7529 | 21.1 | 16500 | 2.6909 |
| 2.7422 | 21.74 | 17000 | 2.6966 |
| 2.738 | 22.38 | 17500 | 2.7034 |
| 2.7303 | 23.02 | 18000 | 2.6935 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
5168ab0cf48767170f6f47327d93ba41
|
repro-rights-amicus-briefs/bert-base-uncased-2-finetuned-RRamicus
|
repro-rights-amicus-briefs
|
bert
| 13 | 7 |
transformers
| 0 |
fill-mask
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,634 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-2-finetuned-RRamicus
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4784
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 928
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.0341 | 1.0 | 1113 | 1.7515 |
| 1.7881 | 2.0 | 2226 | 1.6616 |
| 1.697 | 3.0 | 3339 | 1.6061 |
| 1.6328 | 4.0 | 4452 | 1.5662 |
| 1.5919 | 5.0 | 5565 | 1.5362 |
| 1.5602 | 6.0 | 6678 | 1.5193 |
| 1.5221 | 7.0 | 7791 | 1.4984 |
| 1.5135 | 8.0 | 8904 | 1.4898 |
| 1.4917 | 9.0 | 10017 | 1.4755 |
| 1.4859 | 10.0 | 11130 | 1.4671 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
68b313b998f212f518c67379b4022e6f
|
sid321axn/my_sanskrit_model
|
sid321axn
|
t5
| 11 | 3 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null |
['itihasa']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,544 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_sanskrit_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the itihasa dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5101
- Bleu: 0.2607
- Gen Len: 18.9973
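As a rough usage sketch only: the expected input format is not documented here, so the task prefix below is an assumption that may need adjusting (or removing) to match how the model was trained.
```python
from transformers import pipeline

# Seq2seq pipeline over the fine-tuned t5-small checkpoint
translator = pipeline("text2text-generation", model="sid321axn/my_sanskrit_model")

# The "translate Sanskrit to English:" prefix is an assumption, not a documented requirement
print(translator("translate Sanskrit to English: धर्मो रक्षति रक्षितः")[0]["generated_text"])
```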
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| 3.9557 | 1.0 | 4698 | 3.7191 | 0.3291 | 18.9973 |
| 3.8243 | 2.0 | 9396 | 3.6068 | 0.2728 | 18.9973 |
| 3.7562 | 3.0 | 14094 | 3.5503 | 0.2911 | 18.9973 |
| 3.7306 | 4.0 | 18792 | 3.5207 | 0.2404 | 18.9973 |
| 3.7003 | 5.0 | 23490 | 3.5101 | 0.2607 | 18.9973 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
9efaa721bc6bd2eaa320baf180779489
|
muchad/idt5-base
|
muchad
|
t5
| 7 | 1 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
|
['id', 'en', 'multilingual']
| null | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 |
['idt5']
| false | true | true | 424 | false |
# Indonesian Version of Multilingual T5 Transformer
A smaller version of [Google's Multilingual T5-base](https://huggingface.co/google/mt5-base) with only Indonesian and some English embeddings.
This model has to be fine-tuned before it is usable on a downstream task.\
A fine-tuned idT5 for the Question Generation and Question Answering tasks is available at [idT5-qa-qg](https://huggingface.co/muchad/idt5-qa-qg).
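A minimal sketch of loading the checkpoint as a starting point for fine-tuning (only the standard seq2seq classes are assumed; `sentencepiece` must be installed for the tokenizer):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the Indonesian T5 checkpoint; fine-tune it on a downstream task before use
tokenizer = AutoTokenizer.from_pretrained("muchad/idt5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("muchad/idt5-base")
```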
Paper: [idT5: Indonesian Version of Multilingual T5 Transformer](https://arxiv.org/abs/2302.00856)
Authors: *Mukhlish Fuadi, Adhi Dharma Wibawa, Surya Sumpeno*
## Citation
```
@misc{https://doi.org/10.48550/arxiv.2302.00856,
doi = {10.48550/ARXIV.2302.00856},
url = {https://arxiv.org/abs/2302.00856},
author = {Fuadi, Mukhlish and Wibawa, Adhi Dharma and Sumpeno, Surya},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences, I.2.7},
title = {idT5: Indonesian Version of Multilingual T5 Transformer},
publisher = {arXiv},
year = {2023}
}
```
## Abstract
Indonesian language is spoken by almost 200 million people and is the 10th most spoken language in the world, but it is under-represented in NLP (Natural Language Processing) research. A sparsity of language resources has hampered previous work on Indonesian. The Transformer is a new architecture rapidly becoming dominant for NLP, surpassing alternatives like convolutional and recurrent neural networks. T5 (Text-to-Text Transfer Transformer) is a Transformer model that converts all text-based language problems to text-to-text format for English. The multilingual variant is mT5 (multilingual T5) which has shown promising results on many NLP tasks across languages. However, the size of this multilingual model is a drawback for its application in real production applications, which sometimes require only one language. In this study, the mT5 model was adapted for only one language, Indonesian, resulting in a pre-trained T5 model that was specific only for Indonesian with a smaller size. For performance comparison, we fine-tuned this model and the mT5 model to the Sentiment Analysis (SA), Question Generation (QG), and Question Answering (QA) tasks with the exact mechanism and dataset. Fine-tuned model based on our model achieved 77.18% accuracy on SA, 8% higher than the mT5-based model, and obtained nearly the same score as the mT5-based model on QG and QA. The results confirm that it is possible to produce a smaller pre-trained model that maintains comparable yields while reducing the model size by up to 58%. In addition, the resulting model requires less memory, loads faster, and inference times faster.
|
ee13c1d3b1cced3fd584250c1b5b9a51
|
ROBERTaCoder/wav2vec2-base-timit-demo-google-colab
|
ROBERTaCoder
|
wav2vec2
| 12 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,998 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-google-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5452
- Wer: 0.3296
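For reference, a minimal transcription sketch with the Transformers `pipeline`; the audio file name is a placeholder, and the input should be 16 kHz English speech:
```python
from transformers import pipeline

# CTC-based ASR pipeline over the fine-tuned wav2vec2 checkpoint
asr = pipeline(
    "automatic-speech-recognition",
    model="ROBERTaCoder/wav2vec2-base-timit-demo-google-colab",
)
print(asr("sample_16khz.wav")["text"])  # "sample_16khz.wav" is a placeholder path
```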
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5557 | 1.0 | 500 | 1.9362 | 1.0072 |
| 0.867 | 2.01 | 1000 | 0.5197 | 0.5173 |
| 0.4281 | 3.01 | 1500 | 0.4609 | 0.4552 |
| 0.3002 | 4.02 | 2000 | 0.4066 | 0.4129 |
| 0.2252 | 5.02 | 2500 | 0.4122 | 0.3952 |
| 0.1857 | 6.02 | 3000 | 0.4650 | 0.3990 |
| 0.1541 | 7.03 | 3500 | 0.4784 | 0.3834 |
| 0.1372 | 8.03 | 4000 | 0.3875 | 0.3805 |
| 0.1213 | 9.04 | 4500 | 0.5606 | 0.4002 |
| 0.1043 | 10.04 | 5000 | 0.4713 | 0.3762 |
| 0.0972 | 11.04 | 5500 | 0.4770 | 0.3692 |
| 0.0876 | 12.05 | 6000 | 0.4755 | 0.3671 |
| 0.0812 | 13.05 | 6500 | 0.4854 | 0.3616 |
| 0.0705 | 14.06 | 7000 | 0.4380 | 0.3659 |
| 0.0759 | 15.06 | 7500 | 0.5025 | 0.3516 |
| 0.0709 | 16.06 | 8000 | 0.5310 | 0.3577 |
| 0.0572 | 17.07 | 8500 | 0.5097 | 0.3561 |
| 0.0572 | 18.07 | 9000 | 0.5150 | 0.3510 |
| 0.0482 | 19.08 | 9500 | 0.4954 | 0.3488 |
| 0.0703 | 20.08 | 10000 | 0.5279 | 0.3512 |
| 0.0457 | 21.08 | 10500 | 0.5336 | 0.3459 |
| 0.036 | 22.09 | 11000 | 0.5471 | 0.3440 |
| 0.0368 | 23.09 | 11500 | 0.5109 | 0.3417 |
| 0.0342 | 24.1 | 12000 | 0.5506 | 0.3415 |
| 0.0318 | 25.1 | 12500 | 0.5291 | 0.3357 |
| 0.03 | 26.1 | 13000 | 0.5347 | 0.3363 |
| 0.026 | 27.11 | 13500 | 0.5475 | 0.3318 |
| 0.0232 | 28.11 | 14000 | 0.5628 | 0.3332 |
| 0.0246 | 29.12 | 14500 | 0.5452 | 0.3296 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.1+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
8a88f99103c4418b94c82135c8278415
|
sd-concepts-library/a-hat-kid
|
sd-concepts-library
| null | 9 | 0 | null | 1 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,040 | false |
### A Hat kid on Stable Diffusion
This is the `<hatintime-kid>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:




|
a20e6d812ffbc0abf0bb1e2cd912174e
|
bitextor/bicleaner-ai-full-en-hbs
|
bitextor
|
xlm-roberta
| 12 | 2 |
transformers
| 0 | null | false | true | false |
gpl-3.0
|
['en', 'hbs', 'multilingual']
| null | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 |
['bicleaner-ai']
| false | true | true | 430 | false |
# Bicleaner AI full model for en-hbs
Bicleaner AI is a tool that aims to detect noisy sentence pairs in a parallel corpus. It
indicates the likelihood of a pair of sentences being mutual translations (with a value near 1) or not (with a value near 0).
Sentence pairs considered very noisy are scored with 0.
See our repository for further instructions on how to use it: https://github.com/bitextor/bicleaner-ai
|
4d7b40deb79ed7a0aef052e6274b91a6
|
Padomin/t5-base-TEDxJP-5front-1body-5rear
|
Padomin
|
t5
| 20 | 1 |
transformers
| 0 |
text2text-generation
| true | false | false |
cc-by-sa-4.0
| null |
['te_dx_jp']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,953 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-TEDxJP-5front-1body-5rear
This model is a fine-tuned version of [sonoisa/t5-base-japanese](https://huggingface.co/sonoisa/t5-base-japanese) on the te_dx_jp dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4383
- Wer: 0.1697
- Mer: 0.1641
- Wil: 0.2500
- Wip: 0.7500
- Hits: 55852
- Substitutions: 6314
- Deletions: 2421
- Insertions: 2228
- Cer: 0.1328
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Mer | Wil | Wip | Hits | Substitutions | Deletions | Insertions | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:------:|:-----:|:-------------:|:---------:|:----------:|:------:|
| 0.6185 | 1.0 | 1457 | 0.4683 | 0.1948 | 0.1863 | 0.2758 | 0.7242 | 54959 | 6658 | 2970 | 2956 | 0.1682 |
| 0.5149 | 2.0 | 2914 | 0.4280 | 0.1773 | 0.1713 | 0.2591 | 0.7409 | 55376 | 6468 | 2743 | 2238 | 0.1426 |
| 0.4705 | 3.0 | 4371 | 0.4173 | 0.1743 | 0.1682 | 0.2552 | 0.7448 | 55680 | 6418 | 2489 | 2351 | 0.1387 |
| 0.4023 | 4.0 | 5828 | 0.4114 | 0.1713 | 0.1656 | 0.2515 | 0.7485 | 55751 | 6313 | 2523 | 2230 | 0.1335 |
| 0.3497 | 5.0 | 7285 | 0.4162 | 0.1722 | 0.1662 | 0.2522 | 0.7478 | 55787 | 6331 | 2469 | 2323 | 0.1365 |
| 0.3246 | 6.0 | 8742 | 0.4211 | 0.1714 | 0.1655 | 0.2513 | 0.7487 | 55802 | 6310 | 2475 | 2284 | 0.1367 |
| 0.3492 | 7.0 | 10199 | 0.4282 | 0.1711 | 0.1652 | 0.2514 | 0.7486 | 55861 | 6350 | 2376 | 2325 | 0.1341 |
| 0.2788 | 8.0 | 11656 | 0.4322 | 0.1698 | 0.1641 | 0.2502 | 0.7498 | 55883 | 6342 | 2362 | 2265 | 0.1327 |
| 0.2801 | 9.0 | 13113 | 0.4362 | 0.1710 | 0.1652 | 0.2514 | 0.7486 | 55828 | 6351 | 2408 | 2288 | 0.1352 |
| 0.2773 | 10.0 | 14570 | 0.4383 | 0.1697 | 0.1641 | 0.2500 | 0.7500 | 55852 | 6314 | 2421 | 2228 | 0.1328 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu116
- Datasets 2.4.0
- Tokenizers 0.12.1
|
0216a34cb0cd3683e69caa907c84ca22
|
jonatasgrosman/exp_w2v2t_zh-cn_no-pretraining_s930
|
jonatasgrosman
|
wav2vec2
| 10 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['zh-CN']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'zh-CN']
| false | true | true | 420 | false |
# exp_w2v2t_zh-cn_no-pretraining_s930
Fine-tuned randomly initialized wav2vec2 model for speech recognition using the train split of [Common Voice 7.0 (zh-CN)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
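A minimal transcription sketch with HuggingSound (the audio paths are placeholders):
```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_zh-cn_no-pretraining_s930")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]  # placeholder paths

transcriptions = model.transcribe(audio_paths)
print(transcriptions)
```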
|
d26a47c02f55fb362e6bfd9b3ad4c36a
|
Lvxue/distilled-mt5-small-b1.25
|
Lvxue
|
mt5
| 17 | 4 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
|
['en', 'ro']
|
['wmt16']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,036 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilled-mt5-small-b1.25
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7945
- Bleu: 7.5563
- Gen Len: 44.1141
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
85cf6badd310c0f6d36bba551fb4ca85
|
lewington/MJv4-hallucinations
|
lewington
| null | 5 | 0 | null | 0 | null | false | false | false |
openrail
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,451 | false |
# MJv4 Hallucinations
These are 3 models trained on a small (<2000) dataset of Midjourney v4 images with no particular style. <b> These models are nowhere near as good as Midjourney v4 </b>, and they all suffer from a lot of "language drift" but they do have an interesting style. They are the best of something like 60 different models I trained as part of a set of experiments aimed at replicating Midjourney v4's style with only a few, uncaptioned images.
The models are:
- <b>mjg-4000-model.ckpt</b>: trained on 250 MJv4 images with no regularization for 4000 steps, prompt: "mjg style"
- <b>mjg-12000-model.ckpt</b>: trained on 250 MJv4 images with no regularization for 12000 steps, prompt: "mjg style"
- <b>mjv-1200-model.ckpt</b>: trained on 7 MJv4 images with 1000 regularization images for 1200 steps, prompt: "mjv style"
Models you can download are <b>bolded</b>
<img src="https://github.com/Lewington-pitsos/mj4-hallucinations/blob/main/compare.png?raw=true" width="100%"/>
In my subjective opinion, only <b>mjv-1200-model.ckpt</b> is actually worth downloading.
## Credits:
- [NitroSock](https://github.com/nitrosocke/dreambooth-training-guide) for the regularization images
- [prompthero](https://huggingface.co/prompthero/openjourney) whose idea I copied
## Take Down
As far as I can tell, uploading these models does not cause any person or corporate entity any harm, but if you think I am wrong about this please reach out.
|
7d37af53ec8f498dff55924bc10c5d08
|
muhtasham/tiny-mlm-snli
|
muhtasham
|
bert
| 10 | 13 |
transformers
| 1 |
fill-mask
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,536 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-mlm-snli-plain_text
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1233
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.665 | 0.4 | 500 | 3.2495 |
| 3.4103 | 0.8 | 1000 | nan |
| 3.2635 | 1.2 | 1500 | 3.1518 |
| 3.1738 | 1.6 | 2000 | 3.1555 |
| 3.0556 | 2.0 | 2500 | 3.0593 |
| 2.9933 | 2.4 | 3000 | 3.0970 |
| 2.9019 | 2.8 | 3500 | 3.0773 |
| 2.876 | 3.2 | 4000 | 3.1233 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1
- Datasets 2.8.0
- Tokenizers 0.13.2
|
e35fe6ecf82041ab35a0dcd52f26293e
|
ml6team/mt5-small-german-query-generation
|
ml6team
|
mt5
| 8 | 1,653 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
|
['de']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['pytorch', 'query-generation']
| false | true | true | 935 | false |
# mt5-small-german-query-generation
## Model description:
This model was created with the purpose of generating possible queries for a German input article.
For this model, we finetuned a multilingual T5 model [mt5-small](https://huggingface.co/google/mt5-small) on the [MMARCO dataset](https://huggingface.co/datasets/unicamp-dl/mmarco), the machine-translated version of the MS MARCO dataset.
The model was trained for 1 epoch on 200,000 unique queries of the dataset. We trained the model on one K80 GPU for 25,000 iterations with the following parameters:
- learning rate: 1e-3
- train batch size: 8
- max input sequence length: 512
- max target sequence length: 64
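A minimal usage sketch is shown below; the German paragraph is only an example input:
```python
from transformers import pipeline

# Seq2seq pipeline over the fine-tuned mT5 checkpoint
query_generator = pipeline(
    "text2text-generation", model="ml6team/mt5-small-german-query-generation"
)

article = (
    "Die Elektromobilität wächst in Deutschland rasant: Immer mehr Hersteller bringen "
    "neue Modelle auf den Markt, und der Ausbau der Ladeinfrastruktur schreitet voran."
)
# max_length matches the 64-token target length used during training
print(query_generator(article, max_length=64)[0]["generated_text"])
```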
## Model Performance:
Model evaluation was done on 2000 evaluation paragraphs of the dataset. Mean [f1 ROUGE scores](https://github.com/pltrdy/rouge) were calculated for the model.
| Rouge-1 | Rouge-2 | Rouge-L |
|---|---|---|
|0.162 | 0.052 | 0.161 |
|
8856c3e93d0b63fbab8d1492f2a07ed1
|
gokuls/distilbert_add_GLUE_Experiment_logit_kd_rte_192
|
gokuls
|
distilbert
| 17 | 2 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,688 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_add_GLUE_Experiment_logit_kd_rte_192
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4235
- Accuracy: 0.4729
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4313 | 1.0 | 10 | 0.4259 | 0.4729 |
| 0.4183 | 2.0 | 20 | 0.4235 | 0.4729 |
| 0.4175 | 3.0 | 30 | 0.4239 | 0.4729 |
| 0.4169 | 4.0 | 40 | 0.4240 | 0.4729 |
| 0.4183 | 5.0 | 50 | 0.4245 | 0.4729 |
| 0.417 | 6.0 | 60 | 0.4237 | 0.4729 |
| 0.4174 | 7.0 | 70 | 0.4235 | 0.4729 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
|
805f50bbb85217fcd17e1d75ad067c4d
|
MGanesh29/distilbert-base-uncased-finetuned-cola
|
MGanesh29
|
distilbert
| 18 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,793 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1195
- Matthews Correlation: 0.6749
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| No log | 1.0 | 8 | 1.6008 | 0.5863 |
| No log | 2.0 | 16 | 1.5039 | 0.4583 |
| No log | 3.0 | 24 | 1.3972 | 0.6021 |
| No log | 4.0 | 32 | 1.2925 | 0.6038 |
| No log | 5.0 | 40 | 1.2222 | 0.6333 |
| No log | 6.0 | 48 | 1.1626 | 0.6333 |
| No log | 7.0 | 56 | 1.1195 | 0.6749 |
| No log | 8.0 | 64 | 1.1048 | 0.6749 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
c44470a28403e62ac106cc4ce51ef56f
|
itisphilippe/StackOverflowNER
|
itisphilippe
| null | 71 | 0 | null | 0 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 770 | false |
Models and other data for https://github.com/jeniyat/StackOverflowNER. Use `git lfs fetch --all` to download all files.
Please note that folders are stored decompressed due to HuggingFace file size limitations.
The individual files in ./data_ctc/ are compressed using `gzip`, and can be decompressed using `gunzip -d *.gz`.
Intermediate model checkpoints have not been uploaded due to bandwidth limitations.
**BibTeX entry and citation info**
```bibtex
@inproceedings{Tabassum20acl,
title = {Code and Named Entity Recognition in StackOverflow},
author = "Tabassum, Jeniya and Maddela, Mounica and Xu, Wei and Ritter, Alan",
booktitle = {The Annual Meeting of the Association for Computational Linguistics (ACL)},
year = {2020}
}
```
|
f814e3e9148a36e301e360e4f46a0e28
|
Miranda/t5-small-train
|
Miranda
|
t5
| 43 | 3 |
transformers
| 0 |
summarization
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['summarization', 'generated_from_trainer']
| true | true | true | 1,975 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-train
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2367
- Rouge1: 43.9525
- Rouge2: 22.3403
- Rougel: 38.7683
- Rougelsum: 39.2056
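For reference, a minimal summarization sketch; the input text is only an example, and it is assumed the checkpoint keeps t5-small's default `summarize:` task prefix via its config:
```python
from transformers import pipeline

# Summarization pipeline over the fine-tuned t5-small checkpoint
summarizer = pipeline("summarization", model="Miranda/t5-small-train")

text = (
    "The committee met on Tuesday to review the quarterly results, discuss the upcoming "
    "product launch, and agree on a revised budget for the next fiscal year."
)
print(summarizer(text, max_length=60, min_length=10)[0]["summary_text"])
```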
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.6e-05
- train_batch_size: 9
- eval_batch_size: 9
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 3.3237 | 1.0 | 40 | 2.6713 | 34.4731 | 14.9731 | 29.4814 | 29.9747 |
| 2.7401 | 2.0 | 80 | 2.4318 | 38.1153 | 18.3492 | 33.4476 | 33.9181 |
| 2.5882 | 3.0 | 120 | 2.3339 | 41.2707 | 19.8571 | 36.2685 | 36.6119 |
| 2.4264 | 4.0 | 160 | 2.2878 | 42.184 | 20.9666 | 37.3488 | 37.6172 |
| 2.3915 | 5.0 | 200 | 2.2605 | 43.4928 | 21.7195 | 38.4917 | 38.8471 |
| 2.3599 | 6.0 | 240 | 2.2462 | 44.2876 | 22.28 | 38.9234 | 39.3673 |
| 2.3073 | 7.0 | 280 | 2.2398 | 43.9822 | 22.3746 | 38.7625 | 39.0964 |
| 2.3026 | 8.0 | 320 | 2.2367 | 43.9525 | 22.3403 | 38.7683 | 39.2056 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
5b342274535dca68e600edf0ec06da27
|
YaHi/bert-base-uncased-finetuned-effectiveFeedback-Classification-kaggleEffectiveFeedback2
|
YaHi
|
bert
| 14 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,383 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-effectiveFeedback-Classification-kaggleEffectiveFeedback2
This model is a fine-tuned version of [YaHi/bert-base-uncased-finetuned-effectiveFeedback](https://huggingface.co/YaHi/bert-base-uncased-finetuned-effectiveFeedback) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9724
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.7506 | 1.0 | 3677 | 0.7284 |
| 0.623 | 2.0 | 7354 | 0.7558 |
| 0.4225 | 3.0 | 11031 | 0.9724 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
4799a35a0fcbd14fd5ece873b1503beb
|
MultiBertGunjanPatrick/multiberts-seed-3-500k
|
MultiBertGunjanPatrick
|
bert
| 7 | 4 |
transformers
| 0 | null | true | false | false |
apache-2.0
|
['en']
|
['bookcorpus', 'wikipedia']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['exbert', 'multiberts', 'multiberts-seed-3']
| false | true | true | 6,483 | false |
# MultiBERTs Seed 3 Checkpoint 500k (uncased)
Seed 3 intermediate checkpoint 500k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-3](https://hf.co/multberts-seed-3). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they
were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of
publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, they
were pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input and then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-3-500k')
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-3-500k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
0611b7a591aaaedc1b5601e9b5612f59
|
kdo6301/bert-base-uncased-finetuned-cola-2
|
kdo6301
|
bert
| 13 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['glue']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,556 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola-2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9235
- Matthews Correlation: 0.6016
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4906 | 1.0 | 535 | 0.5046 | 0.5080 |
| 0.2901 | 2.0 | 1070 | 0.5881 | 0.5235 |
| 0.1818 | 3.0 | 1605 | 0.7253 | 0.5584 |
| 0.1177 | 4.0 | 2140 | 0.8316 | 0.5927 |
| 0.0826 | 5.0 | 2675 | 0.9235 | 0.6016 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
ad734268e4a54f11feb56e02d4991f8c
|
theojolliffe/distilbart-cnn-arxiv-pubmed-pubmed-v3-e16
|
theojolliffe
|
bart
| 19 | 2 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 3,080 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-cnn-arxiv-pubmed-pubmed-v3-e16
This model is a fine-tuned version of [theojolliffe/distilbart-cnn-arxiv-pubmed-pubmed](https://huggingface.co/theojolliffe/distilbart-cnn-arxiv-pubmed-pubmed) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8306
- Rouge1: 56.4519
- Rouge2: 41.6818
- Rougel: 44.7833
- Rougelsum: 54.6359
- Gen Len: 141.9815
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| No log | 1.0 | 398 | 1.1157 | 50.9487 | 31.3005 | 34.0145 | 48.6057 | 141.8519 |
| 1.3569 | 2.0 | 796 | 0.9688 | 53.0653 | 34.1855 | 37.0759 | 50.5942 | 141.2963 |
| 0.8704 | 3.0 | 1194 | 0.9053 | 53.9684 | 36.0388 | 38.6674 | 51.9604 | 142.0 |
| 0.6287 | 4.0 | 1592 | 0.8515 | 54.2379 | 36.4915 | 39.1393 | 51.6991 | 141.4074 |
| 0.6287 | 5.0 | 1990 | 0.8274 | 53.6806 | 34.8373 | 37.7369 | 51.239 | 141.6481 |
| 0.465 | 6.0 | 2388 | 0.8486 | 55.2534 | 39.1757 | 41.6366 | 53.2989 | 141.9259 |
| 0.3432 | 7.0 | 2786 | 0.8116 | 54.539 | 37.6314 | 40.5531 | 52.1997 | 141.3889 |
| 0.2577 | 8.0 | 3184 | 0.7976 | 54.8212 | 36.8347 | 40.6768 | 52.7785 | 142.0 |
| 0.204 | 9.0 | 3582 | 0.8010 | 53.9302 | 37.3523 | 40.135 | 52.139 | 141.7778 |
| 0.204 | 10.0 | 3980 | 0.8168 | 54.3151 | 38.0665 | 42.4112 | 52.4682 | 142.0 |
| 0.1663 | 11.0 | 4378 | 0.8171 | 54.7027 | 38.3117 | 42.0196 | 52.8821 | 142.0 |
| 0.135 | 12.0 | 4776 | 0.8202 | 54.1035 | 37.9154 | 40.7676 | 52.2509 | 142.0 |
| 0.1102 | 13.0 | 5174 | 0.8204 | 56.223 | 41.0947 | 44.0131 | 54.3353 | 142.0 |
| 0.0928 | 14.0 | 5572 | 0.8280 | 56.1637 | 41.0408 | 44.2931 | 54.5488 | 142.0 |
| 0.0928 | 15.0 | 5970 | 0.8273 | 56.2608 | 41.3855 | 44.4432 | 54.5778 | 142.0 |
| 0.0847 | 16.0 | 6368 | 0.8306 | 56.4519 | 41.6818 | 44.7833 | 54.6359 | 141.9815 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.0
- Tokenizers 0.12.1
|
4340dee8f0bf172df4f3160f79aee46e
|
shibing624/bert4ner-base-chinese
|
shibing624
|
bert
| 9 | 150 |
transformers
| 2 |
token-classification
| true | false | false |
apache-2.0
|
['zh']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['bert', 'pytorch', 'zh', 'ner']
| false | true | true | 3,759 | false |
# BERT for Chinese Named Entity Recognition (bert4ner) Model
A Chinese named entity recognition model.
`bert4ner-base-chinese` evaluated on the PEOPLE (People's Daily) test data:
The overall performance of BERT on the PEOPLE **test** set:
| | Accuracy | Recall | F1 |
| ----------- | -------- | ------ | ------ |
| BertSoftmax | 0.9425 | 0.9627 | 0.9525 |
It reaches near-SOTA performance on the PEOPLE test set.
Network structure of BertSoftmax (vanilla BERT):

## Usage
This model is released as part of the open-source NER project [nerpy](https://github.com/shibing624/nerpy), which supports bert4ner models and can be called as follows:
```shell
>>> from nerpy import NERModel
>>> model = NERModel("bert", "shibing624/bert4ner-base-chinese")
>>> predictions, raw_outputs, entities = model.predict(["常建良,男,1963年出生,工科学士,高级工程师"], split_on_space=False)
entities: [('常建良', 'PER'), ('1963年', 'TIME')]
```
Model files:
```
bert4ner-base-chinese
├── config.json
├── model_args.json
├── pytorch_model.bin
├── special_tokens_map.json
├── tokenizer_config.json
└── vocab.txt
```
## Usage (HuggingFace Transformers)
Without [nerpy](https://github.com/shibing624/nerpy), you can use the model like this:
First, you pass your input through the transformer model, then you apply the BIO tags to get the entity words.
Install package:
```
pip install transformers seqeval
```
```python
import os
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification
from seqeval.metrics.sequence_labeling import get_entities
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained("shibing624/bert4ner-base-chinese")
model = AutoModelForTokenClassification.from_pretrained("shibing624/bert4ner-base-chinese")
label_list = ['I-ORG', 'B-LOC', 'O', 'B-ORG', 'I-LOC', 'I-PER', 'B-TIME', 'I-TIME', 'B-PER']
sentence = "王宏伟来自北京,是个警察,喜欢去王府井游玩儿。"
def get_entity(sentence):
    tokens = tokenizer.tokenize(sentence)
    inputs = tokenizer.encode(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = model(inputs).logits
    predictions = torch.argmax(outputs, dim=2)
    char_tags = [(token, label_list[prediction]) for token, prediction in zip(tokens, predictions[0].numpy())][1:-1]
    print(sentence)
    print(char_tags)
    pred_labels = [i[1] for i in char_tags]
    entities = []
    line_entities = get_entities(pred_labels)
    for i in line_entities:
        word = sentence[i[1]: i[2] + 1]
        entity_type = i[0]
        entities.append((word, entity_type))
    print("Sentence entity:")
    print(entities)
get_entity(sentence)
```
output:
```shell
王宏伟来自北京,是个警察,喜欢去王府井游玩儿。
[('王', 'B-PER'), ('宏', 'I-PER'), ('伟', 'I-PER'), ('来', 'O'), ('自', 'O'), ('北', 'B-LOC'), ('京', 'I-LOC'), (',', 'O'), ('是', 'O'), ('个', 'O'), ('警', 'O'), ('察', 'O'), (',', 'O'), ('喜', 'O'), ('欢', 'O'), ('去', 'O'), ('王', 'B-LOC'), ('府', 'I-LOC'), ('井', 'I-LOC'), ('游', 'O'), ('玩', 'O'), ('儿', 'O'), ('。', 'O')]
Sentence entity:
[('王宏伟', 'PER'), ('北京', 'LOC'), ('王府井', 'LOC')]
```
### Training datasets
#### Chinese NER datasets
| Dataset | Corpus | Download | File size |
| :------- | :--------- | :---------: | :---------: |
| **`CNER Chinese NER dataset`** | CNER (120k characters) | [CNER github](https://github.com/shibing624/nerpy/tree/main/examples/data/cner) | 1.1MB |
| **`PEOPLE Chinese NER dataset`** | People's Daily dataset (2M characters) | [PEOPLE github](https://github.com/shibing624/nerpy/tree/main/examples/data/people) | 12.8MB |
The CNER Chinese NER dataset uses the following data format:
```text
美 B-LOC
国 I-LOC
的 O
华 B-PER
莱 I-PER
士 I-PER
我 O
跟 O
他 O
```
If you want to train bert4ner yourself, please refer to [https://github.com/shibing624/nerpy/tree/main/examples](https://github.com/shibing624/nerpy/tree/main/examples)
## Citation
```latex
@software{nerpy,
author = {Xu Ming},
title = {nerpy: Named Entity Recognition toolkit},
year = {2022},
url = {https://github.com/shibing624/nerpy},
}
```
|
b9016b107854390d49f41923296d34a1
|
jacquesle/bert-base-cased-NER-favsbot
|
jacquesle
|
bert
| 19 | 7 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null |
['favsbot']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 3,086 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-NER-favsbot
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the favsbot dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0992
- Precision: 0.8571
- Recall: 0.96
- F1: 0.9057
- Accuracy: 0.9583
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 10 | 1.7643 | 0.0 | 0.0 | 0.0 | 0.5694 |
| No log | 2.0 | 20 | 1.1420 | 0.0 | 0.0 | 0.0 | 0.5833 |
| No log | 3.0 | 30 | 0.7946 | 0.9375 | 0.6 | 0.7317 | 0.8056 |
| No log | 4.0 | 40 | 0.5625 | 0.8182 | 0.72 | 0.7660 | 0.8611 |
| No log | 5.0 | 50 | 0.4217 | 0.8148 | 0.88 | 0.8462 | 0.9306 |
| No log | 6.0 | 60 | 0.3082 | 0.8519 | 0.92 | 0.8846 | 0.9444 |
| No log | 7.0 | 70 | 0.2386 | 0.8148 | 0.88 | 0.8462 | 0.9444 |
| No log | 8.0 | 80 | 0.1965 | 0.8148 | 0.88 | 0.8462 | 0.9444 |
| No log | 9.0 | 90 | 0.1626 | 0.8148 | 0.88 | 0.8462 | 0.9444 |
| No log | 10.0 | 100 | 0.1465 | 0.8571 | 0.96 | 0.9057 | 0.9583 |
| No log | 11.0 | 110 | 0.1314 | 0.8571 | 0.96 | 0.9057 | 0.9583 |
| No log | 12.0 | 120 | 0.1215 | 0.8571 | 0.96 | 0.9057 | 0.9583 |
| No log | 13.0 | 130 | 0.1160 | 0.8571 | 0.96 | 0.9057 | 0.9583 |
| No log | 14.0 | 140 | 0.1104 | 0.8571 | 0.96 | 0.9057 | 0.9583 |
| No log | 15.0 | 150 | 0.1050 | 0.8571 | 0.96 | 0.9057 | 0.9583 |
| No log | 16.0 | 160 | 0.1012 | 0.8571 | 0.96 | 0.9057 | 0.9583 |
| No log | 17.0 | 170 | 0.0997 | 0.8571 | 0.96 | 0.9057 | 0.9583 |
| No log | 18.0 | 180 | 0.0997 | 0.8571 | 0.96 | 0.9057 | 0.9583 |
| No log | 19.0 | 190 | 0.0995 | 0.8571 | 0.96 | 0.9057 | 0.9583 |
| No log | 20.0 | 200 | 0.0992 | 0.8571 | 0.96 | 0.9057 | 0.9583 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.12.1
|
b39253f353b93fbe0fd684434e551d8c
|
YurtsAI/yurts-python-code-gen-30-sparse
|
YurtsAI
|
codegen
| 10 | 32,961 |
transformers
| 12 |
text-generation
| true | false | false |
bsd-3-clause
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,240 | false |
# Maverick (Yurt's Python Code Generation Model)
## Model description
This code generation model was fine-tuned on Python code from a generic multi-language code generation model. This model was then pushed to 30% sparsity using Yurts' in-house technology without performance loss. In this specific instance, the class representation for the network is still dense. This particular model has 350M trainable parameters.
## Training data
This model was tuned on a subset of the Python data available in the BigQuery open-source [Github dataset](https://cloud.google.com/blog/topics/public-datasets/github-on-bigquery-analyze-all-the-open-source-code).
## How to use
The model is great at autocompleting partially written function and class signatures. It is also decent at generating code from natural-language prompts given as a comment. If you find something cool you can do with the model, be sure to share it with us!
Check out our [colab notebook](https://colab.research.google.com/drive/1NDO4X418HuPJzF8mFc6_ySknQlGIZMDU?usp=sharing) to see how to invoke the model and try it out.
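For a quick local test outside the notebook, here is a minimal sketch using the standard `transformers` causal-LM classes (the prompt and sampling settings below are illustrative, not from the original card):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "YurtsAI/yurts-python-code-gen-30-sparse"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Autocomplete a partially written function signature
prompt = "# Reverse a string\ndef reverse_string(s):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.2)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```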
## Feedback and Questions
Have any questions or feedback? Find us on [Discord](https://discord.gg/2x4rmSGER9).
|
4a08d8f597741fd75a6432e22113d9c1
|
google/t5-efficient-base-el8
|
google
|
t5
| 12 | 26 |
transformers
| 1 |
text2text-generation
| true | true | true |
apache-2.0
|
['en']
|
['c4']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['deep-narrow']
| false | true | true | 6,248 | false |
# T5-Efficient-BASE-EL8 (Deep-Narrow version)
T5-Efficient-BASE-EL8 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details model architecture
This model checkpoint - **t5-efficient-base-el8** - is of model type **Base** with the following variations:
- **el** is **8**
It has **194.61** million parameters and thus requires *ca.* **778.44 MB** of memory in full precision (*fp32*)
or **389.22 MB** of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
whereas the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
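For reference, a minimal sketch of loading the checkpoint with the standard `transformers` T5 classes (not part of the original card; the span-corruption prompt is only illustrative, since a pretrained-only checkpoint gives no meaningful downstream outputs until fine-tuned):

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/t5-efficient-base-el8")
model = T5ForConditionalGeneration.from_pretrained("google/t5-efficient-base-el8")

# Span-corruption style input, matching the MLM pretraining objective
inputs = tokenizer("The <extra_id_0> walks in <extra_id_1> park.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```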
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend the reader to go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future.
|
ff9e146a9870c77c162b635308ad9d66
|
kejian/curious-awr
|
kejian
|
gpt2
| 23 | 0 |
transformers
| 0 | null | true | false | false |
apache-2.0
|
['en']
|
['kejian/codeparrot-train-more-filter-3.3b-cleaned']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 4,722 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# curious-awr
This model was trained from scratch on the kejian/codeparrot-train-more-filter-3.3b-cleaned dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 12589
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.23.0
- Pytorch 1.13.0+cu116
- Datasets 2.0.0
- Tokenizers 0.12.1
# Full config
```python
{'dataset': {'datasets': ['kejian/codeparrot-train-more-filter-3.3b-cleaned'],
'is_split_by_sentences': True,
'skip_tokens': 1649934336},
'generation': {'batch_size': 128,
'every_n_steps': 256,
'force_call_on': [12588],
'metrics_configs': [{}, {'n': 1}, {}],
'scenario_configs': [{'display_as_html': True,
'generate_kwargs': {'do_sample': True,
'eos_token_id': 0,
'max_length': 640,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_hits_threshold': 0,
'num_samples': 2048},
{'display_as_html': True,
'generate_kwargs': {'do_sample': True,
'eos_token_id': 0,
'max_length': 272,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'functions',
'num_hits_threshold': 0,
'num_samples': 2048,
'prompts_path': 'resources/functions_csnet.jsonl',
'use_prompt_for_scoring': True}],
'scorer_config': {}},
'kl_gpt3_callback': {'every_n_steps': 256,
'force_call_on': [12588],
'gpt3_kwargs': {'model_name': 'code-cushman-001'},
'max_tokens': 64,
'num_samples': 4096},
'model': {'from_scratch': False,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'model_kwargs': {'revision': '9b71edc6c769705c1ef1955b6f5cfdd5a7d1b802',
'value_head_config': {'is_detached': False}},
'path_or_name': 'kejian/spectacular-awr'},
'objective': {'alpha': 0.05, 'beta': 1, 'name': 'AWR'},
'tokenizer': {'path_or_name': 'codeparrot/codeparrot-small'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 128,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'curious-awr',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0005,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000.0,
'output_dir': 'training_output',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 12588,
'save_strategy': 'steps',
'seed': 42,
'tokens_already_seen': 1649934336,
'warmup_ratio': 0.01,
             'weight_decay': 0.1}}
```
# Wandb URL:
https://wandb.ai/kejian/uncategorized/runs/3mpa7db7
|
5d735a9ccbd9e3768f787cd662b8792d
|
mahaamami/distilroberta-base-finetuned-wikitext2
|
mahaamami
|
roberta
| 15 | 4 |
transformers
| 0 |
fill-mask
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,273 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-wikitext2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8833
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.1026 | 1.0 | 5835 | 1.9705 |
| 2.0088 | 2.0 | 11670 | 1.9090 |
| 1.9766 | 3.0 | 17505 | 1.8833 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
761c47cda751cf5f144d38c90fac61fe
|
asi/igpt-fr-cased-base
|
asi
|
gpt2
| 8 | 6 |
transformers
| 4 |
text-generation
| true | true | false |
apache-2.0
|
['fr']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['tf', 'pytorch', 'gpt2', 'text-to-image']
| false | true | true | 5,200 | false |
<img src="https://raw.githubusercontent.com/AntoineSimoulin/gpt-fr/main/imgs/igpt-logo.png" width="400">
## Model description
**iGPT-fr** 🇫🇷 is a pre-trained incremental language model for French, developed by the [Laboratoire de Linguistique Formelle (LLF)](http://www.llf.cnrs.fr/en). We adapted the [GPT-fr 🇫🇷](https://huggingface.co/asi/gpt-fr-cased-base) model to generate images conditioned on text inputs.
## Intended uses & limitations
The model can be leveraged for image generation tasks. It is currently in a development phase.
#### How to use
The model can be used through the 🤗 `Transformers` library. You will also need to install the `Taming Transformers` library for high-resolution image synthesis:
```bash
pip install git+https://github.com/CompVis/taming-transformers.git
```
```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel
from huggingface_hub import hf_hub_download
from omegaconf import OmegaConf
from taming.models import vqgan
import torch
from PIL import Image
import numpy as np
from IPython.display import display  # display() is used below; assumes a notebook/IPython environment
# Load VQGAN model
vqgan_ckpt = hf_hub_download(repo_id="boris/vqgan_f16_16384", filename="model.ckpt", force_download=False)
vqgan_config = hf_hub_download(repo_id="boris/vqgan_f16_16384", filename="config.yaml", force_download=False)
config = OmegaConf.load(vqgan_config)
vqgan_model = vqgan.VQModel(**config.model.params)
vqgan_model.eval().requires_grad_(False)
vqgan_model.init_from_ckpt(vqgan_ckpt)
# Load pretrained model
device = "cuda" if torch.cuda.is_available() else "cpu"  # `device` was not defined in the original snippet
vqgan_model.to(device)
model = GPT2LMHeadModel.from_pretrained("asi/igpt-fr-cased-base")
model.to(device)
model.eval()
tokenizer = GPT2Tokenizer.from_pretrained("asi/igpt-fr-cased-base")
# Generate a sample of text
input_sentence = "Une carte de l'europe"
input_ids = tokenizer.encode(input_sentence, return_tensors='pt')
input_ids = torch.cat((input_ids, torch.tensor([[50000]])), 1) # Add image generation token
greedy_output = model.generate(
input_ids.to(device),
max_length=256+input_ids.shape[1],
do_sample=True,
top_p=0.92,
top_k=0)
def custom_to_pil(x):
x = x.detach().cpu()
x = torch.clamp(x, -1., 1.)
x = (x + 1.)/2.
x = x.permute(1,2,0).numpy()
x = (255*x).astype(np.uint8)
x = Image.fromarray(x)
if not x.mode == "RGB":
x = x.convert("RGB")
return x
z_idx = greedy_output[0, input_ids.shape[1]:] - 50001
z_quant = vqgan_model.quantize.get_codebook_entry(z_idx, shape=(1, 16, 16, 256))
x_rec = vqgan_model.decode(z_quant).to('cpu')[0]
display(custom_to_pil(x_rec))
```
You may also filter results based on CLIP:
```python
from tqdm import tqdm
def hallucinate(prompt, num_images=64):
input_ids = tokenizer.encode(prompt, return_tensors='pt')
input_ids = torch.cat((input_ids, torch.tensor([[50000]])), 1).to(device) # Add image generation token
all_images = []
for i in tqdm(range(num_images)):
greedy_output = model.generate(
input_ids.to(device),
max_length=256+input_ids.shape[1],
do_sample=True,
top_p=0.92,
top_k=0)
z_idx = greedy_output[0, input_ids.shape[1]:] - 50001
z_quant = vqgan_model.quantize.get_codebook_entry(z_idx, shape=(1, 16, 16, 256))
x_rec = vqgan_model.decode(z_quant).to('cpu')[0]
all_images.append(custom_to_pil(x_rec))
return all_images
input_sentence = "Une carte de l'europe"
all_images = hallucinate(input_sentence)
from transformers import pipeline
opus_model = "Helsinki-NLP/opus-mt-fr-en"
opus_translator = pipeline("translation", model=opus_model)
opus_translator(input_sentence)
from transformers import CLIPProcessor, CLIPModel
clip_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
clip_processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
def clip_top_k(prompt, images, k=8):
    prompt_en = opus_translator(prompt)[0]['translation_text']  # translate the French prompt to English for CLIP
    inputs = clip_processor(text=prompt_en, images=images, return_tensors="pt", padding=True)
outputs = clip_model(**inputs)
logits = outputs.logits_per_text # this is the image-text similarity score
scores = np.array(logits[0].detach()).argsort()[-k:][::-1]
return [images[score] for score in scores]
filtered_images = clip_top_k(input_sentence, all_images)
for fi in filtered_images:
display(fi)
```
## Training data
We created a dedicated corpus to train our generative model. The training corpus consists in text-image pairs. We aggregated portions from existing corpora: [Laion-5B](https://laion.ai/blog/laion-5b/) and [WIT](https://github.com/google-research-datasets/wit). The final dataset includes 10,807,534 samples.
## Training procedure
We pre-trained the model on the new CNRS (French National Centre for Scientific Research) [Jean Zay](http://www.idris.fr/eng/jean-zay/) supercomputer. We performed the training within a total of 140 hours of computation on Tesla V-100 hardware (TDP of 300W). The training was distributed over 8 compute nodes of 8 GPUs each. We used data parallelization to divide each micro-batch across the computing units. We estimated the total emissions at 1161.22 kgCO2eq, using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
|
2e15de9cc38c04963aef2b333dc775f5
|
johko/capdec_05
|
johko
| null | 3 | 0 | null | 0 |
image-to-text
| false | false | false |
apache-2.0
|
['en']
|
['MS-COCO', 'Flickr30k']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Image Captioning']
| false | true | true | 1,346 | false |
# CapDec - NoiseLevel: 0.05
## Model Description
These are model weights originally provided by the authors of the paper [Text-Only Training for Image Captioning using Noise-Injected CLIP](https://arxiv.org/pdf/2211.00575.pdf).
Their method trains an image-captioning model using only text samples on top of a frozen CLIP model. To bridge the gap between text and image embeddings, they inject zero-mean Gaussian noise into the CLIP text embeddings before decoding.
In their words:
*Specifically, we assume that the visual embedding corresponding to a text embedding
lies somewhere within a ball of small radius around the text embedding (see Fig. 1).
We would like all text embeddings in this ball to decode to the same caption, which should
also correspond to the visual content mapped to this ball. We implement this intuition by
adding zero-mean Gaussian noise of STD to the text embedding before decoding it.*
The "Noise Level" of 0.05 is equivalent to the Noise Variance which is the square of the STD.
The reported metrics are results of a model with a Noise Variance of 0.016, which the authors unfortunately do not provide in their repository.
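To make the mechanism concrete, here is a minimal sketch of the noise-injection step described above (my own illustration, not the authors' code; the 512-dimensional embedding size is only an assumption):

```python
import torch

def add_caption_noise(text_embedding: torch.Tensor, noise_variance: float = 0.05) -> torch.Tensor:
    """Add zero-mean Gaussian noise to a CLIP text embedding.

    The "noise level" of this checkpoint is the variance, so the STD is its square root.
    """
    std = noise_variance ** 0.5
    return text_embedding + torch.randn_like(text_embedding) * std

# Example with a dummy text embedding
emb = torch.randn(1, 512)
noisy_emb = add_caption_noise(emb)
```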
## Datasets
The authors trained the model on MS-COCO and Flickr30k datasets.
## Performance
The authors don't explicitly report the performance for this NoiseLevel but it can be estimated from the following figure from the original paper:

|
4ac811cd0b7e3bf4455cc0f101abe178
|
agnesluhtaru/whisper-medium-et-ERR2020
|
agnesluhtaru
|
whisper
| 113 | 8 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer', 'whisper-event']
| true | true | true | 2,190 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-medium-et-ERR2020
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the following training sets: Common Voice 11, VoxPopuli, FLEURS and [ERR2020](http://bark.phon.ioc.ee/lw/korpused/ERR2020.html). The checkpoint at step 7000 was submitted to the [Whisper Event leaderboard](https://huggingface.co/spaces/whisper-event/winners?dataset=mozilla-foundation%2Fcommon_voice_11_0); the current score is for the final checkpoint.
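As a usage note not present in the original card, a minimal transcription sketch with the 🤗 `pipeline` API (the audio file name is hypothetical; input should be speech audio, ideally sampled at 16 kHz):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="agnesluhtaru/whisper-medium-et-ERR2020",
    chunk_length_s=30,  # chunk long recordings before transcription
)
print(asr("estonian_speech.wav")["text"])
```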
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 10000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.1828 | 0.1 | 1000 | 0.3547 | 20.8829 |
| 0.09 | 0.2 | 2000 | 0.3476 | 19.0096 |
| 0.083 | 0.3 | 3000 | 0.3386 | 18.1304 |
| 0.0765 | 0.4 | 4000 | 0.3365 | 17.2591 |
| 0.0592 | 0.5 | 5000 | 0.3534 | 19.0213 |
| 0.0672 | 0.6 | 6000 | 0.3622 | 18.4263 |
| 0.0629 | 0.7 | 7000 | 0.3487 | 15.9839 |
| 0.0546 | 1.03 | 8000 | 0.3677 | 16.1021 |
| 0.0459 | 1.13 | 9000 | 0.3704 | 17.9073 |
| 0.0425 | 1.23 | 10000 | 0.3672 | 15.9119 |
The validation set is combined from the validation sets of Common Voice 11, VoxPopuli, FLEURS and ERR2020.
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.12.1+rocm5.1.1h
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
ed4aff326e69c6f5efcfb7f4dd593fa6
|
zari/my-awesome-model
|
zari
|
gpt2
| 4 | 4 |
transformers
| 0 |
text-generation
| true | false | false |
apache-2.0
| null |
[]
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,227 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my-awesome-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4356
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 91 | 3.4934 |
| No log | 2.0 | 182 | 3.4451 |
| No log | 3.0 | 273 | 3.4356 |
### Framework versions
- Transformers 4.7.0
- Pytorch 1.9.0+cu102
- Datasets 1.8.0
- Tokenizers 0.10.3
|
93c9db945ad52dc41471523e313f1fcd
|
MiguelCosta/finetuning-sentiment-model-3000-samples
|
MiguelCosta
|
distilbert
| 16 | 11 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['imdb']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,055 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5805
- Accuracy: 0.8767
- F1: 0.8810
## Model description
More information needed
## Intended uses & limitations
More information needed
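As an illustration not present in the original card, a minimal sketch of running the classifier with the 🤗 `pipeline` API (the label names returned depend on the model config, which is not documented here):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="MiguelCosta/finetuning-sentiment-model-3000-samples",
)
print(classifier("This movie was an absolute delight from start to finish."))
```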
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
fa7dd604f352b19f7cc5d7800cbc049f
|
autoevaluate/roberta-base-squad2
|
autoevaluate
|
roberta
| 11 | 12 |
transformers
| 0 |
question-answering
| true | true | true |
cc-by-4.0
|
['en']
|
['squad_v2']
| null | 7 | 7 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 6,411 | false |
# roberta-base for QA
> Note: this is a clone of [`roberta-base-squad2`](https://huggingface.co/deepset/roberta-base-squad2) for internal testing.
This is the [roberta-base](https://huggingface.co/roberta-base) model, fine-tuned using the [SQuAD2.0](https://huggingface.co/datasets/squad_v2) dataset. It's been trained on question-answer pairs, including unanswerable questions, for the task of Question Answering.
## Overview
**Language model:** roberta-base
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD 2.0
**Code:** See [an example QA pipeline on Haystack](https://haystack.deepset.ai/tutorials/first-qa-system)
**Infrastructure**: 4x Tesla v100
## Hyperparameters
```
batch_size = 96
n_epochs = 2
base_LM_model = "roberta-base"
max_seq_len = 386
learning_rate = 3e-5
lr_schedule = LinearWarmup
warmup_proportion = 0.2
doc_stride=128
max_query_length=64
```
## Using a distilled model instead
Please note that we have also released a distilled version of this model called [deepset/tinyroberta-squad2](https://huggingface.co/deepset/tinyroberta-squad2). The distilled model has a comparable prediction quality and runs at twice the speed of the base model.
## Usage
### In Haystack
Haystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in [Haystack](https://github.com/deepset-ai/haystack/):
```python
from haystack.nodes import FARMReader, TransformersReader  # import path for Haystack v1.x

reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2")
# or
reader = TransformersReader(model_name_or_path="deepset/roberta-base-squad2", tokenizer="deepset/roberta-base-squad2")
```
For a complete example of ``roberta-base-squad2`` being used for Question Answering, check out the [Tutorials in Haystack Documentation](https://haystack.deepset.ai/tutorials/first-qa-system)
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "deepset/roberta-base-squad2"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'Why is model conversion important?',
'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Performance
Evaluated on the SQuAD 2.0 dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/).
```
"exact": 79.87029394424324,
"f1": 82.91251169582613,
"total": 11873,
"HasAns_exact": 77.93522267206478,
"HasAns_f1": 84.02838248389763,
"HasAns_total": 5928,
"NoAns_exact": 81.79983179142137,
"NoAns_f1": 81.79983179142137,
"NoAns_total": 5945
```
Using the official [question answering notebook](https://github.com/huggingface/notebooks/blob/main/examples/question_answering.ipynb) from `transformers` yields:
```
{'HasAns_exact': 77.93522267206478,
'HasAns_f1': 83.93715663402219,
'HasAns_total': 5928,
'NoAns_exact': 81.90075693860386,
'NoAns_f1': 81.90075693860386,
'NoAns_total': 5945,
'best_exact': 79.92082877116145,
'best_exact_thresh': 0.0,
'best_f1': 82.91749890730902,
'best_f1_thresh': 0.0,
'exact': 79.92082877116145,
'f1': 82.91749890730917,
'total': 11873}
```
which is consistent with the officially reported results. Using the question answering `Evaluator` from `evaluate` gives:
```
{'HasAns_exact': 77.91835357624831,
'HasAns_f1': 84.07820736158186,
'HasAns_total': 5928,
'NoAns_exact': 81.91757779646763,
'NoAns_f1': 81.91757779646763,
'NoAns_total': 5945,
'best_exact': 79.92082877116145,
'best_exact_thresh': 0.996823787689209,
'best_f1': 82.99634576260925,
'best_f1_thresh': 0.996823787689209,
'exact': 79.92082877116145,
'f1': 82.9963457626089,
'latency_in_seconds': 0.016523243643392558,
'samples_per_second': 60.52080460605492,
'total': 11873,
'total_time_in_seconds': 196.18047177799986}
```
which is also consistent with the officially reported results.
## Authors
**Branden Chan:** branden.chan@deepset.ai
**Timo Möller:** timo.moeller@deepset.ai
**Malte Pietsch:** malte.pietsch@deepset.ai
**Tanay Soni:** tanay.soni@deepset.ai
## About us
<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://huggingface.co/spaces/deepset/README/resolve/main/haystack-logo-colored.svg" class="w-40"/>
</div>
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://huggingface.co/spaces/deepset/README/resolve/main/deepset-logo-colored.svg" class="w-40"/>
</div>
</div>
[deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/) which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.
Some of our other work:
- [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2)
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
## Get in touch and join the Haystack community
<p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://haystack.deepset.ai">Documentation</a></strong>.
We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community/join"><img alt="slack" class="h-7 inline-block m-0" style="margin: 0" src="https://huggingface.co/spaces/deepset/README/resolve/main/Slack_RGB.png"/>community open to everyone!</a></strong></p>
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Slack](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs)
|
6d0ab537511f4a60aa7e67629cf2afbb
|
noflm/whisper-base-ja-elite
|
noflm
|
whisper
| 114 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
other
|
['ja']
|
['Elite35P-Server/EliteVoiceProject']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['whisper-event', 'generated_from_trainer']
| true | true | true | 1,915 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Base Japanese Elite
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the Elite35P-Server/EliteVoiceProject twitter dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4385
- Wer: 17.0732
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 200
- training_steps: 10000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:-------:|
| 0.0002 | 111.0 | 1000 | 0.2155 | 9.7561 |
| 0.0001 | 222.0 | 2000 | 0.2448 | 12.1951 |
| 0.0 | 333.0 | 3000 | 0.2674 | 13.4146 |
| 0.0 | 444.0 | 4000 | 0.2943 | 15.8537 |
| 0.0 | 555.0 | 5000 | 0.3182 | 17.0732 |
| 0.0 | 666.0 | 6000 | 0.3501 | 18.9024 |
| 0.0 | 777.0 | 7000 | 0.3732 | 16.4634 |
| 0.0 | 888.0 | 8000 | 0.4025 | 17.0732 |
| 0.0 | 999.0 | 9000 | 0.4178 | 20.1220 |
| 0.0 | 1111.0 | 10000 | 0.4385 | 17.0732 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
|
ac9d7a7c99fbdd8955e86bfa9696825d
|
pszemraj/electra-base-discriminator-CoLA
|
pszemraj
|
electra
| 16 | 7 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 1 | 0 | 1 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,046 | false |
# electra-base-discriminator-CoLA
This model is a fine-tuned version of [google/electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3542
- Matthews Correlation: 0.6580
## Model description
This model aims for a reasonable trade-off between accuracy/quality and inference speed.
```json
{
"epoch": 8.0,
"eval_loss": 0.3541961908340454,
"eval_matthews_correlation": 0.6579677841732349,
"eval_runtime": 1.9552,
"eval_samples": 1043,
"eval_samples_per_second": 533.451,
"eval_steps_per_second": 33.756
}
```
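As an illustration not present in the original card, a minimal sketch of scoring sentences for linguistic acceptability with the 🤗 `pipeline` API (the label names returned depend on the model config):

```python
from transformers import pipeline

cola = pipeline("text-classification", model="pszemraj/electra-base-discriminator-CoLA")
print(cola("The cat sat on the mat."))   # expected to score as acceptable
print(cola("The cat sat mat the on."))   # expected to score as unacceptable
```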
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 128
- eval_batch_size: 16
- seed: 22165
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 8.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4004 | 1.0 | 67 | 0.3569 | 0.6340 |
| 0.2843 | 2.0 | 134 | 0.3542 | 0.6580 |
| 0.1228 | 3.0 | 201 | 0.4201 | 0.6412 |
| 0.0989 | 4.0 | 268 | 0.4780 | 0.6757 |
| 0.0681 | 5.0 | 335 | 0.4900 | 0.6925 |
| 0.0506 | 6.0 | 402 | 0.5837 | 0.6785 |
| 0.0093 | 7.0 | 469 | 0.6298 | 0.6652 |
| 0.0244 | 8.0 | 536 | 0.6292 | 0.6750 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.8.0
- Tokenizers 0.13.1
|
8cdd516fba1e51d54b23962904911e43
|
eunyounglee/mbart_finetuned_dialect_translation_4
|
eunyounglee
|
mbart
| 15 | 1 |
transformers
| 0 |
text2text-generation
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,449 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart_finetuned_dialect_translation_4
This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0109
- Bleu: 99.3856
- Gen Len: 14.951
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 0.1512 | 1.0 | 938 | 0.0563 | 98.0769 | 14.981 |
| 0.044 | 2.0 | 1876 | 0.0244 | 98.639 | 14.962 |
| 0.0214 | 3.0 | 2814 | 0.0109 | 99.3856 | 14.951 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
785017a1a9f56309694801840ca41da6
|
IlyaGusev/rut5_base_headline_gen_telegram
|
IlyaGusev
|
t5
| 8 | 2,266 |
transformers
| 1 |
summarization
| true | false | false |
apache-2.0
|
['ru']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['summarization']
| false | true | true | 1,019 | false |
# RuT5TelegramHeadlines
## Model description
Based on the [rut5-base](https://huggingface.co/cointegrated/rut5-base) model.
## Intended uses & limitations
#### How to use
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration
model_name = "IlyaGusev/rut5_base_headline_gen_telegram"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
article_text = "..."
input_ids = tokenizer(
[article_text],
max_length=600,
add_special_tokens=True,
padding="max_length",
truncation=True,
return_tensors="pt"
)["input_ids"]
output_ids = model.generate(
input_ids=input_ids
)[0]
headline = tokenizer.decode(output_ids, skip_special_tokens=True)
print(headline)
```
## Training data
- Dataset: [ru_all_split.tar.gz](https://www.dropbox.com/s/ykqk49a8avlmnaf/ru_all_split.tar.gz)
## Training procedure
- Training script: [train.py](https://github.com/IlyaGusev/summarus/blob/master/external/hf_scripts/train.py)
|
1858e9a0cd15170ea45a23906c20de5b
|
AAkhilesh/wav2vec2-large-xls-r-300m-hsb-colab
|
AAkhilesh
|
wav2vec2
| 13 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null |
['common_voice']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,099 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hsb-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
b9e1ba2dd76bfde97d9d6c306e764249
|
ntsema/wav2vec2-xlsr-53-espeak-cv-ft-mhr3-ntsema-colab
|
ntsema
|
wav2vec2
| 13 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null |
['audiofolder']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,612 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlsr-53-espeak-cv-ft-mhr3-ntsema-colab
This model is a fine-tuned version of [facebook/wav2vec2-xlsr-53-espeak-cv-ft](https://huggingface.co/facebook/wav2vec2-xlsr-53-espeak-cv-ft) on the audiofolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7701
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 5.329 | 5.79 | 400 | 1.3162 | 1.0 |
| 1.5529 | 11.59 | 800 | 0.6968 | 1.0 |
| 0.8373 | 17.39 | 1200 | 0.7345 | 1.0 |
| 0.4959 | 23.19 | 1600 | 0.7296 | 1.0 |
| 0.3207 | 28.98 | 2000 | 0.7701 | 1.0 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
e5d67c6ffa8fff9bc7e79907a0c41069
|
jonatasgrosman/exp_w2v2r_es_xls-r_accent_surpeninsular-0_nortepeninsular-10_s265
|
jonatasgrosman
|
wav2vec2
| 10 | 3 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['es']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'es']
| false | true | true | 495 | false |
# exp_w2v2r_es_xls-r_accent_surpeninsular-0_nortepeninsular-10_s265
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
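For a quick start (not part of the original card), a minimal transcription sketch with the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool mentioned above; the audio paths are hypothetical and should point to 16 kHz speech recordings:

```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2r_es_xls-r_accent_surpeninsular-0_nortepeninsular-10_s265")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```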
|
666102b61c5021f6a17b95ad6763e07a
|
Duskfallcrew/duskfallcomicmixpartdeux
|
Duskfallcrew
| null | 107 | 15 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['text-to-image']
| false | true | true | 906 | false |
### DuskfallComicMixPartDeux Dreambooth model trained by Duskfallcrew with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
If you want to donate towards costs and don't want to subscribe:
https://ko-fi.com/DUSKFALLcrew
If you want to monthly support the EARTH & DUSK media projects and not just AI:
https://www.patreon.com/earthndusk
Discord https://discord.gg/Da7s8d3KJ7
Do not sell merges, or this model.
Do share, and credit if you use this model.
DO PLS REVIEW AND YELL AT ME IF IT SUCKS!
We never update the images on here anymore
see civit https://civitai.com/user/duskfallcrew
|
74df58f915f9dbe644e7ead4ca0f24f0
|
Helsinki-NLP/opus-mt-fr-sg
|
Helsinki-NLP
|
marian
| 10 | 9 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 768 | false |
### opus-mt-fr-sg
* source languages: fr
* target languages: sg
* OPUS readme: [fr-sg](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-sg/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-sg/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-sg/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-sg/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.sg | 29.7 | 0.473 |
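For convenience (not part of the original card), a minimal translation sketch using the standard Marian classes in 🤗 `transformers`; the example sentence is arbitrary:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-fr-sg"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src_text = ["Bonjour, comment allez-vous ?"]
batch = tokenizer(src_text, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```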
|
cc0b97ac530dcfd7f164d2ef1c4e28fc
|
Nour33/t5-small-finetuned-samsum
|
Nour33
|
t5
| 9 | 5 |
transformers
| 0 |
text2text-generation
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,618 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-samsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.7087
- Validation Loss: 1.6756
- Epoch: 7
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 14728, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.1000 | 1.7915 | 0 |
| 1.9259 | 1.7424 | 1 |
| 1.8512 | 1.7167 | 2 |
| 1.8005 | 1.6925 | 3 |
| 1.7655 | 1.6840 | 4 |
| 1.7392 | 1.6799 | 5 |
| 1.7204 | 1.6757 | 6 |
| 1.7087 | 1.6756 | 7 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
cb9bc2aa6674eb71c08c4d801109ea29
|
nielsr/coref-roberta-large
|
nielsr
| null | 6 | 25 |
transformers
| 0 | null | true | false | false |
apache-2.0
|
['en']
|
['wikipedia', 'quoref', 'docred', 'fever', 'gap', 'winograd_wsc', 'winogender', 'glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['exbert']
| false | true | true | 2,405 | false |
# CorefRoBERTa large model
Pretrained model on English language using Masked Language Modeling (MLM) and Mention Reference Prediction (MRP) objectives. It was introduced in
[this paper](https://arxiv.org/abs/2004.06870) and first released in
[this repository](https://github.com/thunlp/CorefBERT).
Disclaimer: The team releasing CorefRoBERTa did not write a model card for this model, so this model card has been written by me.
## Model description
CorefRoBERTa is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Mention reference prediction (MRP): this is a novel training task which is proposed to enhance coreferential reasoning ability. MRP utilizes the
mention reference masking strategy to mask one of the repeated mentions and then employs a copy-based training objective to predict the masked tokens by copying from other tokens in the sequence.
This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks, especially those that involve coreference resolution. If you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the CorefRoBERTa model as inputs.
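As a sketch of the feature-extraction use case described above (my own addition, assuming the checkpoint loads with the standard RoBERTa classes and ships the usual tokenizer files):

```python
import torch
from transformers import RobertaTokenizer, RobertaModel

tokenizer = RobertaTokenizer.from_pretrained("nielsr/coref-roberta-large")
model = RobertaModel.from_pretrained("nielsr/coref-roberta-large")

inputs = tokenizer("Alice told Bob that she would be late.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# (batch, seq_len, hidden_size) features to feed into a downstream classifier
features = outputs.last_hidden_state
```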
### BibTeX entry and citation info
```bibtex
@misc{ye2020coreferential,
title={Coreferential Reasoning Learning for Language Representation},
author={Deming Ye and Yankai Lin and Jiaju Du and Zhenghao Liu and Peng Li and Maosong Sun and Zhiyuan Liu},
year={2020},
eprint={2004.06870},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
8c4793f007d5c737aa051d4a7774002b
|
course5i/SEAD-L-6_H-384_A-12-qqp
|
course5i
|
bert
| 11 | 3 |
transformers
| 0 |
text-classification
| true | true | true |
apache-2.0
|
['en']
|
['glue', 'qqp']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['SEAD']
| false | true | true | 3,621 | false |
## Paper
## [SEAD: SIMPLE ENSEMBLE AND KNOWLEDGE DISTILLATION FRAMEWORK FOR NATURAL LANGUAGE UNDERSTANDING](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63)
Authors: *Moyan Mei*, *Rohit Sroch*
## Abstract
With the widespread use of pre-trained language models (PLM), there has been increased research on how to make them applicable, especially in limited-resource or low latency high throughput scenarios. One of the dominant approaches is knowledge distillation (KD), where a smaller model is trained by receiving guidance from a large PLM. While there are many successful designs for learning knowledge from teachers, it remains unclear how students can learn better. Inspired by real university teaching processes, in this work we further explore knowledge distillation and propose a very simple yet effective framework, SEAD, to further improve task-specific generalization by utilizing multiple teachers. Our experiments show that SEAD leads to better performance compared to other popular KD methods [[1](https://arxiv.org/abs/1910.01108)] [[2](https://arxiv.org/abs/1909.10351)] [[3](https://arxiv.org/abs/2002.10957)] and achieves comparable or superior performance to its teacher model such as BERT [[4](https://arxiv.org/abs/1810.04805)] on total 13 tasks for the GLUE [[5](https://arxiv.org/abs/1804.07461)] and SuperGLUE [[6](https://arxiv.org/abs/1905.00537)] benchmarks.
*Moyan Mei and Rohit Sroch. 2022. [SEAD: Simple ensemble and knowledge distillation framework for natural language understanding](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63).
Lattice, THE MACHINE LEARNING JOURNAL by Association of Data Scientists, 3(1).*
## SEAD-L-6_H-384_A-12-qqp
This is a student model distilled from [**BERT base**](https://huggingface.co/bert-base-uncased) as the teacher, using the SEAD framework on the **qqp** task. For weight initialization, we used [microsoft/xtremedistil-l6-h384-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h384-uncased).
## All SEAD Checkpoints
Other Community Checkpoints: [here](https://huggingface.co/models?search=SEAD)
## Intended uses & limitations
More information needed
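As an illustration not present in the original card, a minimal sketch of scoring a question pair with this checkpoint (QQP is a sentence-pair duplicate-detection task; the label order is not documented here, so treat the index-to-label mapping as an assumption):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "course5i/SEAD-L-6_H-384_A-12-qqp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

q1 = "How do I learn Python quickly?"
q2 = "What is the fastest way to learn Python?"
inputs = tokenizer(q1, q2, return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)
```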
### Training hyperparameters
Please take a look at the `training_args.bin` file
```python
import torch

hyperparameters = torch.load("training_args.bin")
```
### Evaluation results
| eval_accuracy | eval_f1 | eval_runtime | eval_samples_per_second | eval_steps_per_second | eval_loss | eval_samples |
|:-------------:|:-------:|:------------:|:-----------------------:|:---------------------:|:---------:|:------------:|
| 0.9126 | 0.8822 | 23.0122 | 1756.896 | 54.927 | 0.3389 | 40430 |
### Framework versions
- Transformers >=4.8.0
- Pytorch >=1.6.0
- TensorFlow >=2.5.0
- Flax >=0.3.5
- Datasets >=1.10.2
- Tokenizers >=0.11.6
If you use these models, please cite the following paper:
```
@article{article,
author={Mei, Moyan and Sroch, Rohit},
title={SEAD: Simple Ensemble and Knowledge Distillation Framework for Natural Language Understanding},
volume={3},
number={1},
journal={Lattice, The Machine Learning Journal by Association of Data Scientists},
day={26},
year={2022},
month={Feb},
url = {www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63}
}
```
|
dbd15c9236b1879f6330fcc5f010ccea
|
ali2066/finetuned_sentence_itr4_3e-05_all_27_02_2022-18_46_19
|
ali2066
|
distilbert
| 13 | 6 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,615 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr4_3e-05_all_27_02_2022-18_46_19
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3962
- Accuracy: 0.8231
- F1: 0.8873
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3591 | 0.8366 | 0.8950 |
| No log | 2.0 | 390 | 0.3558 | 0.8415 | 0.9012 |
| 0.3647 | 3.0 | 585 | 0.4049 | 0.8427 | 0.8983 |
| 0.3647 | 4.0 | 780 | 0.5030 | 0.8378 | 0.8949 |
| 0.3647 | 5.0 | 975 | 0.5719 | 0.8354 | 0.8943 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
9f913bbe7806a4cc57608b9c2c6554f8
|
minchul/ddpm-ema-flowers-64
|
minchul
| null | 8 | 1 |
diffusers
| 0 | null | false | false | false |
apache-2.0
|
['en']
|
['huggan/flowers-102-categories']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,223 | false |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-ema-flowers-64
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/flowers-102-categories` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
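As a starting point, here is a minimal sketch assuming the standard 🤗 Diffusers API for unconditional image generation (the exact output attribute may differ across library versions):
```python
from diffusers import DDPMPipeline

# Load the trained pipeline from the Hub and run the full DDPM sampling loop
pipeline = DDPMPipeline.from_pretrained("minchul/ddpm-ema-flowers-64")
result = pipeline(batch_size=1)
image = result.images[0]  # PIL image of a generated 64x64 flower
image.save("generated_flower.png")
```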
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(0.95, 0.999), weight_decay=1e-06 and epsilon=1e-08
- lr_scheduler: cosine
- lr_warmup_steps: 500
- ema_inv_gamma: 1.0
- ema_power: 0.75
- ema_max_decay: 0.9999
- mixed_precision: no
### Training results
📈 [TensorBoard logs](https://huggingface.co/minchul/ddpm-ema-flowers-64/tensorboard?#scalars)
|
0dc3603f48e220d2db521732391fb1a8
|
Someman/distilbert-base-uncased-finetuned-emotion
|
Someman
|
distilbert
| 12 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['emotion']
| null | 1 | 0 | 1 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,344 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2186
- Accuracy: 0.9245
- F1: 0.9246
## Model description
More information needed
## Intended uses & limitations
More information needed
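A minimal inference sketch, assuming the standard 🤗 Transformers text-classification pipeline (the emotion dataset uses six labels such as joy, sadness, and anger):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Someman/distilbert-base-uncased-finetuned-emotion",
)
# e.g. [{'label': 'joy', 'score': ...}] — label names come from the checkpoint config
print(classifier("I can't believe how lucky I am to be here!"))
```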
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.3083 | 0.9005 | 0.8972 |
| No log | 2.0 | 500 | 0.2186 | 0.9245 | 0.9246 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
e34e9b478d1adde1178057f347ab843b
|
thiros/YuzuLemonTea
|
thiros
| null | 12 | 0 | null | 63 |
text-to-image
| false | false | false |
cc0-1.0
| null | null | null | 0 | 0 | 0 | 0 | 2 | 2 | 0 |
['stable-diffusion', 'text-to-image']
| false | true | true | 2,307 | false |
# YuzuLemonTea Mix models ☕
List of my experimental merge models
- [Recommended Settings](#recommended-settings)
- [YuzuLemonMilk](#yuzulemonmilk)
- [YuzuLemonChaiLatte](#yuzulemonchailatte)
- [YuzuGinger](#yuzuginger)
# Important Notice (Jan 15/23)
According to bbc-mc's note, there is a possibility of a bug where some tokens (prompts) can be ignored when merging with the "Add difference" option.
The Milk and ChaiLatte models have now been replaced with bug-fixed versions.
https://note.com/bbcmc/n/n12c05bf109cc
# Recommended Settings
VAE: "kl-f8-anime2" and "vae-ft-mse-840000-ema-pruned" are suitable.
Steps: 20-30, Sampler: DPM++ SDE Karras or DPM++ 2M Karras, CFG scale: 8, Clip skip: 2, ENSD: 31377, Hires upscale: 2, Hires upscaler: Latent (bicubic antialiased), Denoising strength: 0.54~0.7
Negative Prompt: (worst quality:2), (low quality:2),inaccurate limb,lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, normal quality, jpeg artifacts,signature, watermark, username, blurry, artist name
- The weights on (worst quality) and (low quality) are adjustable between 1.4 and 2.0
- If you don't want a 3DCG-ish painted look, you can add (3d) with a weight of 0.8~1.0 to the Negative Prompt
# Sample prompt
4girls,(a 3d reader of:0.8) (teenage loli children:1.2), (wearing intricate casual camisole, cute hair ornament,crop jacket,hot pants, tighhigh:1.1),
shiny brown skin,
looking at viewer, (alluring smug:1.2),
dynamic angle,
(onomichi street:1.2),fisheye
<img src="https://i.imgur.com/2JiZwFU.jpg" width="" height="1000">
# YuzuLemonMilk
A block-merged model of Anything V3 and some photorealistic models.
Rather photorealistic.
Works fine with (realistic) and (photorealistic) in the positive prompt.
<img src="https://i.imgur.com/qYK8DKn.jpg" width="" height="1000">
# YuzuLemonChaiLatte
Combination of a weight merge of ACertainModel and Anything-V3.0, and a block merge of several realistic models.
A rather anime-ish style with realistic backgrounds.
- v3.5
<img src="https://i.imgur.com/WLKr3pj.jpg" width="" height="1000">
- v9.5
<img src="https://i.imgur.com/Ufh3JK2.jpg" width="" height="1000">
# YuzuGinger
YuzuLemonChaiLatte with more anime models merged in. It can produce a very anime look.
- v1
<img src="https://i.imgur.com/4vc4HSL.jpg" width="" height="1000">
- v4
<img src="https://i.imgur.com/M6q6hYp.jpg" width="" height="1000">
|
ddbd9276ea3b8056ce6c086d05f064ca
|
gus1999/distilcamembert-base-finetuned-allocine
|
gus1999
|
camembert
| 11 | 2 |
transformers
| 0 |
fill-mask
| true | false | false |
mit
| null |
['allocine']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,294 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilcamembert-base-finetuned-allocine
This model is a fine-tuned version of [cmarkea/distilcamembert-base](https://huggingface.co/cmarkea/distilcamembert-base) on the allocine dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1493
## Model description
More information needed
## Intended uses & limitations
More information needed
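A minimal inference sketch, assuming the standard 🤗 Transformers fill-mask pipeline; CamemBERT-style checkpoints use `<mask>` as the mask token:
```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="gus1999/distilcamembert-base-finetuned-allocine",
)
# Allocine is a French movie-review corpus, so a review-like sentence fits the domain
for prediction in fill_mask("Ce film est vraiment <mask>."):
    print(prediction["token_str"], round(prediction["score"], 3))
```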
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4479 | 1.0 | 157 | 2.2066 |
| 2.3065 | 2.0 | 314 | 2.1144 |
| 2.2567 | 3.0 | 471 | 2.1565 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
6c4ff4860077660322dfd8011efa4901
|
philschmid/tf-distilbart-cnn-12-6
|
philschmid
|
bart
| 9 | 14 |
transformers
| 0 |
summarization
| false | true | false |
apache-2.0
|
['en']
|
['cnn_dailymail', 'xsum']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['summarization']
| false | true | true | 1,654 | false |
# This is a TensorFlow fork of [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6)
### Usage
This checkpoint ships TensorFlow weights and should be loaded with `TFBartForConditionalGeneration.from_pretrained`. See the [BART docs](https://huggingface.co/transformers/model_doc/bart.html?#transformers.BartForConditionalGeneration) for more information.
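A minimal summarization sketch using the TensorFlow classes of 🤗 Transformers; the generation settings shown are typical values for CNN/DailyMail-style summarization, not values confirmed by this card:
```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "philschmid/tf-distilbart-cnn-12-6"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

article = (
    "The tower is 324 metres tall, about the same height as an 81-storey building, "
    "and is the tallest structure in Paris."
)
inputs = tokenizer(article, return_tensors="tf", truncation=True)
summary_ids = model.generate(
    inputs["input_ids"], num_beams=4, min_length=56, max_length=142
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```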
### Metrics for DistilBART models
| Model Name | MM Params | Inference Time (MS) | Speedup | Rouge 2 | Rouge-L |
|:---------------------------|------------:|----------------------:|----------:|----------:|----------:|
| distilbart-xsum-12-1 | 222 | 90 | 2.54 | 18.31 | 33.37 |
| distilbart-xsum-6-6 | 230 | 132 | 1.73 | 20.92 | 35.73 |
| distilbart-xsum-12-3 | 255 | 106 | 2.16 | 21.37 | 36.39 |
| distilbart-xsum-9-6 | 268 | 136 | 1.68 | 21.72 | 36.61 |
| bart-large-xsum (baseline) | 406 | 229 | 1 | 21.85 | 36.50 |
| distilbart-xsum-12-6 | 306 | 137 | 1.68 | 22.12 | 36.99 |
| bart-large-cnn (baseline) | 406 | 381 | 1 | 21.06 | 30.63 |
| distilbart-12-3-cnn | 255 | 214 | 1.78 | 20.57 | 30.00 |
| distilbart-12-6-cnn | 306 | 307 | 1.24 | 21.26 | 30.59 |
| distilbart-6-6-cnn | 230 | 182 | 2.09 | 20.17 | 29.70 |
|
57822c830cdbd96c129cd4c03ca047b6
|
tuni/xlm-roberta-large-xnli-finetuned-mnli-SJP
|
tuni
|
xlm-roberta
| 14 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
mit
| null |
['swiss_judgment_prediction']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,511 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-large-xnli-finetuned-mnli-SJP
This model is a fine-tuned version of [joeddav/xlm-roberta-large-xnli](https://huggingface.co/joeddav/xlm-roberta-large-xnli) on the swiss_judgment_prediction dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3456
- Accuracy: 0.7957
## Model description
More information needed
## Intended uses & limitations
More information needed
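A minimal inference sketch for judgment prediction, assuming the standard 🤗 Transformers text-classification pipeline; the Swiss-Judgment-Prediction task is binary (approval vs. dismissal), but the label names returned here come from the checkpoint config and should be verified:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="tuni/xlm-roberta-large-xnli-finetuned-mnli-SJP",
)
# Facts sections in the dataset are long, so truncate to the model's maximum length
facts = "Der Beschwerdeführer wurde vom Bezirksgericht wegen Diebstahls verurteilt."
print(classifier(facts, truncation=True))
```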
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 5 | 1.8460 | 0.7956 |
| No log | 2.0 | 10 | 1.3456 | 0.7957 |
| No log | 3.0 | 15 | 1.2799 | 0.7957 |
| No log | 4.0 | 20 | 1.2866 | 0.7957 |
| No log | 5.0 | 25 | 1.3162 | 0.7956 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
863e222deba23b5dbf1df194937c6ec2
|
Rocketknight1/europython-imdb-distilbert
|
Rocketknight1
|
distilbert
| 8 | 5 |
transformers
| 0 |
text-classification
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,330 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# europython-imdb-distilbert
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3081
- Train Accuracy: 0.8663
- Validation Loss: 0.2459
- Validation Accuracy: 0.9006
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
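A minimal inference sketch with the TensorFlow classes of 🤗 Transformers; the model name suggests IMDB-style sentiment classification, but the card itself does not document the dataset or label meanings:
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "Rocketknight1/europython-imdb-distilbert"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("A surprisingly moving film with a terrific cast.", return_tensors="tf")
logits = model(**inputs).logits
predicted_class = int(tf.argmax(logits, axis=-1)[0])
print(model.config.id2label[predicted_class])
```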
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3081 | 0.8663 | 0.2459 | 0.9006 | 0 |
### Framework versions
- Transformers 4.21.0.dev0
- TensorFlow 2.9.1
- Datasets 2.3.3.dev0
- Tokenizers 0.11.0
|
1f7be57be443c9aafb5ea712bdf1b29f
|