modelId string | author string | last_modified timestamp[us, tz=UTC] | downloads int64 | likes int64 | library_name string | tags list | pipeline_tag string | createdAt timestamp[us, tz=UTC] | card string
---|---|---|---|---|---|---|---|---|---|
Ahmed-Abousetta/autotrain-abunawaf-cognition-1859363548
|
Ahmed-Abousetta
| 2022-10-24T08:45:39Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"autotrain",
"text-classification",
"unk",
"dataset:Ahmed-Abousetta/autotrain-data-abunawaf-cognition",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-24T08:44:52Z |
---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Ahmed-Abousetta/autotrain-data-abunawaf-cognition
co2_eq_emissions:
emissions: 0.9315924025671088
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1859363548
- CO2 Emissions (in grams): 0.9316
## Validation Metrics
- Loss: 0.392
- Accuracy: 0.837
- Precision: 0.787
- Recall: 0.833
- AUC: 0.900
- F1: 0.810
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Ahmed-Abousetta/autotrain-abunawaf-cognition-1859363548
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-cognition-1859363548", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Ahmed-Abousetta/autotrain-abunawaf-cognition-1859363548", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
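The snippet above stops at the raw model outputs. As a minimal sketch (not part of the original card), the logits can be turned into class probabilities and a predicted label via the checkpoint's `id2label` mapping; the label names depend on how the AutoTrain project was configured:
```python
import torch

# Continue from the snippet above: convert logits to class probabilities
probs = torch.softmax(outputs.logits, dim=-1)[0]
predicted_id = int(probs.argmax())
print(model.config.id2label[predicted_id], float(probs[predicted_id]))
```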
|
doodlevelyn/bert-finetuned-ner
|
doodlevelyn
| 2022-10-24T07:13:43Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-10-22T16:46:17Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Precision: 1.0
- Recall: 1.0
- F1: 1.0
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
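For reference, these values map roughly onto `transformers.TrainingArguments`; this is only a sketch of an equivalent configuration, not the original training script:
```python
from transformers import TrainingArguments

# Sketch of an equivalent configuration (the original training script is not included in this card)
training_args = TrainingArguments(
    output_dir="bert-finetuned-ner",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the library defaults
)
```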
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:---:|:--------:|
| 0.0 | 1.0 | 5280 | 0.0000 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0001 | 2.0 | 10560 | 0.0000 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 3.0 | 15840 | 0.0000 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 4.0 | 21120 | 0.0000 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 5.0 | 26400 | 0.0000 | 1.0 | 1.0 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
haoanh98/mGPT_base
|
haoanh98
| 2022-10-24T06:35:40Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"gpt2",
"feature-extraction",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-10-24T06:01:37Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: mGPT_base
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# mGPT_base
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.23.1
- TensorFlow 2.9.2
- Tokenizers 0.13.1
|
thisisHJLee/wav2vec2-large-xls-r-300m-korean-ws1
|
thisisHJLee
| 2022-10-24T06:17:23Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-10-24T01:36:26Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xls-r-300m-korean-ws1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-korean-ws1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0431
- Cer: 0.0047
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.8176 | 1.0 | 4451 | 0.7022 | 0.2494 |
| 0.3505 | 2.0 | 8902 | 0.1369 | 0.0303 |
| 0.1696 | 3.0 | 13353 | 0.0431 | 0.0047 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
mossfarmer/VRANAK
|
mossfarmer
| 2022-10-24T05:48:07Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-24T05:11:17Z |
---
tags:
- conversational
---
|
kem000123/autotrain-cat_vs_dogs-1858163503
|
kem000123
| 2022-10-24T05:44:23Z | 37 | 2 |
transformers
|
[
"transformers",
"pytorch",
"autotrain",
"vision",
"image-classification",
"dataset:kem000123/autotrain-data-cat_vs_dogs",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-10-24T05:43:29Z |
---
tags:
- autotrain
- vision
- image-classification
datasets:
- kem000123/autotrain-data-cat_vs_dogs
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 0.7950743476524714
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1858163503
- CO2 Emissions (in grams): 0.7951
## Validation Metrics
- Loss: 0.007
- Accuracy: 1.000
- Precision: 1.000
- Recall: 1.000
- AUC: 1.000
- F1: 1.000
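This card does not include a usage snippet. As a hedged sketch, an AutoTrain image-classification checkpoint can normally be queried with the generic `transformers` pipeline; the sample image URL below is taken from the widget configuration above:
```python
from transformers import pipeline

# Sketch: classify one of the widget sample images with this checkpoint
classifier = pipeline("image-classification", model="kem000123/autotrain-cat_vs_dogs-1858163503")
print(classifier("https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg"))
```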
|
teacookies/autotrain-24102022-cert2-1856563478
|
teacookies
| 2022-10-24T04:33:47Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"autotrain",
"token-classification",
"unk",
"dataset:teacookies/autotrain-data-24102022-cert2",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-10-24T04:22:25Z |
---
tags:
- autotrain
- token-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- teacookies/autotrain-data-24102022-cert2
co2_eq_emissions:
emissions: 16.894326665784842
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 1856563478
- CO2 Emissions (in grams): 16.8943
## Validation Metrics
- Loss: 0.004
- Accuracy: 0.999
- Precision: 0.961
- Recall: 0.974
- F1: 0.968
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/teacookies/autotrain-24102022-cert2-1856563478
```
Or Python API:
```python
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("teacookies/autotrain-24102022-cert2-1856563478", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autotrain-24102022-cert2-1856563478", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
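As with the text-classification example earlier, the raw outputs can be decoded into per-token entity labels; a minimal sketch continuing from the snippet above (the label set depends on the AutoTrain project configuration):
```python
import torch

# Continue from the snippet above: assign each token its most likely entity label
predictions = torch.argmax(outputs.logits, dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, pred_id in zip(tokens, predictions.tolist()):
    print(token, model.config.id2label[pred_id])
```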
|
0xrushi/TestPlaygroundSkops
|
0xrushi
| 2022-10-24T03:48:58Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2022-10-16T01:13:19Z |
---
license: mit
---
# Model description 1
[More Information Needed]
## Intended uses & limitations
[More Information Needed]
## Training Procedure
### Hyperparameters
The model was trained with the hyperparameters below.
<details>
<summary> Click to expand </summary>
| Hyperparameter | Value |
|----------------|-------|
| memory | |
| steps | [('transformation', ColumnTransformer(transformers=[('loading_missing_value_imputer',<br /> SimpleImputer(), ['loading']),<br /> ('numerical_missing_value_imputer',<br /> SimpleImputer(),<br /> ['loading', 'measurement_3', 'measurement_4',<br /> 'measurement_5', 'measurement_6',<br /> 'measurement_7', 'measurement_8',<br /> 'measurement_9', 'measurement_10',<br /> 'measurement_11', 'measurement_12',<br /> 'measurement_13', 'measurement_14',<br /> 'measurement_15', 'measurement_16',<br /> 'measurement_17']),<br /> ('attribute_0_encoder', OneHotEncoder(),<br /> ['attribute_0']),<br /> ('attribute_1_encoder', OneHotEncoder(),<br /> ['attribute_1']),<br /> ('product_code_encoder', OneHotEncoder(),<br /> ['product_code'])])), ('model', DecisionTreeClassifier(max_depth=4))] |
| verbose | False |
| transformation | ColumnTransformer(transformers=[('loading_missing_value_imputer',<br /> SimpleImputer(), ['loading']),<br /> ('numerical_missing_value_imputer',<br /> SimpleImputer(),<br /> ['loading', 'measurement_3', 'measurement_4',<br /> 'measurement_5', 'measurement_6',<br /> 'measurement_7', 'measurement_8',<br /> 'measurement_9', 'measurement_10',<br /> 'measurement_11', 'measurement_12',<br /> 'measurement_13', 'measurement_14',<br /> 'measurement_15', 'measurement_16',<br /> 'measurement_17']),<br /> ('attribute_0_encoder', OneHotEncoder(),<br /> ['attribute_0']),<br /> ('attribute_1_encoder', OneHotEncoder(),<br /> ['attribute_1']),<br /> ('product_code_encoder', OneHotEncoder(),<br /> ['product_code'])]) |
| model | DecisionTreeClassifier(max_depth=4) |
| transformation__n_jobs | |
| transformation__remainder | drop |
| transformation__sparse_threshold | 0.3 |
| transformation__transformer_weights | |
| transformation__transformers | [('loading_missing_value_imputer', SimpleImputer(), ['loading']), ('numerical_missing_value_imputer', SimpleImputer(), ['loading', 'measurement_3', 'measurement_4', 'measurement_5', 'measurement_6', 'measurement_7', 'measurement_8', 'measurement_9', 'measurement_10', 'measurement_11', 'measurement_12', 'measurement_13', 'measurement_14', 'measurement_15', 'measurement_16', 'measurement_17']), ('attribute_0_encoder', OneHotEncoder(), ['attribute_0']), ('attribute_1_encoder', OneHotEncoder(), ['attribute_1']), ('product_code_encoder', OneHotEncoder(), ['product_code'])] |
| transformation__verbose | False |
| transformation__verbose_feature_names_out | True |
| transformation__loading_missing_value_imputer | SimpleImputer() |
| transformation__numerical_missing_value_imputer | SimpleImputer() |
| transformation__attribute_0_encoder | OneHotEncoder() |
| transformation__attribute_1_encoder | OneHotEncoder() |
| transformation__product_code_encoder | OneHotEncoder() |
| transformation__loading_missing_value_imputer__add_indicator | False |
| transformation__loading_missing_value_imputer__copy | True |
| transformation__loading_missing_value_imputer__fill_value | |
| transformation__loading_missing_value_imputer__missing_values | nan |
| transformation__loading_missing_value_imputer__strategy | mean |
| transformation__loading_missing_value_imputer__verbose | 0 |
| transformation__numerical_missing_value_imputer__add_indicator | False |
| transformation__numerical_missing_value_imputer__copy | True |
| transformation__numerical_missing_value_imputer__fill_value | |
| transformation__numerical_missing_value_imputer__missing_values | nan |
| transformation__numerical_missing_value_imputer__strategy | mean |
| transformation__numerical_missing_value_imputer__verbose | 0 |
| transformation__attribute_0_encoder__categories | auto |
| transformation__attribute_0_encoder__drop | |
| transformation__attribute_0_encoder__dtype | <class 'numpy.float64'> |
| transformation__attribute_0_encoder__handle_unknown | error |
| transformation__attribute_0_encoder__sparse | True |
| transformation__attribute_1_encoder__categories | auto |
| transformation__attribute_1_encoder__drop | |
| transformation__attribute_1_encoder__dtype | <class 'numpy.float64'> |
| transformation__attribute_1_encoder__handle_unknown | error |
| transformation__attribute_1_encoder__sparse | True |
| transformation__product_code_encoder__categories | auto |
| transformation__product_code_encoder__drop | |
| transformation__product_code_encoder__dtype | <class 'numpy.float64'> |
| transformation__product_code_encoder__handle_unknown | error |
| transformation__product_code_encoder__sparse | True |
| model__ccp_alpha | 0.0 |
| model__class_weight | |
| model__criterion | gini |
| model__max_depth | 4 |
| model__max_features | |
| model__max_leaf_nodes | |
| model__min_impurity_decrease | 0.0 |
| model__min_samples_leaf | 1 |
| model__min_samples_split | 2 |
| model__min_weight_fraction_leaf | 0.0 |
| model__random_state | |
| model__splitter | best |
</details>
### Model Plot
The model plot is below.
`Pipeline(steps=[('transformation', ColumnTransformer(transformers=[('loading_missing_value_imputer', SimpleImputer(), ['loading']), ('numerical_missing_value_imputer', SimpleImputer(), ['loading', 'measurement_3', 'measurement_4', 'measurement_5', 'measurement_6', 'measurement_7', 'measurement_8', 'measurement_9', 'measurement_10', 'measurement_11', 'measurement_12', 'measurement_13', 'measurement_14', 'measurement_15', 'measurement_16', 'measurement_17']), ('attribute_0_encoder', OneHotEncoder(), ['attribute_0']), ('attribute_1_encoder', OneHotEncoder(), ['attribute_1']), ('product_code_encoder', OneHotEncoder(), ['product_code'])])), ('model', DecisionTreeClassifier(max_depth=4))])`
## Evaluation Results
You can find the details about the evaluation process and the evaluation results below.
| Metric | Value |
|----------|---------|
# How to Get Started with the Model
Use the code below to get started with the model.
```python
[More Information Needed]
```
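The code block above was left as a placeholder by the card author. As a rough sketch only, a scikit-learn model hosted on the Hub can usually be downloaded with `huggingface_hub` and loaded with `joblib`; the file name `model.pkl` below is a guess and may not match this repository:
```python
import joblib
from huggingface_hub import hf_hub_download

# Sketch only: the artifact name in this repo may differ from "model.pkl"
path = hf_hub_download(repo_id="0xrushi/TestPlaygroundSkops", filename="model.pkl")
pipeline = joblib.load(path)
# pipeline.predict(X)  # X must contain the columns expected by the ColumnTransformer above
```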
# Model Card Authors
This model card was written by the following authors:
[More Information Needed]
# Model Card Contact
You can contact the model card authors through the following channels:
[More Information Needed]
# Citation
Below you can find information related to citation.
**BibTeX:**
```
# h1
tjos osmda
```
# Model 2 Description (Logistic)
---
license: mit
---
# Model description
[More Information Needed]
## Intended uses & limitations
[More Information Needed]
## Training Procedure
### Hyperparameters
The model was trained with the hyperparameters below.
<details>
<summary> Click to expand </summary>
| Hyperparameter | Value |
|-------------------|-----------|
| C | 1.0 |
| class_weight | |
| dual | False |
| fit_intercept | True |
| intercept_scaling | 1 |
| l1_ratio | |
| max_iter | 100 |
| multi_class | auto |
| n_jobs | |
| penalty | l2 |
| random_state | 0 |
| solver | liblinear |
| tol | 0.0001 |
| verbose | 0 |
| warm_start | False |
</details>
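The table above describes a plain scikit-learn estimator; as a sketch, the same configuration can be re-created directly:
```python
from sklearn.linear_model import LogisticRegression

# Sketch reconstructing the hyperparameters listed above
model = LogisticRegression(
    C=1.0,
    penalty="l2",
    solver="liblinear",
    max_iter=100,
    tol=0.0001,
    random_state=0,
)
```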
### Model Plot
The model plot is below.
`LogisticRegression(random_state=0, solver='liblinear')`
## Evaluation Results
You can find the details about the evaluation process and the evaluation results below.
| Metric | Value |
|----------|---------|
| accuracy | 0.96 |
| f1 score | 0.96 |
# How to Get Started with the Model
Use the code below to get started with the model.
```python
[More Information Needed]
```
# Model Card Authors
This model card was written by the following authors:
[More Information Needed]
# Model Card Contact
You can contact the model card authors through the following channels:
[More Information Needed]
# Citation
Below you can find information related to citation.
**BibTeX:**
```
[More Information Needed]
```
# Additional Content
## confusion_matrix

|
salascorp/distilroberta-base-mrpc-glue-oscar-salas7
|
salascorp
| 2022-10-24T02:49:36Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-24T01:55:00Z |
---
license: apache-2.0
tags:
- text-classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilroberta-base-mrpc-glue-oscar-salas7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-mrpc-glue-oscar-salas7
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the datasetX dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7444
- Accuracy: 0.2143
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cpu
- Datasets 2.6.1
- Tokenizers 0.13.1
|
nickmuchi/setfit-finetuned-financial-text-classification
|
nickmuchi
| 2022-10-24T00:16:02Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-10-23T18:35:23Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# setfit-finetuned-financial-text-classification
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```bash
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('nickmuchi/setfit-finetuned-financial-text-classification')
embeddings = model.encode(sentences)
print(embeddings)
```
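Since this checkpoint is intended for sentence similarity, a short follow-up sketch (not part of the original card) compares the two example embeddings with cosine similarity using the `sentence_transformers.util` helpers:
```python
from sentence_transformers import util

# Continue from the snippet above: cosine similarity between the two example sentences
similarity = util.cos_sim(embeddings[0], embeddings[1])
print(float(similarity))
```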
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=nickmuchi/setfit-finetuned-financial-text-classification)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 188 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 2,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 5.610085660083046e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 188,
"warmup_steps": 19,
"weight_decay": 0.01
}
```
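Put together, a training call with those parameters would look roughly like the sketch below, reusing the `model` object loaded in the usage section; the example pairs are placeholders, since the actual training data is not described in this card:
```python
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, losses

# Placeholder examples: the real training set for this checkpoint is not documented here
train_examples = [
    InputExample(texts=["Shares rallied after earnings", "The stock rose sharply"], label=0.9),
    InputExample(texts=["Shares rallied after earnings", "The factory was closed"], label=0.1),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.CosineSimilarityLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=2,
    warmup_steps=19,
    optimizer_params={"lr": 5.610085660083046e-05},
    weight_decay=0.01,
)
```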
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
Dimitre/ddpm-ema-flowers-64
|
Dimitre
| 2022-10-24T00:10:31Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:huggan/flowers-102-categories",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-10-23T12:27:21Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/flowers-102-categories
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-ema-flowers-64
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/flowers-102-categories` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
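The TODO above comes from the training script template. Given that the repository is tagged `diffusers:DDPMPipeline`, a plausible sketch for sampling an image is:
```python
from diffusers import DDPMPipeline

# Sketch: load the checkpoint and sample a single 64x64 flower image
pipeline = DDPMPipeline.from_pretrained("Dimitre/ddpm-ema-flowers-64")
image = pipeline().images[0]
image.save("flower.png")
```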
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_inv_gamma: None
- ema_inv_gamma: None
- mixed_precision: no
### Training results
📈 [TensorBoard logs](https://huggingface.co/Dimitre/ddpm-ema-flowers-64/tensorboard?#scalars)
|
theojolliffe/bart-large-cnn-finetuned-roundup
|
theojolliffe
| 2022-10-23T23:51:01Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-23T15:16:53Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-finetuned-roundup
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-roundup
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8956
- Rouge1: 58.1914
- Rouge2: 45.822
- Rougel: 49.4407
- Rougelsum: 56.6379
- Gen Len: 142.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| 1.2575 | 1.0 | 795 | 0.9154 | 53.8792 | 34.3203 | 35.8768 | 51.1789 | 142.0 |
| 0.7053 | 2.0 | 1590 | 0.7921 | 54.3918 | 35.3346 | 37.7539 | 51.6989 | 142.0 |
| 0.5379 | 3.0 | 2385 | 0.7566 | 52.1651 | 32.5699 | 36.3105 | 49.3327 | 141.5185 |
| 0.3496 | 4.0 | 3180 | 0.7584 | 54.3258 | 36.403 | 39.6938 | 52.0186 | 142.0 |
| 0.2688 | 5.0 | 3975 | 0.7343 | 55.9101 | 39.0709 | 42.4138 | 53.572 | 141.8333 |
| 0.1815 | 6.0 | 4770 | 0.7924 | 53.9272 | 36.8138 | 40.0614 | 51.7496 | 142.0 |
| 0.1388 | 7.0 | 5565 | 0.7674 | 55.0347 | 38.7978 | 42.0081 | 53.0297 | 142.0 |
| 0.1048 | 8.0 | 6360 | 0.7700 | 55.2993 | 39.4075 | 42.6837 | 53.5179 | 141.9815 |
| 0.0808 | 9.0 | 7155 | 0.7796 | 56.1508 | 40.0863 | 43.2178 | 53.7908 | 142.0 |
| 0.0719 | 10.0 | 7950 | 0.8057 | 56.2302 | 41.3004 | 44.7921 | 54.4304 | 142.0 |
| 0.0503 | 11.0 | 8745 | 0.8259 | 55.7603 | 41.0643 | 44.5518 | 54.2305 | 142.0 |
| 0.0362 | 12.0 | 9540 | 0.8604 | 55.8612 | 41.5984 | 44.444 | 54.2493 | 142.0 |
| 0.0307 | 13.0 | 10335 | 0.8516 | 57.7259 | 44.542 | 47.6724 | 56.0166 | 142.0 |
| 0.0241 | 14.0 | 11130 | 0.8826 | 56.7943 | 43.7139 | 47.2866 | 55.1824 | 142.0 |
| 0.0193 | 15.0 | 11925 | 0.8856 | 57.4135 | 44.3147 | 47.9136 | 55.8843 | 142.0 |
| 0.0154 | 16.0 | 12720 | 0.8956 | 58.1914 | 45.822 | 49.4407 | 56.6379 | 142.0 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
huggingtweets/16pxl
|
huggingtweets
| 2022-10-23T23:23:51Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-23T23:21:33Z |
---
language: en
thumbnail: http://www.huggingtweets.com/16pxl/1666567427101/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1358468632255156224/JtUkil_x_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Jubilee ❣️ 2023 CALENDARS OUT NOW</div>
<div style="text-align: center; font-size: 14px;">@16pxl</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Jubilee ❣️ 2023 CALENDARS OUT NOW.
| Data | Jubilee ❣️ 2023 CALENDARS OUT NOW |
| --- | --- |
| Tweets downloaded | 3229 |
| Retweets | 288 |
| Short tweets | 228 |
| Tweets kept | 2713 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3r6vcjy6/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @16pxl's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2wix5go1) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2wix5go1/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/16pxl')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Solosolos/Fantasy
|
Solosolos
| 2022-10-23T22:33:18Z | 0 | 0 | null |
[
"doi:10.57967/hf/0057",
"region:us"
] | null | 2022-10-23T21:05:51Z |
language:
- "List of ISO 639-1 code for your language"
- lang1
- lang2
thumbnail: "url to a thumbnail used in social sharing"
tags:
- tag1
- tag2
license: "any valid license identifier"
datasets:
- dataset1
- dataset2
metrics:
- metric1
- metric2
|
salascorp/distilroberta-base-mrpc-glue-oscar-salas3
|
salascorp
| 2022-10-23T22:20:24Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-23T22:08:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-mrpc-glue-oscar-salas3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-mrpc-glue-oscar-salas3
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cpu
- Datasets 2.6.1
- Tokenizers 0.13.1
|
rufimelo/Legal-BERTimbau-base
|
rufimelo
| 2022-10-23T22:07:02Z | 1,612 | 14 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"pt",
"dataset:rufimelo/PortugueseLegalSentences-v0",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-29T16:11:40Z |
---
language:
- pt
thumbnail: "Portugues BERT for the Legal Domain"
tags:
- bert
- pytorch
datasets:
- rufimelo/PortugueseLegalSentences-v0
license: "mit"
widget:
- text: "O advogado apresentou [MASK] ao juíz."
---
# Legal_BERTimbau
## Introduction
Legal_BERTimbau Base is a fine-tuned BERT model based on [BERTimbau](https://huggingface.co/neuralmind/bert-base-portuguese-cased) Base.
"BERTimbau Base is a pretrained BERT model for Brazilian Portuguese that achieves state-of-the-art performances on three downstream NLP tasks: Named Entity Recognition, Sentence Textual Similarity and Recognizing Textual Entailment. It is available in two sizes: Base and Large.
For further information or requests, please go to [BERTimbau repository](https://github.com/neuralmind-ai/portuguese-bert/)."
The performance of language models can change drastically when there is a domain shift between training and test data. To create a Portuguese language model adapted to the legal domain, the original BERTimbau model was submitted to a fine-tuning stage in which one "pre-training" epoch was performed over 30,000 Portuguese legal documents available online.
## Available models
| Model | Arch. | #Layers | #Params |
| ---------------------------------------- | ---------- | ------- | ------- |
| `rufimelo/Legal-BERTimbau-base` | BERT-Base | 12 | 110M |
| `rufimelo/Legal-BERTimbau-large` | BERT-Large | 24 | 335M |
## Usage
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("rufimelo/Legal-BERTimbau-base")
model = AutoModelForMaskedLM.from_pretrained("rufimelo/Legal-BERTimbau-base")
```
### Masked language modeling prediction example
```python
from transformers import pipeline
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("rufimelo/Legal-BERTimbau-base")
model = AutoModelForMaskedLM.from_pretrained("rufimelo/Legal-BERTimbau-base")
pipe = pipeline('fill-mask', model=model, tokenizer=tokenizer)
pipe('O advogado apresentou [MASK] para o juíz')
# [{'score': 0.5034703612327576,
#'token': 8190,
#'token_str': 'recurso',
#'sequence': 'O advogado apresentou recurso para o juíz'},
#{'score': 0.07347951829433441,
#'token': 21973,
#'token_str': 'petição',
#'sequence': 'O advogado apresentou petição para o juíz'},
#{'score': 0.05165359005331993,
#'token': 4299,
#'token_str': 'resposta',
#'sequence': 'O advogado apresentou resposta para o juíz'},
#{'score': 0.04611917585134506,
#'token': 5265,
#'token_str': 'exposição',
#'sequence': 'O advogado apresentou exposição para o juíz'},
#{'score': 0.04068068787455559,
#'token': 19737, 'token_str':
#'alegações',
#'sequence': 'O advogado apresentou alegações para o juíz'}]
```
### For BERT embeddings
```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('rufimelo/Legal-BERTimbau-base')
model = AutoModel.from_pretrained('rufimelo/Legal-BERTimbau-base')
input_ids = tokenizer.encode('O advogado apresentou recurso para o juíz', return_tensors='pt')

with torch.no_grad():
    outs = model(input_ids)
    # token embeddings from the last hidden state, excluding [CLS] and [SEP]
    encoded = outs[0][0, 1:-1]
#tensor([[ 0.0328, -0.4292, -0.6230, ..., -0.3048, -0.5674, 0.0157],
#[-0.3569, 0.3326, 0.7013, ..., -0.7778, 0.2646, 1.1310],
#[ 0.3169, 0.4333, 0.2026, ..., 1.0517, -0.1951, 0.7050],
#...,
#[-0.3648, -0.8137, -0.4764, ..., -0.2725, -0.4879, 0.6264],
#[-0.2264, -0.1821, -0.3011, ..., -0.5428, 0.1429, 0.0509],
#[-1.4617, 0.6281, -0.0625, ..., -1.2774, -0.4491, 0.3131]])
```
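To reduce these token embeddings to a single sentence vector, mean pooling over the token dimension is a common choice; a minimal sketch (not part of the original card) continuing from the snippet above:
```python
# Sketch: mean-pool the token embeddings (excluding [CLS] and [SEP]) into one 768-d sentence vector
sentence_embedding = encoded.mean(dim=0)
print(sentence_embedding.shape)  # torch.Size([768])
```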
## Citation
If you use this work, please cite BERTimbau's work:
```bibtex
@inproceedings{souza2020bertimbau,
author = {F{\'a}bio Souza and
Rodrigo Nogueira and
Roberto Lotufo},
title = {{BERT}imbau: pretrained {BERT} models for {B}razilian {P}ortuguese},
booktitle = {9th Brazilian Conference on Intelligent Systems, {BRACIS}, Rio Grande do Sul, Brazil, October 20-23 (to appear)},
year = {2020}
}
```
|
rufimelo/Legal-BERTimbau-large
|
rufimelo
| 2022-10-23T22:05:10Z | 61 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"pt",
"dataset:rufimelo/PortugueseLegalSentences-v0",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-24T22:29:50Z |
---
language:
- pt
thumbnail: "Portugues BERT for the Legal Domain"
tags:
- bert
- pytorch
datasets:
- rufimelo/PortugueseLegalSentences-v0
license: "mit"
widget:
- text: "O advogado apresentou [MASK] ao juíz."
---
# Legal_BERTimbau
## Introduction
Legal_BERTimbau Large is a fine-tuned BERT model based on [BERTimbau](https://huggingface.co/neuralmind/bert-base-portuguese-cased) Large.
"BERTimbau Base is a pretrained BERT model for Brazilian Portuguese that achieves state-of-the-art performances on three downstream NLP tasks: Named Entity Recognition, Sentence Textual Similarity and Recognizing Textual Entailment. It is available in two sizes: Base and Large.
For further information or requests, please go to [BERTimbau repository](https://github.com/neuralmind-ai/portuguese-bert/)."
The performance of language models can change drastically when there is a domain shift between training and test data. To create a Portuguese language model adapted to the legal domain, the original BERTimbau model was submitted to a fine-tuning stage in which one "pre-training" epoch was performed over 30,000 Portuguese legal documents available online (lr: 1e-5).
## Available models
| Model | Arch. | #Layers | #Params |
| ---------------------------------------- | ---------- | ------- | ------- |
| `rufimelo/Legal-BERTimbau-base` | BERT-Base | 12 | 110M |
| `rufimelo/Legal-BERTimbau-large` | BERT-Large | 24 | 335M |
## Usage
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("rufimelo/Legal-BERTimbau-large")
model = AutoModelForMaskedLM.from_pretrained("rufimelo/Legal-BERTimbau-large")
```
### Masked language modeling prediction example
```python
from transformers import pipeline
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("rufimelo/Legal-BERTimbau-large")
model = AutoModelForMaskedLM.from_pretrained("rufimelo/Legal-BERTimbau-large")
pipe = pipeline('fill-mask', model=model, tokenizer=tokenizer)
pipe('O advogado apresentou [MASK] para o juíz')
# [{'score': 0.5034703612327576,
#'token': 8190,
#'token_str': 'recurso',
#'sequence': 'O advogado apresentou recurso para o juíz'},
#{'score': 0.07347951829433441,
#'token': 21973,
#'token_str': 'petição',
#'sequence': 'O advogado apresentou petição para o juíz'},
#{'score': 0.05165359005331993,
#'token': 4299,
#'token_str': 'resposta',
#'sequence': 'O advogado apresentou resposta para o juíz'},
#{'score': 0.04611917585134506,
#'token': 5265,
#'token_str': 'exposição',
#'sequence': 'O advogado apresentou exposição para o juíz'},
#{'score': 0.04068068787455559,
#'token': 19737, 'token_str':
#'alegações',
#'sequence': 'O advogado apresentou alegações para o juíz'}]
```
### For BERT embeddings
```python
import torch
from transformers import AutoModel, AutoTokenizer  # the tokenizer is needed to encode the input below
tokenizer = AutoTokenizer.from_pretrained('rufimelo/Legal-BERTimbau-large')
model = AutoModel.from_pretrained('rufimelo/Legal-BERTimbau-large')
input_ids = tokenizer.encode('O advogado apresentou recurso para o juíz', return_tensors='pt')
with torch.no_grad():
outs = model(input_ids)
encoded = outs[0][0, 1:-1]
#tensor([[ 0.0328, -0.4292, -0.6230, ..., -0.3048, -0.5674, 0.0157],
#[-0.3569, 0.3326, 0.7013, ..., -0.7778, 0.2646, 1.1310],
#[ 0.3169, 0.4333, 0.2026, ..., 1.0517, -0.1951, 0.7050],
#...,
#[-0.3648, -0.8137, -0.4764, ..., -0.2725, -0.4879, 0.6264],
#[-0.2264, -0.1821, -0.3011, ..., -0.5428, 0.1429, 0.0509],
#[-1.4617, 0.6281, -0.0625, ..., -1.2774, -0.4491, 0.3131]])
```
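To obtain a single sentence vector from these token embeddings, one common approach is mean pooling. The following is a minimal sketch (not part of the original model release) that continues from the snippet above:
```python
# Average the token embeddings (the special tokens were already sliced off above).
sentence_embedding = encoded.mean(dim=0)
print(sentence_embedding.shape)  # torch.Size([1024]) for the large model
```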
## Citation
If you use this work, please cite BERTimbau's work:
```bibtex
@inproceedings{souza2020bertimbau,
author = {F{\'a}bio Souza and
Rodrigo Nogueira and
Roberto Lotufo},
title = {{BERT}imbau: pretrained {BERT} models for {B}razilian {P}ortuguese},
booktitle = {9th Brazilian Conference on Intelligent Systems, {BRACIS}, Rio Grande do Sul, Brazil, October 20-23 (to appear)},
year = {2020}
}
```
|
Yuelin/bert-finetuned-ner
|
Yuelin
| 2022-10-23T20:30:31Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-10-22T21:38:15Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: train
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9355853618148701
- name: Recall
type: recall
value: 0.9508582968697409
- name: F1
type: f1
value: 0.9431600033386196
- name: Accuracy
type: accuracy
value: 0.9870636368988049
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0598
- Precision: 0.9356
- Recall: 0.9509
- F1: 0.9432
- Accuracy: 0.9871
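For illustration, a minimal usage sketch with the token-classification pipeline (not part of the original card; the example sentence and aggregation strategy are arbitrary choices):
```python
from transformers import pipeline

ner = pipeline("token-classification", model="Yuelin/bert-finetuned-ner", aggregation_strategy="simple")
print(ner("Hugging Face is based in New York City."))
```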
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0861 | 1.0 | 1756 | 0.0653 | 0.9138 | 0.9334 | 0.9235 | 0.9825 |
| 0.0354 | 2.0 | 3512 | 0.0589 | 0.9312 | 0.9497 | 0.9403 | 0.9866 |
| 0.0165 | 3.0 | 5268 | 0.0598 | 0.9356 | 0.9509 | 0.9432 | 0.9871 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
ViktorDo/SciBERT-POWO_Growth_Form_Finetuned
|
ViktorDo
| 2022-10-23T19:23:01Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-23T17:45:10Z |
---
tags:
- generated_from_trainer
model-index:
- name: SciBERT-POWO_Growth_Form_Finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SciBERT-POWO_Growth_Form_Finetuned
This model is a fine-tuned version of [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2566
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.2707 | 1.0 | 2160 | 0.2636 |
| 0.2385 | 2.0 | 4320 | 0.2418 |
| 0.2086 | 3.0 | 6480 | 0.2566 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
mrmoor/cti-bert-ner
|
mrmoor
| 2022-10-23T19:17:18Z | 28 | 1 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"bert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-10-23T18:33:27Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: mrmoor/cti-bert-ner
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# mrmoor/cti-bert-ner
This model is a fine-tuned version of [mrmoor/cti-bert-mlm](https://huggingface.co/mrmoor/cti-bert-mlm) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1491
- Validation Loss: 0.3715
- Epoch: 6
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 82800, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.6883 | 0.5161 | 0 |
| 0.4567 | 0.4283 | 1 |
| 0.3420 | 0.3810 | 2 |
| 0.2688 | 0.3845 | 3 |
| 0.2144 | 0.3669 | 4 |
| 0.1788 | 0.3881 | 5 |
| 0.1491 | 0.3715 | 6 |
### Framework versions
- Transformers 4.23.1
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.1
|
luisespinosa/definition-modeling-v2
|
luisespinosa
| 2022-10-23T19:15:08Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-23T17:23:22Z |
This version was trained for 3 epochs on the full dataset without wikt & wn.
|
huggingtweets/o91_bot
|
huggingtweets
| 2022-10-23T18:25:07Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-23T18:21:28Z |
---
language: en
thumbnail: http://www.huggingtweets.com/o91_bot/1666549473734/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1544382829961805825/Piup4HJT_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Frei Bot</div>
<div style="text-align: center; font-size: 14px;">@o91_bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Frei Bot.
| Data | Frei Bot |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 11 |
| Short tweets | 338 |
| Tweets kept | 2901 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3hnd8n8j/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @o91_bot's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/28wc351p) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/28wc351p/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/o91_bot')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
NikitaBaramiia/q-Taxi-v3
|
NikitaBaramiia
| 2022-10-23T18:07:23Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-23T17:48:05Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
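# Note: `load_from_hub` and `evaluate_agent` are helper functions defined in the
# accompanying notebook (e.g. the Hugging Face Deep RL course); `gym` must be imported separately.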
model = load_from_hub(repo_id="NikitaBaramiia/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
NikitaBaramiia/q-FrozenLake-v1-4x4-noSlippery
|
NikitaBaramiia
| 2022-10-23T18:04:10Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-23T17:45:53Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
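# Note: `load_from_hub` and `evaluate_agent` are helper functions defined in the
# accompanying notebook (e.g. the Hugging Face Deep RL course); `gym` must be imported separately.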
model = load_from_hub(repo_id="NikitaBaramiia/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
pepa/roberta-small-fever
|
pepa
| 2022-10-23T17:53:38Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:copenlu/fever_gold_evidence",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-23T17:27:02Z |
---
tags:
- generated_from_trainer
model-index:
- name: roberta-small-fever
results: []
datasets:
- copenlu/fever_gold_evidence
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-small-fever
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.6096
- eval_p: 0.8179
- eval_r: 0.8110
- eval_f1: 0.8104
- eval_runtime: 36.258
- eval_samples_per_second: 518.644
- eval_steps_per_second: 64.841
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 4
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.12.1
|
sd-concepts-library/xioboma
|
sd-concepts-library
| 2022-10-23T17:51:13Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2022-10-23T17:51:03Z |
---
license: mit
---
### xioboma on Stable Diffusion
This is the `<xi-obama>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:




|
patrickvonplaten/carol_model
|
patrickvonplaten
| 2022-10-23T17:49:06Z | 0 | 1 | null |
[
"license:mit",
"region:us"
] | null | 2022-10-23T17:56:14Z |
---
license: mit
---
### Carol on Stable Diffusion
This is the `<carol>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`.
|
valhalla/SwinIR-real-sr-L-x4-GAN
|
valhalla
| 2022-10-23T17:44:53Z | 1 | 2 |
transformers
|
[
"transformers",
"jax",
"swin-ir",
"region:us"
] | null | 2022-10-23T15:43:39Z |
---
tags:
- swin-ir
inference: false
---
|
valhalla/SwinIR-real-sr-M-x4-PSNR
|
valhalla
| 2022-10-23T17:44:14Z | 1 | 0 |
transformers
|
[
"transformers",
"jax",
"swin-ir",
"region:us"
] | null | 2022-10-23T15:44:44Z |
---
tags:
- swin-ir
inference: false
---
|
srSergio/bakerzduzen-artstyle
|
srSergio
| 2022-10-23T17:33:03Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2022-10-23T17:33:03Z |
---
license: creativeml-openrail-m
---
|
pepa/bigbird-roberta-base-snli
|
pepa
| 2022-10-23T17:11:57Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"big_bird",
"text-classification",
"generated_from_trainer",
"dataset:snli",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-23T17:11:06Z |
---
tags:
- generated_from_trainer
datasets:
- snli
model-index:
- name: bigbird-roberta-base-snli
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bigbird-roberta-base-snli
This model was trained from scratch on the snli dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2738
- eval_p: 0.9034
- eval_r: 0.9033
- eval_f1: 0.9033
- eval_runtime: 10.9262
- eval_samples_per_second: 899.126
- eval_steps_per_second: 56.195
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 4
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.12.1
|
pepa/deberta-v3-base-snli
|
pepa
| 2022-10-23T17:10:14Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"dataset:snli",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-23T17:09:12Z |
---
tags:
- generated_from_trainer
datasets:
- snli
model-index:
- name: deberta-v3-base-snli
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-base-snli
This model was trained from scratch on the snli dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2516
- eval_p: 0.9171
- eval_r: 0.9170
- eval_f1: 0.9170
- eval_runtime: 13.4107
- eval_samples_per_second: 732.551
- eval_steps_per_second: 45.784
- step: 0
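As an illustration (not part of the original card), a sketch of scoring a premise/hypothesis pair; the mapping from class indices to SNLI labels is an assumption and should be checked against the model's `config.json`:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("pepa/deberta-v3-base-snli")
model = AutoModelForSequenceClassification.from_pretrained("pepa/deberta-v3-base-snli")

inputs = tokenizer("A man is playing a guitar.", "A person is making music.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # class order (entailment/neutral/contradiction) is assumed; see id2label in config.json
```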
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 4
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.12.1
|
hieuit7/wav2vec2-common_voice-vi-demo
|
hieuit7
| 2022-10-23T17:04:28Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"vi",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-10-23T15:48:50Z |
---
language:
- vi
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-common_voice-vi-demo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-common_voice-vi-demo
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - VI dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4768
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| No log | 7.67 | 100 | 5.9657 | 1.0 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu116
- Datasets 2.6.1
- Tokenizers 0.13.1
|
thothai/turkce-kufur-tespiti
|
thothai
| 2022-10-23T16:55:48Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"license:afl-3.0",
"endpoints_compatible",
"region:us"
] | null | 2022-10-23T16:45:09Z |
---
license: afl-3.0
---
# Thoth Ai was built to detect insults and profanity in Turkish. It may be used in academic projects provided it is cited as a source.
## Validation Metrics
- Loss: 0.230
- Accuracy: 0.936
- Macro F1: 0.927
- Micro F1: 0.936
- Weighted F1: 0.936
- Macro Precision: 0.929
- Micro Precision: 0.936
- Weighted Precision: 0.936
- Macro Recall: 0.925
- Micro Recall: 0.936
- Weighted Recall: 0.936
## Usage
You can use this model with the Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("thothai/turkce-kufur-tespiti", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("thothai/turkce-kufur-tespiti", use_auth_token=True)
inputs = tokenizer("Merhaba", return_tensors="pt")
outputs = model(**inputs)
```
|
k4tel/bert-geolocation-prediction
|
k4tel
| 2022-10-23T16:21:11Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2022-10-23T13:10:50Z |
---
tags:
- generated_from_trainer
model-index:
- name: bert-geolocation-prediction
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-geolocation-prediction
This model is a fine-tuned version of an unspecified base model on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.23.1
- Pytorch 1.11.0+cu102
- Datasets 2.1.0
- Tokenizers 0.12.1
|
situlla/ppo-LunarLander-v2
|
situlla
| 2022-10-23T16:00:09Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-20T12:49:56Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 292.79 +/- 16.63
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repository's file list for the actual name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename below is assumed; adjust it to the checkpoint actually stored in the repo.
checkpoint = load_from_hub(repo_id="situlla/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
ddebnath/layoutlmv3-finetuned-cord_100
|
ddebnath
| 2022-10-23T15:37:39Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"dataset:cord-layoutlmv3",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-10-23T14:42:28Z |
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- cord-layoutlmv3
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutlmv3-finetuned-cord_100
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: cord-layoutlmv3
type: cord-layoutlmv3
config: cord
split: train
args: cord
metrics:
- name: Precision
type: precision
value: 0.9485842026825634
- name: Recall
type: recall
value: 0.9528443113772455
- name: F1
type: f1
value: 0.9507094846900671
- name: Accuracy
type: accuracy
value: 0.9592529711375212
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv3-finetuned-cord_100
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the cord-layoutlmv3 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1978
- Precision: 0.9486
- Recall: 0.9528
- F1: 0.9507
- Accuracy: 0.9593
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.56 | 250 | 0.9543 | 0.7832 | 0.8166 | 0.7996 | 0.8226 |
| 1.3644 | 3.12 | 500 | 0.5338 | 0.8369 | 0.8683 | 0.8523 | 0.8824 |
| 1.3644 | 4.69 | 750 | 0.3658 | 0.8840 | 0.9072 | 0.8955 | 0.9232 |
| 0.3802 | 6.25 | 1000 | 0.3019 | 0.9156 | 0.9251 | 0.9203 | 0.9334 |
| 0.3802 | 7.81 | 1250 | 0.2833 | 0.9094 | 0.9237 | 0.9165 | 0.9346 |
| 0.2061 | 9.38 | 1500 | 0.2241 | 0.9377 | 0.9469 | 0.9423 | 0.9525 |
| 0.2061 | 10.94 | 1750 | 0.2282 | 0.9304 | 0.9409 | 0.9356 | 0.9474 |
| 0.1416 | 12.5 | 2000 | 0.2017 | 0.9509 | 0.9566 | 0.9537 | 0.9610 |
| 0.1416 | 14.06 | 2250 | 0.2006 | 0.9472 | 0.9536 | 0.9504 | 0.9614 |
| 0.1056 | 15.62 | 2500 | 0.1978 | 0.9486 | 0.9528 | 0.9507 | 0.9593 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
Mhd/q-FrozenLake-v1-4x4-noSlippery
|
Mhd
| 2022-10-23T15:21:59Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-23T15:21:53Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
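# Note: `load_from_hub` and `evaluate_agent` are helper functions defined in the
# accompanying notebook (e.g. the Hugging Face Deep RL course); `gym` must be imported separately.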
model = load_from_hub(repo_id="Mhd/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
adithya12/monkeypox-model-lin
|
adithya12
| 2022-10-23T13:45:38Z | 0 | 0 |
keras
|
[
"keras",
"tf-keras",
"region:us"
] | null | 2022-10-23T13:44:22Z |
---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| learning_rate | 0.0010000000474974513 |
| decay | 0.0 |
| beta_1 | 0.8999999761581421 |
| beta_2 | 0.9990000128746033 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
## Model Plot
<details>
<summary>View Model Plot</summary>

</details>
|
k4tel/bert-multilingial-geolocation-prediction
|
k4tel
| 2022-10-23T12:55:25Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-10-22T12:58:09Z |
---
tags:
- generated_from_trainer
model-index:
- name: bert-multilingial-geolocation-prediction
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-multilingial-geolocation-prediction
This model is a fine-tuned version of an unspecified base model on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.23.1
- Pytorch 1.11.0+cu102
- Datasets 2.1.0
- Tokenizers 0.12.1
|
teacookies/autotrain-231022022-cert4-1847463269
|
teacookies
| 2022-10-23T10:35:22Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"autotrain",
"token-classification",
"unk",
"dataset:teacookies/autotrain-data-231022022-cert4",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-10-23T10:24:52Z |
---
tags:
- autotrain
- token-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- teacookies/autotrain-data-231022022-cert4
co2_eq_emissions:
emissions: 17.781243387408683
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 1847463269
- CO2 Emissions (in grams): 17.7812
## Validation Metrics
- Loss: 0.004
- Accuracy: 0.999
- Precision: 0.955
- Recall: 0.969
- F1: 0.962
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/teacookies/autotrain-231022022-cert4-1847463269
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("teacookies/autotrain-231022022-cert4-1847463269", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autotrain-231022022-cert4-1847463269", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
XaviXva/distilbert-base-uncased-finetuned-emotion
|
XaviXva
| 2022-10-23T08:38:15Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-23T08:04:25Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9275
- name: F1
type: f1
value: 0.9273096319590406
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2179
- Accuracy: 0.9275
- F1: 0.9273
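For illustration, a minimal usage sketch with the text-classification pipeline (not part of the original card; the example input is arbitrary):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="XaviXva/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't wait to see you again!"))
```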
## Model description
More information needed
## Intended uses & limitations
This is only a test to get started with NLP and transformers. Just for fun!
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8479 | 1.0 | 250 | 0.3281 | 0.894 | 0.8887 |
| 0.254 | 2.0 | 500 | 0.2179 | 0.9275 | 0.9273 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
sd-concepts-library/gta5-artwork
|
sd-concepts-library
| 2022-10-23T03:32:51Z | 0 | 31 | null |
[
"license:mit",
"region:us"
] | null | 2022-10-23T03:32:39Z |
---
license: mit
---
### GTA5 Artwork on Stable Diffusion
This is the `<gta5-artwork>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:














|
and111/bert_base_uncased_for_pretraining
|
and111
| 2022-10-22T21:25:11Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"pretraining",
"endpoints_compatible",
"region:us"
] | null | 2022-10-22T17:56:03Z |
This is the https://huggingface.co/bert-base-uncased model pre-trained further on the dataset https://huggingface.co/datasets/and111/bert_pretrain_phase2 until the loss reached 1.96.
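A loading sketch, not from the original description: the `BertForPreTraining` head is inferred from the repository's `bert`/`pretraining` tags, and the original `bert-base-uncased` tokenizer is assumed to apply.
```python
from transformers import AutoTokenizer, BertForPreTraining

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # assumed, since the checkpoint continues bert-base-uncased
model = BertForPreTraining.from_pretrained("and111/bert_base_uncased_for_pretraining")
outputs = model(**tokenizer("Hello world", return_tensors="pt"))
```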
|
brikwerk/image-difference-segmentation
|
brikwerk
| 2022-10-22T20:24:18Z | 0 | 2 | null |
[
"binary_segmentation",
"image_differences",
"license:mit",
"region:us"
] | null | 2022-10-22T19:52:36Z |
---
tags:
- binary_segmentation
- image_differences
license: "mit"
---
# Image Difference Segmentation
For the main repository and code, please refer to the [GitHub Repo](https://github.com/Brikwerk/image-difference-segmentation).
This project enables the creation of large binary segmentation datasets through the use of image differences. Certain domains, such as comic books or manga, lend themselves particularly well to the proposed approach. Creating a dataset and training a segmentation model involves two manual steps (outside of the code in this repository):
1. Finding and sorting suitable data. Ideally, your data should have two or more classes wherein the only difference between the classes should be the subject that is to be segmented. An example would be an English page from a comic and a French page from the same comic.
2. Segmentation masks must be manually created for a small number of image differences. Using a pretrained DiffNet requires only 20-50 new masks. Re-training DiffNet from scratch requires 100-200 masks. For quickly generating binary segmentation masks, [simple-masker](https://github.com/Brikwerk/simple-masker) was written/used.
## Prerequisites
The following must be on your system:
- Python 3.6+
- An accompanying Pip installation
- Python and Pip must be accessible from the command line
- An NVIDIA GPU that is CUDA-capable (6GB+ of VRAM likely needed)
## Using a Pretrained Model
### Downloading the Weights File
Weights for this project are hosted at [HuggingFace](https://huggingface.co/brikwerk/image-difference-segmentation) under the `weights` directory. Currently, a DiffNet instance trained on text differences is provided. To use this model, download it and move it to the `weights` directory in your local copy of this repository.
### Using Pretrained Weights
Pretrained weights can be used in the `batch_process.py` file and the `evaluate.py` file. For both files, specify the path to your weights file using the `--weights_path` CLI argument.
## License
MIT
|
theodotus/stt_uk_squeezeformer_ctc_sm
|
theodotus
| 2022-10-22T19:19:20Z | 9 | 2 |
nemo
|
[
"nemo",
"automatic-speech-recognition",
"uk",
"dataset:mozilla-foundation/common_voice_10_0",
"dataset:Yehor/voa-uk-transcriptions",
"license:bsd-3-clause",
"model-index",
"region:us"
] |
automatic-speech-recognition
| 2022-09-24T08:43:42Z |
---
language:
- uk
library_name: nemo
datasets:
- mozilla-foundation/common_voice_10_0
- Yehor/voa-uk-transcriptions
tags:
- automatic-speech-recognition
model-index:
- name: stt_uk_squeezeformer_ctc_sm
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Mozilla Common Voice 10.0
type: mozilla-foundation/common_voice_10_0
config: clean
split: test
args:
language: uk
metrics:
- name: Test WER
type: wer
value: 7.557
license: bsd-3-clause
---
# Squeezeformer-CTC SM (uk-UA)
<style>
img {
display: inline;
}
</style>
| [](#model-architecture)
| [](#model-architecture)
| [](#datasets) |
|
weicap/eee
|
weicap
| 2022-10-22T18:12:18Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-08T02:34:13Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: eee
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eee
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7548
- Accuracy: 0.8162
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6014 | 1.0 | 154 | 0.5832 | 0.7080 |
| 0.4314 | 2.0 | 308 | 0.5388 | 0.7956 |
| 0.38 | 3.0 | 462 | 0.4447 | 0.7518 |
| 0.0704 | 4.0 | 616 | 0.7324 | 0.8175 |
| 0.015 | 5.0 | 770 | 0.8301 | 0.8394 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
SergioVillanueva/autotrain-person-intruder-classification-1840363138
|
SergioVillanueva
| 2022-10-22T15:13:21Z | 36 | 0 |
transformers
|
[
"transformers",
"pytorch",
"autotrain",
"vision",
"image-classification",
"dataset:SergioVillanueva/autotrain-data-person-intruder-classification",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-10-22T15:12:43Z |
---
tags:
- autotrain
- vision
- image-classification
datasets:
- SergioVillanueva/autotrain-data-person-intruder-classification
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 0.5267790340228428
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1840363138
- CO2 Emissions (in grams): 0.5268
## Validation Metrics
- Loss: 0.464
- Accuracy: 0.818
- Precision: 0.778
- Recall: 1.000
- AUC: 1.000
- F1: 0.875
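A usage sketch (not part of the original card), assuming the standard `transformers` image-classification interface produced by AutoTrain; the image path is a placeholder:
```python
import torch
from PIL import Image
from transformers import AutoFeatureExtractor, AutoModelForImageClassification

repo = "SergioVillanueva/autotrain-person-intruder-classification-1840363138"
extractor = AutoFeatureExtractor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

image = Image.open("example.jpg")  # placeholder path
inputs = extractor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```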
|
anish-shilpakar/asr
|
anish-shilpakar
| 2022-10-22T14:14:48Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-10-22T06:46:37Z |
Automatic Nepali Speech Recognition
|
wd255/ddpm-butterflies-128
|
wd255
| 2022-10-22T12:53:19Z | 2 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-10-22T06:42:27Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# A minimal sketch (not from the original card): load the pipeline and sample one image.
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("wd255/ddpm-butterflies-128")
image = pipeline().images[0]
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
The model was trained on the `huggan/smithsonian_butterflies_subset` dataset (see the model description above).
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/wd255/ddpm-butterflies-128/tensorboard?#scalars)
|
sihyun/myfirst
|
sihyun
| 2022-10-22T10:43:23Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2022-10-22T10:43:23Z |
---
license: bigscience-openrail-m
---
|
darshana1406/xlm-roberta-base-finetuned-squad
|
darshana1406
| 2022-10-22T10:27:26Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-10-22T07:46:17Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: xlm-roberta-base-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-squad
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9840
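For illustration, a minimal question-answering sketch (not part of the original card; the question and context are arbitrary):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="darshana1406/xlm-roberta-base-finetuned-squad")
result = qa(question="Where do penguins live?",
            context="Penguins live almost exclusively in the Southern Hemisphere.")
print(result["answer"])
```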
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0917 | 1.0 | 5600 | 0.9840 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
tomthekkan/mt5-small-finetuned-amazon-en-es
|
tomthekkan
| 2022-10-22T10:08:57Z | 9 | 0 |
transformers
|
[
"transformers",
"tf",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-22T09:13:13Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: tomthekkan/mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# tomthekkan/mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.1138
- Validation Loss: 3.3816
- Epoch: 7
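A generation sketch (not part of the original card). The TensorFlow classes are used because the repository tags indicate TF weights; the example input is arbitrary:
```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

repo = "tomthekkan/mt5-small-finetuned-amazon-en-es"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForSeq2SeqLM.from_pretrained(repo)

inputs = tokenizer("I loved this book, the plot twists kept me reading all night.", return_tensors="tf")
summary_ids = model.generate(**inputs, max_length=32)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```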
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 9672, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 9.9822 | 4.2802 | 0 |
| 5.9654 | 3.7811 | 1 |
| 5.2343 | 3.6557 | 2 |
| 4.8190 | 3.5433 | 3 |
| 4.5149 | 3.4695 | 4 |
| 4.3105 | 3.4202 | 5 |
| 4.1907 | 3.3909 | 6 |
| 4.1138 | 3.3816 | 7 |
### Framework versions
- Transformers 4.23.1
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.1
|
bchaipats/distilbert-base-uncased-finetuned-ner
|
bchaipats
| 2022-10-22T09:36:42Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-10-22T09:10:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: train
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9247846255798542
- name: Recall
type: recall
value: 0.9366819554760041
- name: F1
type: f1
value: 0.9306952703829268
- name: Accuracy
type: accuracy
value: 0.9834622777892513
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0627
- Precision: 0.9248
- Recall: 0.9367
- F1: 0.9307
- Accuracy: 0.9835
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.245 | 1.0 | 878 | 0.0708 | 0.9130 | 0.9196 | 0.9163 | 0.9810 |
| 0.0538 | 2.0 | 1756 | 0.0636 | 0.9220 | 0.9350 | 0.9285 | 0.9827 |
| 0.0297 | 3.0 | 2634 | 0.0627 | 0.9248 | 0.9367 | 0.9307 | 0.9835 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1
- Datasets 2.6.0
- Tokenizers 0.13.1
|
huggingtweets/ouvessvit
|
huggingtweets
| 2022-10-22T09:34:51Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-22T09:33:50Z |
---
language: en
thumbnail: http://www.huggingtweets.com/ouvessvit/1666431286897/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1539686183927795712/_V9skTmk_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Natalie Godec 🇺🇦</div>
<div style="text-align: center; font-size: 14px;">@ouvessvit</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Natalie Godec 🇺🇦.
| Data | Natalie Godec 🇺🇦 |
| --- | --- |
| Tweets downloaded | 1043 |
| Retweets | 74 |
| Short tweets | 83 |
| Tweets kept | 886 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2yoysr8v/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ouvessvit's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3q5y5xzk) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3q5y5xzk/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ouvessvit')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Gatozu35/stable-diffusion-savedmodel
|
Gatozu35
| 2022-10-22T09:01:57Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2022-10-22T09:01:57Z |
---
license: creativeml-openrail-m
---
|
Nobody138/xlm-roberta-base-finetuned-panx-all
|
Nobody138
| 2022-10-22T08:30:29Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-10-22T07:58:56Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1745
- F1: 0.8505
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3055 | 1.0 | 835 | 0.1842 | 0.8099 |
| 0.1561 | 2.0 | 1670 | 0.1711 | 0.8452 |
| 0.1016 | 3.0 | 2505 | 0.1745 | 0.8505 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
somemusicnerdwoops/DialoGPT-distilgpt2-sonicfandub
|
somemusicnerdwoops
| 2022-10-22T08:06:05Z | 3 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-22T07:30:03Z |
---
tags:
- conversational
- text-generation
---
|
Nobody138/xlm-roberta-base-finetuned-panx-en
|
Nobody138
| 2022-10-22T07:58:41Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-10-22T07:40:05Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.en
split: train
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.6886160714285715
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4043
- F1: 0.6886
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1347 | 1.0 | 50 | 0.5771 | 0.4880 |
| 0.5066 | 2.0 | 100 | 0.4209 | 0.6582 |
| 0.3631 | 3.0 | 150 | 0.4043 | 0.6886 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
Harmony21/Corder
|
Harmony21
| 2022-10-22T07:47:01Z | 0 | 0 | null |
[
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2022-10-22T07:47:01Z |
---
license: bigscience-bloom-rail-1.0
---
|
waifu-research-department/senko
|
waifu-research-department
| 2022-10-22T07:28:55Z | 0 | 4 | null |
[
"license:mit",
"region:us"
] | null | 2022-10-14T15:51:45Z |
---
license: mit
---
# Description
Trainer: **JawGBoi**
Senko-san from Sewayaki Kitsune no Senko-san
[Model download](https://huggingface.co/waifu-research-department/senko/blob/main/Senko_V1_training_images_3600_max_training_steps_Senko_token_Anime_Girl_class_word.ckpt)
# Training
> Senko: 30 images<br>
> Regularisation: 126 images<br>
> Steps: 3600<br>
> Model Used: Waifu Diffusion 1.3<br>
> Keyword: Senko (Use this in the prompt)<br>
> Class Phrase: Anime_Girl (Also use this in the prompt!)
# Sample Prompt
> **Prompt:** Senko Anime_Girl 1girl, ((waving hello)), smiling, high quality, hires, sharp focus, sharp image<br>
> **Negative Prompt:** Low quality, blur, blurry, JPEG artefacts, out of frame, head out of frame, bad anatomy, disfigured, deformed, malformed, mutant, gross, disgusting, poorly drawn, extra limbs, extra fingers, missing limbs, four fingers, three fingers




|
Nobody138/xlm-roberta-base-finetuned-panx-fr
|
Nobody138
| 2022-10-22T07:12:34Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-10-22T06:51:51Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.fr
split: train
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8346456692913387
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2763
- F1: 0.8346
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5779 | 1.0 | 191 | 0.3701 | 0.7701 |
| 0.2735 | 2.0 | 382 | 0.2908 | 0.8254 |
| 0.1769 | 3.0 | 573 | 0.2763 | 0.8346 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
Iris21/ai
|
Iris21
| 2022-10-22T06:31:13Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-10-17T14:31:24Z |
## 1 - Environment setup
### 1.0 Check the GPU
!nvidia-smi -L
### 1.1 Download and install dependencies
setup miniconda
import sys
!wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
!chmod +x Miniconda3-latest-Linux-x86_64.sh
!bash ./Miniconda3-latest-Linux-x86_64.sh -b -f -p /usr/local
sys.path.append('/usr/local/lib/python3.7/site-packages/')
!rm Miniconda3-latest-Linux-x86_64.sh
### 1.2 Set up the environment
Setup environment, Gfpgan and Real-ESRGAN. Takes about 5-6 minutes
#@markdown ### Set up conda environment - Takes a while
!conda env update -n base -f /content/stable-diffusion/environment.yaml
### 1.3 Set up GFPGAN and ESRGAN
#@markdown ### Build upscalers support
#@markdown **GFPGAN** Automatically correct distorted faces with a built-in GFPGAN option, fixes them in less than half a second
#@markdown **ESRGAN** Boosts the resolution of images with a built-in RealESRGAN option
#@markdown LDSR and GoBig enable amazing upscale options in the new Image Lab
add_CFP = True #@param {type:"boolean"}
add_ESR = True #@param {type:"boolean"}
add_LDSR = False #@param {type:"boolean"}
#@markdown ⚠️ LDSR is 1.9GB and may take time to download
if add_CFP:
%cd /content/stable-diffusion/src/gfpgan/
!pip install basicsr facexlib yapf lmdb opencv-python pyyaml tb-nightly --no-deps
!python setup.py develop
!pip install realesrgan
!wget https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth -P experiments/pretrained_models
if add_ESR:
%cd /content/stable-diffusion/src/realesrgan/
!wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth -P experiments/pretrained_models
!wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth -P experiments/pretrained_models
if add_LDSR:
%cd /content/stable-diffusion/src
!git clone https://github.com/devilismyfriend/latent-diffusion
%cd latent-diffusion
%mkdir -p experiments/
%cd experiments/
%mkdir -p pretrained_models
%cd pretrained_models
#project.yaml download
!wget -O project.yaml https://heibox.uni-heidelberg.de/f/31a76b13ea27482981b4/?dl=1
#model.ckpt model download
!wget -O model.ckpt https://heibox.uni-heidelberg.de/f/578df07c8fc04ffbadf3/?dl=1
%cd /content/stable-diffusion/
!wget https://github.com/matomo-org/travis-scripts/blob/master/fonts/Arial.ttf?raw=true -O arial.ttf
# 2. Configure NovelAI
**Expand this to configure a password**; otherwise a random one is generated automatically
Re-run the cell below after every change
## Download and copy files
This takes at least 4 minutes, please wait
If it fails, simply re-run steps 2 and 3
!sudo apt-get install aria2
!sudo apt-get install file
!mkdir /content/time
!git clone https://github.com/pnpnpn/timeout-decorator.git /content/time
%cd /content/time
!pwd
!ls -l
# Download NovelAI
%cd /content/time
import timeout_decorator
outTime=180
@timeout_decorator.timeout(outTime)
def downNovelAI():
!rm -rf /content/n2
!mkdir /content/n2
%cd /content/n2
!aria2c "magnet:?xt=urn:btih:4a4b483d4a5840b6e1fee6b0ca1582c979434e4d&dn=naifu&tr=udp%3a%2f%2ftracker.opentrackr.org%3a1337%2fannounce"
def checkFile():
!file /content/n2/naifu/models/animefull-final-pruned/model.ckpt>fileinfo
!file /content/n2/naifu/models/animevae.pt>fileinfo2
f1=open("fileinfo")
res1=f1.read()
f1.close
f2=open("fileinfo2")
res2=f2.read()
f2.close
return "Zip" in res1 and "Zip" in res2
while 1:
try:
downNovelAI()
except:
if checkFile():
print("下载完成")
outTime+=60
break
else:
print("下载未完成,自动重试")
# 下载WebUI
!mkdir /content/novelai
%cd /content/novelai
!git clone https://github.com/RyensX/stable-diffusion-webui-zh /content/novelai
%cd /content/novelai
!git checkout -b master
# Copy the models
!cp /content/n2/naifu/models/animefull-final-pruned/model.ckpt /content/novelai/models/Stable-diffusion/
!cp /content/n2/naifu/models/animevae.pt /content/novelai/models/Stable-diffusion/model.pt
!mkdir -p /content/novelai/train_images/raw/
!mkdir -p /content/novelai/train_images/des/
## Set a password
If none is set, a random one is generated
Re-run the cell below after every change
import random
keys="abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
#@markdown # Service username
user="Iris" #@param {type:"string"}
if len(user)==0:
user="".join([random.choice(keys) for i in range(random.randint(4,6))])
#@markdown # Service password
pwd="212121" #@param {type:"string"}
if len(pwd)==0:
pwd="".join([random.choice(keys) for i in range(random.randint(6,8))])
# 3. Run NovelAI
* When it starts successfully, two blue URLs are displayed
* Click the URL that looks **like** ~https://xxxx.gradio.app/~ to access it from outside; the link can be shared with others
* Sometimes it starts successfully but no link is given, usually because too many people are generating links at once; **re-run** this step and try again
* Sometimes an image appears without the progress bar moving and the interface never comes back; that is also due to heavy load, just refresh the page
**You can stop and re-run the cell below as often as needed** to control whether NovelAI is running
%cd /content/novelai
print("#####################################################################################################################")
print(f"* 账号密码分别是{user}和{pwd}")
print("#######################################")
print("!!!运行成功时会显示两个蓝色的地址,点击下方类似 https://xxxx.gradio.app/ 的网址即可外部访问,支持分享给别人用")
print("!!!注意看上面文本提示")
print("#####################################################################################################################")
!python launch.py --share --gradio-auth {user}:{pwd} --deepdanbooru
|
Nobody138/xlm-roberta-base-finetuned-panx-de
|
Nobody138
| 2022-10-22T06:22:22Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-10-12T01:12:00Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.de
split: train
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8648740833380706
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1365
- F1: 0.8649
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2553 | 1.0 | 525 | 0.1575 | 0.8279 |
| 0.1284 | 2.0 | 1050 | 0.1386 | 0.8463 |
| 0.0813 | 3.0 | 1575 | 0.1365 | 0.8649 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
api19750904/newspainclass
|
api19750904
| 2022-10-22T06:01:53Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-10-22T05:52:12Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# api19750904/newspainclass
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('api19750904/newspainclass')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('api19750904/newspainclass')
model = AutoModel.from_pretrained('api19750904/newspainclass')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=api19750904/newspainclass)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 14000 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 14000,
"warmup_steps": 1400,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
sayakpaul/demo
|
sayakpaul
| 2022-10-22T04:07:20Z | 0 | 0 |
keras
|
[
"keras",
"tf-keras",
"doi:10.57967/hf/0070",
"region:us"
] | null | 2022-10-22T04:07:13Z |
---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Model Plot
<details>
<summary>View Model Plot</summary>

</details>
|
rahul77/t5-small-finetuned-thehindu1
|
rahul77
| 2022-10-22T02:54:27Z | 9 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-22T02:37:33Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: rahul77/t5-small-finetuned-thehindu1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# rahul77/t5-small-finetuned-thehindu1
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4672
- Validation Loss: 0.7612
- Train Rouge1: 29.6559
- Train Rouge2: 24.0992
- Train Rougel: 27.7417
- Train Rougelsum: 28.4408
- Train Gen Len: 19.0
- Epoch: 49
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Train Gen Len | Epoch |
|:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-------------:|:-----:|
| 1.2252 | 0.9927 | 25.8031 | 17.7261 | 23.4483 | 25.0648 | 19.0 | 0 |
| 1.0509 | 0.9137 | 28.0482 | 20.6823 | 25.5396 | 27.0125 | 19.0 | 1 |
| 0.9961 | 0.8638 | 28.2964 | 22.1783 | 26.4157 | 27.4368 | 19.0 | 2 |
| 0.9266 | 0.8321 | 27.7054 | 21.8853 | 26.0306 | 26.9068 | 19.0 | 3 |
| 0.8851 | 0.8117 | 28.3740 | 22.8198 | 26.8479 | 27.5047 | 19.0 | 4 |
| 0.8505 | 0.7975 | 28.7979 | 23.1437 | 27.0745 | 27.7887 | 19.0 | 5 |
| 0.8247 | 0.7890 | 28.9634 | 23.3567 | 27.3117 | 28.0320 | 19.0 | 6 |
| 0.8154 | 0.7827 | 28.8667 | 23.4468 | 27.1404 | 27.8453 | 19.0 | 7 |
| 0.7889 | 0.7813 | 29.0498 | 23.6403 | 27.5662 | 28.1518 | 19.0 | 8 |
| 0.7676 | 0.7774 | 29.1829 | 23.5778 | 27.7014 | 28.3268 | 19.0 | 9 |
| 0.7832 | 0.7714 | 29.1040 | 23.3700 | 27.6605 | 28.2650 | 19.0 | 10 |
| 0.7398 | 0.7676 | 29.1040 | 23.3700 | 27.6605 | 28.2650 | 19.0 | 11 |
| 0.7473 | 0.7644 | 29.4387 | 24.1983 | 27.9842 | 28.5700 | 19.0 | 12 |
| 0.7270 | 0.7628 | 29.3128 | 24.1484 | 27.8565 | 28.4215 | 19.0 | 13 |
| 0.7174 | 0.7615 | 29.3128 | 24.1484 | 27.8565 | 28.4215 | 19.0 | 14 |
| 0.7231 | 0.7577 | 29.3838 | 23.9483 | 27.6550 | 28.3416 | 19.0 | 15 |
| 0.7099 | 0.7558 | 29.4866 | 24.1703 | 27.8649 | 28.4404 | 19.0 | 16 |
| 0.7060 | 0.7548 | 29.4866 | 24.1703 | 27.8649 | 28.4404 | 19.0 | 17 |
| 0.6884 | 0.7539 | 29.4866 | 24.1703 | 27.8649 | 28.4404 | 19.0 | 18 |
| 0.6778 | 0.7546 | 29.4866 | 24.1703 | 27.8649 | 28.4404 | 19.0 | 19 |
| 0.6586 | 0.7519 | 29.4866 | 24.1703 | 27.8649 | 28.4404 | 19.0 | 20 |
| 0.6474 | 0.7521 | 29.4866 | 24.1703 | 27.8649 | 28.4404 | 19.0 | 21 |
| 0.6392 | 0.7527 | 29.4866 | 24.1703 | 27.8649 | 28.4404 | 19.0 | 22 |
| 0.6424 | 0.7537 | 29.4866 | 24.1703 | 27.8649 | 28.4404 | 19.0 | 23 |
| 0.6184 | 0.7536 | 29.4866 | 24.1703 | 27.8649 | 28.4404 | 19.0 | 24 |
| 0.6164 | 0.7520 | 29.4866 | 24.0547 | 27.7388 | 28.3416 | 19.0 | 25 |
| 0.6115 | 0.7502 | 29.4866 | 23.9746 | 27.8232 | 28.4227 | 19.0 | 26 |
| 0.6056 | 0.7498 | 29.4866 | 23.9746 | 27.8232 | 28.4227 | 19.0 | 27 |
| 0.6004 | 0.7488 | 29.4451 | 23.7671 | 27.5435 | 28.2982 | 19.0 | 28 |
| 0.5851 | 0.7478 | 29.4451 | 23.7671 | 27.5435 | 28.2982 | 19.0 | 29 |
| 0.5777 | 0.7496 | 29.4866 | 23.9746 | 27.8232 | 28.4227 | 19.0 | 30 |
| 0.5751 | 0.7486 | 29.4866 | 23.9746 | 27.8232 | 28.4227 | 19.0 | 31 |
| 0.5730 | 0.7485 | 29.4866 | 23.9746 | 27.8232 | 28.4227 | 19.0 | 32 |
| 0.5487 | 0.7499 | 29.4962 | 24.0563 | 27.8422 | 28.4356 | 19.0 | 33 |
| 0.5585 | 0.7517 | 29.4962 | 24.0563 | 27.8422 | 28.4356 | 19.0 | 34 |
| 0.5450 | 0.7538 | 29.4962 | 24.0563 | 27.8422 | 28.4356 | 19.0 | 35 |
| 0.5427 | 0.7509 | 29.4962 | 24.0563 | 27.8422 | 28.4356 | 19.0 | 36 |
| 0.5287 | 0.7500 | 29.4962 | 24.0563 | 27.8422 | 28.4356 | 19.0 | 37 |
| 0.5231 | 0.7486 | 29.4962 | 24.0563 | 27.8422 | 28.4356 | 19.0 | 38 |
| 0.5155 | 0.7523 | 29.4962 | 24.0563 | 27.8422 | 28.4356 | 19.0 | 39 |
| 0.5105 | 0.7550 | 29.4962 | 24.0563 | 27.8422 | 28.4356 | 19.0 | 40 |
| 0.5175 | 0.7557 | 29.6736 | 24.3120 | 28.0332 | 28.5828 | 19.0 | 41 |
| 0.5053 | 0.7560 | 29.6736 | 24.3120 | 28.0332 | 28.5828 | 19.0 | 42 |
| 0.4928 | 0.7548 | 29.6736 | 24.3120 | 28.0332 | 28.5828 | 19.0 | 43 |
| 0.4913 | 0.7568 | 29.6559 | 24.0992 | 27.7417 | 28.4408 | 19.0 | 44 |
| 0.4841 | 0.7574 | 29.6559 | 24.0992 | 27.7417 | 28.4408 | 19.0 | 45 |
| 0.4770 | 0.7583 | 29.6736 | 24.3120 | 28.0332 | 28.5828 | 19.0 | 46 |
| 0.4727 | 0.7581 | 29.6736 | 24.3120 | 28.0332 | 28.5828 | 19.0 | 47 |
| 0.4612 | 0.7623 | 29.6736 | 24.3120 | 28.0332 | 28.5828 | 19.0 | 48 |
| 0.4672 | 0.7612 | 29.6559 | 24.0992 | 27.7417 | 28.4408 | 19.0 | 49 |
### Framework versions
- Transformers 4.23.1
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.1
|
paj/ppo-lunar
|
paj
| 2022-10-22T02:52:42Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-22T00:40:57Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 246.51 +/- 21.34
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Sketch: the checkpoint filename below is an assumption -- check the repo's file list
checkpoint = load_from_hub(repo_id="paj/ppo-lunar", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
sd-concepts-library/dreamy-painting
|
sd-concepts-library
| 2022-10-22T02:48:34Z | 0 | 3 | null |
[
"license:mit",
"region:us"
] | null | 2022-10-22T02:35:47Z |
---
license: mit
---
### Dreamy Painting on Stable Diffusion
This is the `<dreamy-painting>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
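As a minimal sketch (not part of the original card), the embedding can also be loaded with the `diffusers` library, assuming a recent release that provides `load_textual_inversion`; the base checkpoint and prompt below are illustrative assumptions:
```python
import torch
from diffusers import StableDiffusionPipeline

# Assumed base checkpoint; any Stable Diffusion v1.x pipeline should work here
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the learned embedding from this repo; it registers the <dreamy-painting> token
pipe.load_textual_inversion("sd-concepts-library/dreamy-painting")

image = pipe("a mountain lake at dusk in the style of <dreamy-painting>").images[0]
image.save("dreamy-painting-sample.png")
```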
Here is the new concept you will be able to use as a `style`:





Here are images generated in this style:




|
MAJF/bhu
|
MAJF
| 2022-10-22T01:38:07Z | 0 | 0 | null |
[
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2022-10-22T01:38:07Z |
---
license: bigscience-bloom-rail-1.0
---
|
debbiesoon/distilbart
|
debbiesoon
| 2022-10-22T01:27:28Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:wiki_lingua",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-22T00:30:54Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wiki_lingua
model-index:
- name: distilbart
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart
This model is a fine-tuned version of [sshleifer/distilbart-xsum-12-3](https://huggingface.co/sshleifer/distilbart-xsum-12-3) on the wiki_lingua dataset.
## Model description
More information needed
## Intended uses & limitations
encoder_max_length = 256
decoder_max_length = 64
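A minimal sketch (not from the original card) of how these length limits might be applied at inference; the example article and generation settings are assumptions:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("debbiesoon/distilbart")
model = AutoModelForSeq2SeqLM.from_pretrained("debbiesoon/distilbart")

article = "Replace this with the article you want to summarise."  # illustrative input
inputs = tokenizer(article, max_length=256, truncation=True, return_tensors="pt")
summary_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```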
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
Meow412/finetuning-sentiment-model-3000-samples
|
Meow412
| 2022-10-22T00:57:39Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-22T00:48:29Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8666666666666667
- name: F1
type: f1
value: 0.8684210526315789
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3184
- Accuracy: 0.8667
- F1: 0.8684
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
Shaier/longformer_openbook
|
Shaier
| 2022-10-21T23:14:01Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"longformer",
"multiple-choice",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2022-10-21T22:31:44Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: longformer_openbook
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# longformer_openbook
This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7773
- Accuracy: 0.71
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 25
- total_train_batch_size: 100
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.99 | 49 | 0.8618 | 0.662 |
| No log | 1.99 | 98 | 0.7773 | 0.71 |
### Framework versions
- Transformers 4.21.3
- Pytorch 1.12.1
- Datasets 2.5.1
- Tokenizers 0.11.0
|
dslack/t5-flan-small
|
dslack
| 2022-10-21T22:46:33Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-21T22:33:36Z |
T5 FLAN small model from Google's t5x release, made compatible with Hugging Face Transformers for ease of use.
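A minimal loading sketch (not part of the original card); the prompt below is only an illustrative assumption:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("dslack/t5-flan-small")
model = AutoModelForSeq2SeqLM.from_pretrained("dslack/t5-flan-small")

inputs = tokenizer("Translate English to German: How old are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```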
|
sujit27/q-Taxi-v3
|
sujit27
| 2022-10-21T22:21:03Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-21T22:19:07Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.76
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# load_from_hub and evaluate_agent are helper functions defined in the
# Hugging Face Deep RL course notebook, not a pip-installable package.
model = load_from_hub(repo_id="sujit27/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
Dickwold/Fv
|
Dickwold
| 2022-10-21T22:01:11Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2022-10-21T22:01:11Z |
---
license: bigscience-openrail-m
---
|
sd-dreambooth-library/MoonKnightCkpt
|
sd-dreambooth-library
| 2022-10-21T20:00:11Z | 0 | 2 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2022-10-21T19:29:02Z |
---
license: creativeml-openrail-m
---
Model trained with the SD 1.5: runwayml/stable-diffusion-v1-5
Dreambooth google colab: https://colab.research.google.com/github/ShivamShrirao/diffusers/blob/main/examples/dreambooth/DreamBooth_Stable_Diffusion.ipynb
youtube video of the training: https://www.youtube.com/watch?v=uzmJXDSxoRk&ab_channel=InversiaImages
|
SanDiegoDude/DarkCrystalMerged-Skeksis-and-Gelfling-prompts
|
SanDiegoDude
| 2022-10-21T19:41:31Z | 0 | 1 | null |
[
"license:mit",
"region:us"
] | null | 2022-10-20T17:29:34Z |
---
license: mit
---
Hi Guys, this is my first attempt at making something public, so my apologies if this seems completely amateur and unprofessional, because it absolutely is!
So what I've done here is Dreambooth train 2 different models on SD1.4, one on images of Gelflings from the Dark Crystal, and another model on Skeksis. I then merged both models together with equal weights, and this is the result. (training done via NMKD Stable Diffusion on an RTX 3090 for 4000 steps for both models)
Class words are "Gelfling" and "Skeksis" - I've found the model really favors bird beaks for the Skeksis, so if you're getting beaks on everything, de-emphasize by .3 or .4. Conversely, I've found I need to emphasize Gelflings to about 1.2 to get really good Gelfling examples. Word of warning: it has no clue what is male and what is female for either class, so don't be upset by cross-dressing Gelflings!
Here are Skeksis samples, some with Gelflings as well:









and here are the Gelflings (bonus points if you can figure out the celebrity likenesses!)






Hope you enjoy, I'm impressed with how well this training turned out. I look forward to seeing Gelflings in the wild!
|
exploranium/fox-count
|
exploranium
| 2022-10-21T19:05:26Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2022-10-21T19:05:26Z |
---
license: creativeml-openrail-m
---
|
Icepyck/Vascular1
|
Icepyck
| 2022-10-21T18:12:20Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2022-10-21T18:12:20Z |
---
license: bigscience-openrail-m
---
|
orlcast/layoutxlm-finetuned-xfund-it
|
orlcast
| 2022-10-21T18:07:01Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"generated_from_trainer",
"dataset:xfun",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-10-21T16:57:07Z |
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- xfun
model-index:
- name: layoutxlm-finetuned-xfund-it
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutxlm-finetuned-xfund-it
This model is a fine-tuned version of [microsoft/layoutxlm-base](https://huggingface.co/microsoft/layoutxlm-base) on the xfun dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1000
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.10.0+cu111
- Datasets 2.6.1
- Tokenizers 0.13.1
|
asapcreditrepairboston/Credit-Repair-Boston
|
asapcreditrepairboston
| 2022-10-21T17:56:55Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-10-21T17:56:09Z |
ASAP [Credit Repair Boston](https://boston.asapcreditrepairusa.com/) will help you repair your credit scores by removing derogatory items from your accounts. Call or text us today!
|
jayanta/resnet152-FV-finetuned-memes
|
jayanta
| 2022-10-21T17:26:29Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"resnet",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-10-21T16:56:02Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: resnet152-FV-finetuned-memes
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7557959814528593
- name: Precision
type: precision
value: 0.7556690736625777
- name: Recall
type: recall
value: 0.7557959814528593
- name: F1
type: f1
value: 0.7545674798253312
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet152-FV-finetuned-memes
This model is a fine-tuned version of [microsoft/resnet-152](https://huggingface.co/microsoft/resnet-152) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6772
- Accuracy: 0.7558
- Precision: 0.7557
- Recall: 0.7558
- F1: 0.7546
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00012
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.5739 | 0.99 | 20 | 1.5427 | 0.4521 | 0.3131 | 0.4521 | 0.2880 |
| 1.4353 | 1.99 | 40 | 1.3786 | 0.4490 | 0.3850 | 0.4490 | 0.2791 |
| 1.3026 | 2.99 | 60 | 1.2734 | 0.4799 | 0.3073 | 0.4799 | 0.3393 |
| 1.1579 | 3.99 | 80 | 1.1378 | 0.5278 | 0.4300 | 0.5278 | 0.4143 |
| 1.0276 | 4.99 | 100 | 1.0231 | 0.5734 | 0.4497 | 0.5734 | 0.4865 |
| 0.8826 | 5.99 | 120 | 0.9228 | 0.6252 | 0.5983 | 0.6252 | 0.5637 |
| 0.766 | 6.99 | 140 | 0.8441 | 0.6662 | 0.6474 | 0.6662 | 0.6320 |
| 0.6732 | 7.99 | 160 | 0.8009 | 0.6901 | 0.6759 | 0.6901 | 0.6704 |
| 0.5653 | 8.99 | 180 | 0.7535 | 0.7218 | 0.7141 | 0.7218 | 0.7129 |
| 0.4957 | 9.99 | 200 | 0.7317 | 0.7257 | 0.7248 | 0.7257 | 0.7200 |
| 0.4534 | 10.99 | 220 | 0.6808 | 0.7434 | 0.7405 | 0.7434 | 0.7390 |
| 0.3792 | 11.99 | 240 | 0.6949 | 0.7450 | 0.7454 | 0.7450 | 0.7399 |
| 0.3489 | 12.99 | 260 | 0.6746 | 0.7496 | 0.7511 | 0.7496 | 0.7474 |
| 0.3113 | 13.99 | 280 | 0.6637 | 0.7573 | 0.7638 | 0.7573 | 0.7579 |
| 0.2947 | 14.99 | 300 | 0.6451 | 0.7589 | 0.7667 | 0.7589 | 0.7610 |
| 0.2776 | 15.99 | 320 | 0.6754 | 0.7543 | 0.7565 | 0.7543 | 0.7525 |
| 0.2611 | 16.99 | 340 | 0.6808 | 0.7550 | 0.7607 | 0.7550 | 0.7529 |
| 0.2428 | 17.99 | 360 | 0.7005 | 0.7457 | 0.7497 | 0.7457 | 0.7404 |
| 0.2346 | 18.99 | 380 | 0.6597 | 0.7573 | 0.7642 | 0.7573 | 0.7590 |
| 0.2367 | 19.99 | 400 | 0.6772 | 0.7558 | 0.7557 | 0.7558 | 0.7546 |
### Framework versions
- Transformers 4.24.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.6.1.dev0
- Tokenizers 0.13.1
|
mayankb96/bert-base-uncased-finetuned-lexglue
|
mayankb96
| 2022-10-21T17:24:15Z | 4 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"dataset:lex_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-10-21T17:01:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- lex_glue
model-index:
- name: bert-base-uncased-finetuned-lexglue
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-lexglue
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the lex_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0125
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.7154 | 1.0 | 1250 | 1.1155 |
| 0.9658 | 2.0 | 2500 | 1.0348 |
| 1.0321 | 3.0 | 3750 | 1.0125 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.6.1
- Tokenizers 0.12.1
|
ViktorDo/DistilBERT-WIKI_Epiphyte_Finetuned
|
ViktorDo
| 2022-10-21T16:51:47Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-21T15:00:59Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: DistilBERT-WIKI_Epiphyte_Finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DistilBERT-WIKI_Epiphyte_Finetuned
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0506
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0711 | 1.0 | 2094 | 0.0543 |
| 0.0512 | 2.0 | 4188 | 0.0474 |
| 0.027 | 3.0 | 6282 | 0.0506 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
jayanta/convnext-large-224-22k-1k-FV2-finetuned-memes
|
jayanta
| 2022-10-21T16:48:07Z | 40 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"convnext",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-10-21T16:09:30Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: convnext-large-224-22k-1k-FV2-finetuned-memes
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.866306027820711
- name: Precision
type: precision
value: 0.8617341777601428
- name: Recall
type: recall
value: 0.866306027820711
- name: F1
type: f1
value: 0.8629450778711495
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext-large-224-22k-1k-FV2-finetuned-memes
This model is a fine-tuned version of [facebook/convnext-large-224-22k-1k](https://huggingface.co/facebook/convnext-large-224-22k-1k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4290
- Accuracy: 0.8663
- Precision: 0.8617
- Recall: 0.8663
- F1: 0.8629
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00012
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.8992 | 0.99 | 20 | 0.6455 | 0.7658 | 0.7512 | 0.7658 | 0.7534 |
| 0.4245 | 1.99 | 40 | 0.4008 | 0.8539 | 0.8680 | 0.8539 | 0.8541 |
| 0.2054 | 2.99 | 60 | 0.3245 | 0.8694 | 0.8631 | 0.8694 | 0.8650 |
| 0.1102 | 3.99 | 80 | 0.3231 | 0.8671 | 0.8624 | 0.8671 | 0.8645 |
| 0.0765 | 4.99 | 100 | 0.3882 | 0.8563 | 0.8603 | 0.8563 | 0.8556 |
| 0.0642 | 5.99 | 120 | 0.4133 | 0.8601 | 0.8604 | 0.8601 | 0.8598 |
| 0.0574 | 6.99 | 140 | 0.3889 | 0.8694 | 0.8657 | 0.8694 | 0.8667 |
| 0.0526 | 7.99 | 160 | 0.4145 | 0.8655 | 0.8705 | 0.8655 | 0.8670 |
| 0.0468 | 8.99 | 180 | 0.4256 | 0.8679 | 0.8642 | 0.8679 | 0.8650 |
| 0.0472 | 9.99 | 200 | 0.4290 | 0.8663 | 0.8617 | 0.8663 | 0.8629 |
### Framework versions
- Transformers 4.24.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.6.1.dev0
- Tokenizers 0.13.1
|
haoanh98/Long_Bartpho_syllable_base
|
haoanh98
| 2022-10-21T16:19:30Z | 9 | 0 |
transformers
|
[
"transformers",
"tf",
"led",
"text2text-generation",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-19T08:37:57Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: Long_Bartpho_syllable_base
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Long_Bartpho_syllable_base
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.23.1
- TensorFlow 2.9.2
- Tokenizers 0.13.1
|
sd-concepts-library/azura-from-vibrant-venture
|
sd-concepts-library
| 2022-10-21T14:50:19Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2022-10-21T14:50:13Z |
---
license: mit
---
### azura-from-vibrant-venture on Stable Diffusion
This is the `<azura>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:







|
huggingtweets/iangabchri-nisipisa-tyler02020202
|
huggingtweets
| 2022-10-21T14:48:20Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-21T14:46:52Z |
---
language: en
thumbnail: http://www.huggingtweets.com/iangabchri-nisipisa-tyler02020202/1666363695853/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1563876002329231363/RPhmnhOa_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1474994961896644608/um4unzmz_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1548021440191926272/FaXKxAO__400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">gab & tyler & nisa, from online</div>
<div style="text-align: center; font-size: 14px;">@iangabchri-nisipisa-tyler02020202</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from gab & tyler & nisa, from online.
| Data | gab | tyler | nisa, from online |
| --- | --- | --- | --- |
| Tweets downloaded | 253 | 2595 | 3221 |
| Retweets | 66 | 102 | 237 |
| Short tweets | 5 | 632 | 342 |
| Tweets kept | 182 | 1861 | 2642 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3rlxqnm8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @iangabchri-nisipisa-tyler02020202's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/gg2ms4z1) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/gg2ms4z1/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/iangabchri-nisipisa-tyler02020202')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ctu-aic/xlm-roberta-large-xnli-csfever_nli
|
ctu-aic
| 2022-10-21T14:10:34Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"xlm-roberta",
"text-classification",
"arxiv:2201.11115",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-21T14:04:49Z |
---
datasets:
- ctu-aic/csfever_nli
languages:
- cs
license: cc-by-sa-4.0
tags:
- natural-language-inference
---
# 🦾 xlm-roberta-large-xnli-csfever_nli
Transformer model for **Natural Language Inference** in Czech (`cs`), finetuned on the `ctu-aic/csfever_nli` dataset.
## 🧰 Usage
### 👾 Using UKPLab `sentence_transformers` `CrossEncoder`
The model was trained using the `CrossEncoder` API, and we recommend using it the same way.
```python
from sentence_transformers.cross_encoder import CrossEncoder
model = CrossEncoder('ctu-aic/xlm-roberta-large-xnli-csfever_nli')
scores = model.predict([["My first context.", "My first hypothesis."],
["Second context.", "Hypothesis."]])
```
### 🤗 Using Huggingface `transformers`
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ctu-aic/xlm-roberta-large-xnli-csfever_nli")
tokenizer = AutoTokenizer.from_pretrained("ctu-aic/xlm-roberta-large-xnli-csfever_nli")
```
## 🌳 Contributing
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.
## 👬 Authors
The model was trained and uploaded by **[ullriher](https://udb.fel.cvut.cz/?uid=ullriher&sn=&givenname=&_cmd=Hledat&_reqn=1&_type=user&setlang=en)** (e-mail: [ullriher@fel.cvut.cz](mailto:ullriher@fel.cvut.cz))
The code was codeveloped by the NLP team at Artificial Intelligence Center of CTU in Prague ([AIC](https://www.aic.fel.cvut.cz/)).
## 🔐 License
[cc-by-sa-4.0](https://choosealicense.com/licenses/cc-by-sa-4.0)
## 💬 Citation
If you find this repository helpful, feel free to cite our publication:
```
@article{DBLP:journals/corr/abs-2201-11115,
author = {Herbert Ullrich and
Jan Drchal and
Martin R{\'{y}}par and
Hana Vincourov{\'{a}} and
V{\'{a}}clav Moravec},
title = {CsFEVER and CTKFacts: Acquiring Czech Data for Fact Verification},
journal = {CoRR},
volume = {abs/2201.11115},
year = {2022},
url = {https://arxiv.org/abs/2201.11115},
eprinttype = {arXiv},
eprint = {2201.11115},
timestamp = {Tue, 01 Feb 2022 14:59:01 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2201-11115.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
ctu-aic/xlm-roberta-large-xnli-enfever_nli
|
ctu-aic
| 2022-10-21T13:52:57Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"xlm-roberta",
"text-classification",
"arxiv:2201.11115",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-21T13:47:11Z |
---
datasets:
- ctu-aic/enfever_nli
languages:
- cs
license: cc-by-sa-4.0
tags:
- natural-language-inference
---
# 🦾 xlm-roberta-large-xnli-enfever_nli
Transformer model for **Natural Language Inference** in Czech (`cs`), finetuned on the `ctu-aic/enfever_nli` dataset.
## 🧰 Usage
### 👾 Using UKPLab `sentence_transformers` `CrossEncoder`
The model was trained using the `CrossEncoder` API, and we recommend using it the same way.
```python
from sentence_transformers.cross_encoder import CrossEncoder
model = CrossEncoder('ctu-aic/xlm-roberta-large-xnli-enfever_nli')
scores = model.predict([["My first context.", "My first hypothesis."],
["Second context.", "Hypothesis."]])
```
### 🤗 Using Huggingface `transformers`
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ctu-aic/xlm-roberta-large-xnli-enfever_nli")
tokenizer = AutoTokenizer.from_pretrained("ctu-aic/xlm-roberta-large-xnli-enfever_nli")
```
## 🌳 Contributing
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.
## 👬 Authors
The model was trained and uploaded by **[ullriher](https://udb.fel.cvut.cz/?uid=ullriher&sn=&givenname=&_cmd=Hledat&_reqn=1&_type=user&setlang=en)** (e-mail: [ullriher@fel.cvut.cz](mailto:ullriher@fel.cvut.cz))
The code was codeveloped by the NLP team at Artificial Intelligence Center of CTU in Prague ([AIC](https://www.aic.fel.cvut.cz/)).
## 🔐 License
[cc-by-sa-4.0](https://choosealicense.com/licenses/cc-by-sa-4.0)
## 💬 Citation
If you find this repository helpful, feel free to cite our publication:
```
@article{DBLP:journals/corr/abs-2201-11115,
author = {Herbert Ullrich and
Jan Drchal and
Martin R{\'{y}}par and
Hana Vincourov{\'{a}} and
V{\'{a}}clav Moravec},
title = {CsFEVER and CTKFacts: Acquiring Czech Data for Fact Verification},
journal = {CoRR},
volume = {abs/2201.11115},
year = {2022},
url = {https://arxiv.org/abs/2201.11115},
eprinttype = {arXiv},
eprint = {2201.11115},
timestamp = {Tue, 01 Feb 2022 14:59:01 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2201-11115.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
manirai91/enlm-r
|
manirai91
| 2022-10-21T13:50:54Z | 73 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-08-01T07:58:38Z |
---
tags:
- generated_from_trainer
model-index:
- name: enlm-r
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# enlm-r
This model is a fine-tuned version of an unspecified base model on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4837
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0006
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 128
- total_train_batch_size: 8192
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06
- lr_scheduler_type: polynomial
- lr_scheduler_warmup_steps: 24000
- num_epochs: 81
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.4 | 0.33 | 160 | 10.7903 |
| 6.4 | 0.66 | 320 | 10.1431 |
| 6.4 | 0.99 | 480 | 9.8708 |
| 6.4 | 0.33 | 640 | 9.3884 |
| 6.4 | 0.66 | 800 | 8.7352 |
| 6.4 | 0.99 | 960 | 8.3341 |
| 6.4 | 1.33 | 1120 | 8.0614 |
| 6.4 | 1.66 | 1280 | 7.8582 |
| 4.2719 | 1.99 | 1440 | 7.4879 |
| 3.2 | 3.3 | 1600 | 7.2689 |
| 3.2 | 3.63 | 1760 | 7.1434 |
| 3.2 | 3.96 | 1920 | 7.0576 |
| 3.2 | 4.29 | 2080 | 7.0030 |
| 3.2 | 4.62 | 2240 | 6.9612 |
| 3.2 | 4.95 | 2400 | 6.9394 |
| 3.2 | 5.28 | 2560 | 6.9559 |
| 3.2 | 5.61 | 2720 | 6.8964 |
| 3.2 | 5.94 | 2880 | 6.8939 |
| 3.2 | 6.27 | 3040 | 6.8871 |
| 3.2 | 6.6 | 3200 | 6.8771 |
| 3.2 | 6.93 | 3360 | 6.8617 |
| 3.2 | 7.26 | 3520 | 6.8472 |
| 3.2 | 7.59 | 3680 | 6.8283 |
| 3.2 | 7.92 | 3840 | 6.8082 |
| 3.2 | 8.25 | 4000 | 6.8119 |
| 3.2 | 8.58 | 4160 | 6.7962 |
| 3.2 | 8.91 | 4320 | 6.7751 |
| 3.2 | 9.24 | 4480 | 6.7405 |
| 3.2 | 9.57 | 4640 | 6.7412 |
| 3.2 | 9.9 | 4800 | 6.7279 |
| 3.2 | 10.22 | 4960 | 6.7069 |
| 3.2 | 10.55 | 5120 | 6.6998 |
| 3.2 | 10.88 | 5280 | 6.6875 |
| 3.2 | 11.22 | 5440 | 6.6580 |
| 3.2 | 11.55 | 5600 | 6.6402 |
| 3.2 | 11.88 | 5760 | 6.6281 |
| 3.2 | 12.21 | 5920 | 6.6181 |
| 3.2 | 12.54 | 6080 | 6.5995 |
| 3.2 | 12.87 | 6240 | 6.5970 |
| 3.2 | 13.2 | 6400 | 6.5772 |
| 3.2 | 13.53 | 6560 | 6.5594 |
| 3.2 | 13.85 | 6720 | 6.5400 |
| 3.2 | 14.19 | 6880 | 6.5396 |
| 3.2 | 14.51 | 7040 | 6.5211 |
| 3.2 | 14.84 | 7200 | 6.5140 |
| 3.2 | 15.18 | 7360 | 6.4002 |
| 3.2 | 15.5 | 7520 | 6.3170 |
| 3.2 | 15.83 | 7680 | 6.2621 |
| 3.2 | 16.16 | 7840 | 6.2253 |
| 3.2 | 16.49 | 8000 | 6.1722 |
| 3.2 | 16.82 | 8160 | 6.1106 |
| 3.2 | 17.15 | 8320 | 6.1281 |
| 3.2 | 17.48 | 8480 | 6.0019 |
| 3.2 | 17.81 | 8640 | 5.9069 |
| 3.2 | 18.14 | 8800 | 5.7105 |
| 3.2 | 18.47 | 8960 | 5.2741 |
| 3.2 | 18.8 | 9120 | 5.0369 |
| 5.0352 | 19.13 | 9280 | 4.8148 |
| 4.5102 | 19.26 | 9440 | 4.3175 |
| 4.1247 | 19.59 | 9600 | 3.9518 |
| 3.8443 | 20.12 | 9760 | 3.6712 |
| 3.6334 | 20.45 | 9920 | 3.4654 |
| 3.4698 | 20.78 | 10080 | 3.2994 |
| 3.3267 | 21.11 | 10240 | 3.1638 |
| 3.2173 | 21.44 | 10400 | 3.0672 |
| 3.1255 | 21.77 | 10560 | 2.9687 |
| 3.0344 | 22.1 | 10720 | 2.8865 |
| 2.9645 | 22.43 | 10880 | 2.8104 |
| 2.9046 | 22.76 | 11040 | 2.7497 |
| 2.8707 | 23.09 | 11200 | 2.7040 |
| 2.7903 | 23.42 | 11360 | 2.6416 |
| 2.7339 | 23.75 | 11520 | 2.5891 |
| 2.6894 | 24.08 | 11680 | 2.5370 |
| 2.6461 | 24.41 | 11840 | 2.4960 |
| 2.5976 | 24.74 | 12000 | 2.4508 |
| 2.5592 | 25.07 | 12160 | 2.4194 |
| 2.5305 | 25.4 | 12320 | 2.3790 |
| 2.4993 | 25.73 | 12480 | 2.3509 |
| 2.465 | 26.06 | 12640 | 2.3173 |
| 2.4455 | 26.39 | 12800 | 2.2934 |
| 2.4107 | 26.72 | 12960 | 2.2701 |
| 2.3883 | 27.05 | 13120 | 2.2378 |
| 2.3568 | 27.38 | 13280 | 2.2079 |
| 2.3454 | 27.71 | 13440 | 2.1919 |
| 2.3207 | 28.04 | 13600 | 2.1671 |
| 2.2963 | 28.37 | 13760 | 2.1513 |
| 2.2738 | 28.7 | 13920 | 2.1326 |
| 2.2632 | 29.03 | 14080 | 2.1176 |
| 2.2413 | 29.36 | 14240 | 2.0913 |
| 2.2193 | 29.69 | 14400 | 2.0772 |
| 2.2169 | 30.02 | 14560 | 2.0692 |
| 2.1848 | 30.35 | 14720 | 2.0411 |
| 2.1693 | 30.68 | 14880 | 2.0290 |
| 2.1964 | 31.01 | 15040 | 2.0169 |
| 2.1467 | 31.34 | 15200 | 2.0016 |
| 2.1352 | 31.67 | 15360 | 1.9880 |
| 2.1152 | 32.0 | 15520 | 1.9727 |
| 2.1098 | 32.33 | 15680 | 1.9604 |
| 2.0888 | 32.66 | 15840 | 1.9521 |
| 2.0837 | 32.99 | 16000 | 1.9394 |
| 2.0761 | 33.32 | 16160 | 1.9366 |
| 2.0635 | 33.65 | 16320 | 1.9200 |
| 2.0631 | 33.98 | 16480 | 1.9147 |
| 2.0448 | 34.31 | 16640 | 1.9053 |
| 2.0452 | 34.64 | 16800 | 1.8937 |
| 2.0303 | 34.97 | 16960 | 1.8801 |
| 2.0184 | 35.3 | 17120 | 1.8752 |
| 2.0115 | 35.63 | 17280 | 1.8667 |
| 2.0042 | 35.96 | 17440 | 1.8626 |
| 2.002 | 36.29 | 17600 | 1.8565 |
| 1.9918 | 36.62 | 17760 | 1.8475 |
| 1.9868 | 36.95 | 17920 | 1.8420 |
| 1.9796 | 37.28 | 18080 | 1.8376 |
| 1.976 | 37.61 | 18240 | 1.8318 |
| 1.9647 | 37.94 | 18400 | 1.8225 |
| 1.9561 | 38.27 | 18560 | 1.8202 |
| 1.9544 | 38.6 | 18720 | 1.8084 |
| 1.9454 | 38.93 | 18880 | 1.8057 |
| 1.9333 | 39.26 | 19040 | 1.8030 |
| 1.9411 | 39.59 | 19200 | 1.7966 |
| 1.9289 | 39.92 | 19360 | 1.7865 |
| 1.9261 | 40.25 | 19520 | 1.7815 |
| 1.9207 | 40.58 | 19680 | 1.7881 |
| 1.9164 | 40.91 | 19840 | 1.7747 |
| 1.9152 | 41.24 | 20000 | 1.7786 |
| 1.914 | 41.57 | 20160 | 1.7664 |
| 1.901 | 41.9 | 20320 | 1.7586 |
| 1.8965 | 42.23 | 20480 | 1.7554 |
| 1.8982 | 42.56 | 20640 | 1.7524 |
| 1.8941 | 42.89 | 20800 | 1.7460 |
| 1.8834 | 43.22 | 20960 | 1.7488 |
| 1.8841 | 43.55 | 21120 | 1.7486 |
| 1.8846 | 43.88 | 21280 | 1.7424 |
| 1.8763 | 44.21 | 21440 | 1.7352 |
| 1.8688 | 44.54 | 21600 | 1.7349 |
| 1.8714 | 44.87 | 21760 | 1.7263 |
| 1.8653 | 45.2 | 21920 | 1.7282 |
| 1.8673 | 45.53 | 22080 | 1.7195 |
| 1.8682 | 45.85 | 22240 | 1.7266 |
| 1.8532 | 46.19 | 22400 | 1.7180 |
| 1.8553 | 46.51 | 22560 | 1.7137 |
| 1.8569 | 46.84 | 22720 | 1.7158 |
| 1.8469 | 47.18 | 22880 | 1.7117 |
| 1.845 | 47.5 | 23040 | 1.7031 |
| 1.8475 | 47.83 | 23200 | 1.7089 |
| 1.845 | 48.16 | 23360 | 1.7018 |
| 1.8391 | 48.49 | 23520 | 1.6945 |
| 1.8456 | 48.82 | 23680 | 1.7015 |
| 1.8305 | 49.15 | 23840 | 1.6964 |
| 1.8324 | 49.48 | 24000 | 1.6900 |
| 1.7763 | 49.81 | 24160 | 1.6449 |
| 1.7728 | 50.14 | 24320 | 1.6436 |
| 1.7576 | 50.47 | 24480 | 1.6268 |
| 1.7354 | 50.8 | 24640 | 1.6088 |
| 1.74 | 51.13 | 24800 | 1.6156 |
| 1.7251 | 51.06 | 24960 | 1.6041 |
| 1.719 | 51.39 | 25120 | 1.5938 |
| 1.7257 | 52.12 | 25280 | 1.5983 |
| 1.7184 | 52.45 | 25440 | 1.5919 |
| 1.7093 | 52.78 | 25600 | 1.5848 |
| 1.7114 | 53.11 | 25760 | 1.5922 |
| 1.7076 | 53.44 | 25920 | 1.5843 |
| 1.7 | 53.77 | 26080 | 1.5807 |
| 1.7027 | 54.1 | 26240 | 1.5811 |
| 1.704 | 54.43 | 26400 | 1.5766 |
| 1.6958 | 54.76 | 26560 | 1.5756 |
| 1.6976 | 55.09 | 26720 | 1.5773 |
| 1.6944 | 55.42 | 26880 | 1.5725 |
| 1.6891 | 55.75 | 27040 | 1.5685 |
| 1.6936 | 56.08 | 27200 | 1.5750 |
| 1.6893 | 56.41 | 27360 | 1.5696 |
| 1.6886 | 56.74 | 27520 | 1.5643 |
| 1.6936 | 57.07 | 27680 | 1.5691 |
| 1.6883 | 57.4 | 27840 | 1.5718 |
| 1.6832 | 57.73 | 28000 | 1.5660 |
| 1.9222 | 28.03 | 28160 | 1.7107 |
| 1.7838 | 28.19 | 28320 | 1.6345 |
| 1.7843 | 28.36 | 28480 | 1.6445 |
| 1.7809 | 28.52 | 28640 | 1.6461 |
| 1.783 | 28.69 | 28800 | 1.6505 |
| 1.7869 | 28.85 | 28960 | 1.6364 |
| 1.778 | 29.02 | 29120 | 1.6363 |
| 1.775 | 29.18 | 29280 | 1.6364 |
| 1.7697 | 29.34 | 29440 | 1.6345 |
| 1.7719 | 29.51 | 29600 | 1.6261 |
| 1.7454 | 61.16 | 29760 | 1.6099 |
| 1.741 | 61.49 | 29920 | 1.6006 |
| 1.7314 | 62.02 | 30080 | 1.6041 |
| 1.7314 | 62.35 | 30240 | 1.5914 |
| 1.7246 | 62.68 | 30400 | 1.5917 |
| 1.7642 | 63.01 | 30560 | 1.5923 |
| 1.7221 | 63.34 | 30720 | 1.5857 |
| 1.7185 | 63.67 | 30880 | 1.5836 |
| 1.7022 | 64.0 | 31040 | 1.5836 |
| 1.7107 | 64.33 | 31200 | 1.5739 |
| 1.7082 | 64.66 | 31360 | 1.5724 |
| 1.7055 | 64.99 | 31520 | 1.5734 |
| 1.7019 | 65.32 | 31680 | 1.5707 |
| 1.699 | 65.65 | 31840 | 1.5649 |
| 1.6963 | 65.98 | 32000 | 1.5685 |
| 1.6935 | 66.31 | 32160 | 1.5673 |
| 1.6899 | 66.64 | 32320 | 1.5648 |
| 1.6869 | 66.97 | 32480 | 1.5620 |
| 1.6867 | 67.3 | 32640 | 1.5564 |
| 1.6861 | 67.63 | 32800 | 1.5552 |
| 1.6831 | 67.96 | 32960 | 1.5496 |
| 1.6778 | 68.29 | 33120 | 1.5479 |
| 1.6742 | 68.62 | 33280 | 1.5501 |
| 1.6737 | 68.95 | 33440 | 1.5441 |
| 1.6725 | 69.28 | 33600 | 1.5399 |
| 1.6683 | 69.61 | 33760 | 1.5398 |
| 1.6689 | 69.94 | 33920 | 1.5374 |
| 1.6634 | 70.27 | 34080 | 1.5385 |
| 1.6638 | 70.6 | 34240 | 1.5332 |
| 1.6614 | 70.93 | 34400 | 1.5329 |
| 1.6544 | 71.26 | 34560 | 1.5292 |
| 1.6532 | 71.59 | 34720 | 1.5268 |
| 1.6511 | 71.92 | 34880 | 1.5225 |
| 1.6506 | 72.25 | 35040 | 1.5219 |
| 1.6496 | 72.58 | 35200 | 1.5202 |
| 1.6468 | 72.91 | 35360 | 1.5199 |
| 1.6424 | 73.24 | 35520 | 1.5220 |
| 1.642 | 73.57 | 35680 | 1.5145 |
| 1.6415 | 73.9 | 35840 | 1.5139 |
| 1.6419 | 74.23 | 36000 | 1.5120 |
| 1.633 | 74.56 | 36160 | 1.5113 |
| 1.6354 | 74.89 | 36320 | 1.5139 |
| 1.6312 | 75.22 | 36480 | 1.5068 |
| 1.6298 | 75.55 | 36640 | 1.5056 |
| 1.6268 | 75.88 | 36800 | 1.5000 |
| 1.6277 | 76.21 | 36960 | 1.5033 |
| 1.6198 | 76.54 | 37120 | 1.4988 |
| 1.6246 | 76.87 | 37280 | 1.4978 |
| 1.6184 | 77.2 | 37440 | 1.4966 |
| 1.6187 | 77.53 | 37600 | 1.4954 |
| 1.6192 | 77.85 | 37760 | 1.4951 |
| 1.6134 | 78.19 | 37920 | 1.4936 |
| 1.6176 | 78.51 | 38080 | 1.4908 |
| 1.6103 | 78.84 | 38240 | 1.4904 |
| 1.612 | 79.18 | 38400 | 1.4919 |
| 1.611 | 79.5 | 38560 | 1.4891 |
| 1.6082 | 79.83 | 38720 | 1.4837 |
| 1.6047 | 80.16 | 38880 | 1.4859 |
| 1.6058 | 80.49 | 39040 | 1.4814 |
| 1.602 | 80.82 | 39200 | 1.4837 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.3.2
- Tokenizers 0.12.1
|
DemangeJeremy/4-sentiments-with-flaubert
|
DemangeJeremy
| 2022-10-21T13:46:12Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"flaubert",
"text-classification",
"sentiments",
"french",
"flaubert-large",
"fr",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
language: fr
tags:
- sentiments
- text-classification
- flaubert
- french
- flaubert-large
---
# 4-sentiment detection model with FlauBERT (mixed, negative, objective, positive)
### How to use it?
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from transformers import pipeline
loaded_tokenizer = AutoTokenizer.from_pretrained('flaubert/flaubert_large_cased')
loaded_model = AutoModelForSequenceClassification.from_pretrained("DemangeJeremy/4-sentiments-with-flaubert")
nlp = pipeline('sentiment-analysis', model=loaded_model, tokenizer=loaded_tokenizer)
print(nlp("Je suis plutôt confiant."))
```
```
[{'label': 'OBJECTIVE', 'score': 0.3320835530757904}]
```
## Model evaluation results
| Epoch | Validation Loss | Samples Per Second |
|:------:|:--------------:|:------------------:|
| 1 | 2.219246 | 49.476000 |
| 2 | 1.883753 | 47.259000 |
| 3 | 1.747969 | 44.957000 |
| 4 | 1.695606 | 43.872000 |
| 5 | 1.641470 | 45.726000 |
## Citation
For any use of this model, please use the following citation:
> Jérémy Demange, Four sentiments with FlauBERT, (2021), Hugging Face repository, <https://huggingface.co/DemangeJeremy/4-sentiments-with-flaubert>
|
orkg/orkgnlp-predicates-clustering
|
orkg
| 2022-10-21T13:40:57Z | 0 | 0 | null |
[
"onnx",
"license:mit",
"region:us"
] | null | 2022-05-09T08:02:12Z |
---
license: mit
---
This repository includes the files required to run the `Predicates Clustering` ORKG-NLP service.
Please check [this article](https://orkg-nlp-pypi.readthedocs.io/en/latest/services/services.html) for more details about the service.
The [Scikit-Learn](https://scikit-learn.org/stable/) models are converted using [skl2onnx](https://github.com/onnx/sklearn-onnx) and may not include all original scikit-learn functionalities.
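For illustration only, a minimal sketch of running one of the exported ONNX files directly with `onnxruntime`; the file name, input name, and feature shape below are assumptions, and the supported entry point remains the `orkgnlp` package documented in the article linked above.
```python
import numpy as np
import onnxruntime as ort

# Assumptions: a locally downloaded "model.onnx" exported via skl2onnx,
# taking a single float feature matrix as input.
session = ort.InferenceSession("model.onnx")
input_name = session.get_inputs()[0].name
features = np.random.rand(1, 768).astype(np.float32)  # placeholder feature vector
outputs = session.run(None, {input_name: features})
print(outputs[0])  # predicted cluster assignment(s) from the converted model
```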
|
ctu-aic/xlm-roberta-large-xnli-ctkfacts_nli
|
ctu-aic
| 2022-10-21T13:39:36Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"xlm-roberta",
"text-classification",
"arxiv:2201.11115",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-21T13:32:24Z |
---
datasets:
- ctu-aic/ctkfacts_nli
languages:
- cs
license: cc-by-sa-4.0
tags:
- natural-language-inference
---
# 🦾 xlm-roberta-large-xnli-ctkfacts_nli
Transformer model for **Natural Language Inference** in Czech (`cs`), finetuned on the `ctu-aic/ctkfacts_nli` dataset.
## 🧰 Usage
### 👾 Using UKPLab `sentence_transformers` `CrossEncoder`
The model was trained using the `CrossEncoder` API, which we also recommend for inference.
```python
from sentence_transformers.cross_encoder import CrossEncoder
model = CrossEncoder('ctu-aic/xlm-roberta-large-xnli-ctkfacts_nli')
scores = model.predict([["My first context.", "My first hypothesis."],
["Second context.", "Hypothesis."]])
```
### 🤗 Using Huggingface `transformers`
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ctu-aic/xlm-roberta-large-xnli-ctkfacts_nli")
tokenizer = AutoTokenizer.from_pretrained("ctu-aic/xlm-roberta-large-xnli-ctkfacts_nli")
```
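To obtain predictions with the plain `transformers` API, tokenize an evidence–claim pair and softmax the logits; the snippet below is a sketch, and the label names should be taken from `model.config.id2label` rather than assumed.
```python
import torch

# Score a single (context, hypothesis) pair with the model loaded above.
inputs = tokenizer("My first context.", "My first hypothesis.",
                   return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1)[0]
for idx, prob in enumerate(probs):
    print(model.config.id2label[idx], float(prob))
```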
## 🌳 Contributing
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.
## 👬 Authors
The model was trained and uploaded by **[ullriher](https://udb.fel.cvut.cz/?uid=ullriher&sn=&givenname=&_cmd=Hledat&_reqn=1&_type=user&setlang=en)** (e-mail: [ullriher@fel.cvut.cz](mailto:ullriher@fel.cvut.cz))
The code was co-developed by the NLP team at the Artificial Intelligence Center of CTU in Prague ([AIC](https://www.aic.fel.cvut.cz/)).
## 🔐 License
[cc-by-sa-4.0](https://choosealicense.com/licenses/cc-by-sa-4.0)
## 💬 Citation
If you find this repository helpful, feel free to cite our publication:
```
@article{DBLP:journals/corr/abs-2201-11115,
author = {Herbert Ullrich and
Jan Drchal and
               Martin R{\'{y}}par and
               Hana Vincourov{\'{a}} and
               V{\'{a}}clav Moravec},
title = {CsFEVER and CTKFacts: Acquiring Czech Data for Fact Verification},
journal = {CoRR},
volume = {abs/2201.11115},
year = {2022},
url = {https://arxiv.org/abs/2201.11115},
eprinttype = {arXiv},
eprint = {2201.11115},
timestamp = {Tue, 01 Feb 2022 14:59:01 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2201-11115.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
ctu-aic/xlm-roberta-large-squad2-ctkfacts_nli
|
ctu-aic
| 2022-10-21T13:32:10Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"xlm-roberta",
"text-classification",
"arxiv:2201.11115",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-21T13:24:29Z |
---
datasets:
- ctu-aic/ctkfacts_nli
languages:
- cs
license: cc-by-sa-4.0
tags:
- natural-language-inference
---
# 🦾 xlm-roberta-large-squad2-ctkfacts_nli
Transformer model for **Natural Language Inference** in Czech (`cs`), finetuned on the `ctu-aic/ctkfacts_nli` dataset.
## 🧰 Usage
### 👾 Using UKPLab `sentence_transformers` `CrossEncoder`
The model was trained using the `CrossEncoder` API, which we also recommend for inference.
```python
from sentence_transformers.cross_encoder import CrossEncoder
model = CrossEncoder('ctu-aic/xlm-roberta-large-squad2-ctkfacts_nli')
scores = model.predict([["My first context.", "My first hypothesis."],
["Second context.", "Hypothesis."]])
```
### 🤗 Using Huggingface `transformers`
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ctu-aic/xlm-roberta-large-squad2-ctkfacts_nli")
tokenizer = AutoTokenizer.from_pretrained("ctu-aic/xlm-roberta-large-squad2-ctkfacts_nli")
```
## 🌳 Contributing
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.
## 👬 Authors
The model was trained and uploaded by **[ullriher](https://udb.fel.cvut.cz/?uid=ullriher&sn=&givenname=&_cmd=Hledat&_reqn=1&_type=user&setlang=en)** (e-mail: [ullriher@fel.cvut.cz](mailto:ullriher@fel.cvut.cz))
The code was co-developed by the NLP team at the Artificial Intelligence Center of CTU in Prague ([AIC](https://www.aic.fel.cvut.cz/)).
## 🔐 License
[cc-by-sa-4.0](https://choosealicense.com/licenses/cc-by-sa-4.0)
## 💬 Citation
If you find this repository helpful, feel free to cite our publication:
```
@article{DBLP:journals/corr/abs-2201-11115,
author = {Herbert Ullrich and
Jan Drchal and
               Martin R{\'{y}}par and
               Hana Vincourov{\'{a}} and
               V{\'{a}}clav Moravec},
title = {CsFEVER and CTKFacts: Acquiring Czech Data for Fact Verification},
journal = {CoRR},
volume = {abs/2201.11115},
year = {2022},
url = {https://arxiv.org/abs/2201.11115},
eprinttype = {arXiv},
eprint = {2201.11115},
timestamp = {Tue, 01 Feb 2022 14:59:01 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2201-11115.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
ViktorDo/DistilBERT-WIKI_Growth_Form_Finetuned
|
ViktorDo
| 2022-10-21T13:25:16Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-21T12:41:06Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: DistilBERT-WIKI_Growth_Form_Finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DistilBERT-WIKI_Growth_Form_Finetuned
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2666
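No usage example is given in the card; a minimal inference sketch is shown below, assuming the checkpoint is public under this repository id and reading the class names from `model.config.id2label`.
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hypothetical usage; the repository id is taken from this card's name.
model_id = "ViktorDo/DistilBERT-WIKI_Growth_Form_Finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "A perennial herb forming dense tufts in alpine meadows."  # example input
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
print({model.config.id2label[i]: float(p) for i, p in enumerate(probs)})
```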
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.2454 | 1.0 | 2320 | 0.2530 |
| 0.1875 | 2.0 | 4640 | 0.2578 |
| 0.1386 | 3.0 | 6960 | 0.2666 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
GeniusVoice/bertje-visio-retriever
|
GeniusVoice
| 2022-10-21T12:35:40Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-10-21T12:22:56Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# GeniusVoice/bertje-visio-retriever
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('GeniusVoice/bertje-visio-retriever')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('GeniusVoice/bertje-visio-retriever')
model = AutoModel.from_pretrained('GeniusVoice/bertje-visio-retriever')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=GeniusVoice/bertje-visio-retriever)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 217 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 21,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
reza-aditya/bert-finetuned-squad
|
reza-aditya
| 2022-10-21T12:22:10Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-10-21T09:57:54Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
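A minimal usage sketch with the `question-answering` pipeline, assuming the checkpoint is accessible under this repository id; the question and context below are made-up examples.
```python
from transformers import pipeline

qa = pipeline("question-answering", model="reza-aditya/bert-finetuned-squad")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="The model bert-finetuned-squad was fine-tuned on the SQuAD dataset.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```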
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
cjbarrie/distilbert-base-uncased-finetuned-emotion
|
cjbarrie
| 2022-10-21T11:01:51Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-20T16:28:28Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
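A minimal usage sketch with the `text-classification` pipeline, assuming the checkpoint is accessible under this repository id; the emotion labels come from the fine-tuned head.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="cjbarrie/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't wait to see the results of this experiment!"))
```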
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.13.0
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|