| modelId (string) | author (string) | last_modified (timestamp, UTC) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp, UTC) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
BlinkDL/rwkv-4-pile-14b
|
BlinkDL
| 2023-06-15T21:55:03Z | 0 | 173 | null |
[
"pytorch",
"text-generation",
"causal-lm",
"rwkv",
"en",
"dataset:the_pile",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2022-10-20T11:47:59Z |
---
language:
- en
tags:
- pytorch
- text-generation
- causal-lm
- rwkv
license: apache-2.0
datasets:
- the_pile
---
# RWKV-4 14B
[UPDATE: Try RWKV-4-World (https://huggingface.co/BlinkDL/rwkv-4-world) for generation & chat & code in 100+ world languages, with great English zero-shot & in-context learning ability too.]
## Model Description
RWKV-4 14B is a L40-D5120 causal language model trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details.
args.n_layer = 40
args.n_embd = 5120
Use https://github.com/BlinkDL/ChatRWKV to run it.
RWKV-4-Pile-14B-2023xxxx-ctx8192-testxxx.pth : Fine-tuned to ctx_len 8192.
* The best general model.
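A minimal loading sketch, assuming the `rwkv` pip package that ChatRWKV builds on; the checkpoint path is a placeholder for a downloaded .pth file (given without the suffix, following ChatRWKV conventions), and the tokenizer file is the 20B_tokenizer.json shipped with the ChatRWKV repo:
```python
# Sketch only: the paths are placeholders and the `rwkv` package API is assumed from ChatRWKV.
from rwkv.model import RWKV
from rwkv.utils import PIPELINE

model = RWKV(model="path/to/RWKV-4-Pile-14B-ctx8192-checkpoint", strategy="cuda fp16")
pipeline = PIPELINE(model, "20B_tokenizer.json")  # tokenizer file from the ChatRWKV repo
print(pipeline.generate("The Pile is a large, diverse dataset", token_count=64))
```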
################################
"Raven": RWKV alpaca+vicuna-style model: https://huggingface.co/BlinkDL/rwkv-4-raven (highly recommended)
It is a strong chat model too. You can use +i for "Alpaca Instruct" in the latest ChatRWKV v2. Examples:
```
+i Explain the following metaphor: "Life is like cats".
+i write a python function to read data from an excel file.
```
################################
RWKV-4-Pile-14B-20230213-8019.pth : Trained on the Pile for 331B tokens
* Pile loss 1.7579 (ctx_len 1024)
* LAMBADA ppl 3.81, acc 71.05%
* PIQA acc 77.42%
* SC2016 acc 75.57%
* Hellaswag acc_norm 70.24%
* WinoGrande acc 62.98%
|
jetro30087/vicuna-Wizard-7B-Uncensored-android-q4f16_0
|
jetro30087
| 2023-06-15T21:54:17Z | 0 | 1 | null |
[
"region:us"
] | null | 2023-06-15T20:54:31Z |
# Model Card for vicuna-Wizard-7B-Uncensored-android-q4f16_0
## Model Description
This language model (vicuna-Wizard-7B-Uncensored-android-q4f16_0) is based on Facebook's "Llama" 7B-parameter model, trained on the Wizard-Vicuna uncensored dataset under a non-commercial license. It was specifically developed and formatted for use within the MLC-LLM project, about which more details can be found at the MLC-LLM project URL.
The model is designed for research and general text generation. Thanks to MLC-LLM's Vulkan compatibility, it works on both Nvidia and AMD graphics cards.
## Model Usage
The vicuna-Wizard-7B-Uncensored-q3f16_0 model can generate human-like text for a variety of purposes, including research, chatbots, and writing aids. You can use the model through MLC-LLM chat by copying it to the mlc-chat/dist folder of a compiled MLC-Chat client.
## Limitations and Bias
Although the model is capable of generating high-quality text, it is important to note that it is not perfect. Here are some potential limitations and biases:
- **Output quality:** Although trained on a large dataset, the model may occasionally produce text that is nonsensical or does not align with the input prompt.
- **Biases in the data:** The model has been trained on the Wizard-Vicuna uncensored dataset, and as such, it may have inherited biases present in this data. Despite our best efforts to minimize this, it may reflect biases in terms of gender, race, age, or other aspects.
- **Safety and content:** The uncensored nature of the training dataset means that the model could potentially produce text that some people find offensive, inappropriate, or politically biased. We recommend using this model with care, especially in environments with young users or those who might be affected by such content.
- **Incorrect information:** The model generates text based on patterns it learned during training and does not have access to real-world knowledge or updates beyond its training cut-off. As a result, the information it provides should always be verified for accuracy.
## Ethical Considerations and Safety
While using this model, consider the following:
- Always verify the information provided by the model with reliable external sources before using it to make decisions or for factual reference.
- Monitor the output of the model for any potentially inappropriate or harmful content, especially if it is being used in a public or sensitive setting.
- Keep in mind the potential biases inherited from the training data and account for these when interpreting the output.
## Disclaimer
This model is provided as-is, and the developers make no warranties regarding its performance, appropriateness, or accuracy. Use it at your own risk.
license: other
See the [instructions](https://mlc.ai/mlc-llm/docs/tutorials/runtime/cpp.html) for details.
|
raghvendramall/esm2_t33_650M_UR50D-crystallization-finetuned-localization
|
raghvendramall
| 2023-06-15T21:49:02Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"esm",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-15T16:05:55Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: esm2_t33_650M_UR50D-crystallization-finetuned-localization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# esm2_t33_650M_UR50D-crystallization-finetuned-localization
This model is a fine-tuned version of [facebook/esm2_t33_650M_UR50D](https://huggingface.co/facebook/esm2_t33_650M_UR50D) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3861
- F1: 0.6470
## Model description
More information needed
## Intended uses & limitations
More information needed
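For illustration, a minimal inference sketch with the standard transformers text-classification pipeline; the label names it returns depend on the fine-tuning configuration and are not documented in this card.
```python
# Minimal sketch: classify a protein sequence with the fine-tuned ESM-2 checkpoint.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="raghvendramall/esm2_t33_650M_UR50D-crystallization-finetuned-localization",
)
print(classifier("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"))  # example protein sequence
```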
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.422 | 1.0 | 2129 | 0.4366 | 0.6756 |
| 0.2425 | 2.0 | 4258 | 0.6942 | 0.6487 |
| 0.0993 | 3.0 | 6387 | 1.0293 | 0.6518 |
| 0.0535 | 4.0 | 8516 | 1.1326 | 0.6286 |
| 0.0422 | 5.0 | 10645 | 1.1957 | 0.6240 |
| 0.0268 | 6.0 | 12774 | 1.1728 | 0.6468 |
| 0.004 | 7.0 | 14903 | 1.3099 | 0.6563 |
| 0.0001 | 8.0 | 17032 | 1.3316 | 0.6489 |
| 0.0035 | 9.0 | 19161 | 1.3720 | 0.6484 |
| 0.0019 | 10.0 | 21290 | 1.3861 | 0.6470 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
GyanShashwat/distilbert-base-uncased-finetuned-squad-with-customised-input
|
GyanShashwat
| 2023-06-15T21:44:45Z | 62 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-15T19:49:09Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: GyanShashwat/distilbert-base-uncased-finetuned-squad-with-customised-input
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# GyanShashwat/distilbert-base-uncased-finetuned-squad-with-customised-input
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.9722
- Train End Logits Accuracy: 0.7309
- Train Start Logits Accuracy: 0.6905
- Validation Loss: 1.1232
- Validation End Logits Accuracy: 0.6943
- Validation Start Logits Accuracy: 0.6607
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
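For illustration, a minimal sketch using the transformers question-answering pipeline; `framework="tf"` is an assumption based on the checkpoint's TensorFlow tags.
```python
# Minimal sketch: extractive QA with the fine-tuned DistilBERT checkpoint.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="GyanShashwat/distilbert-base-uncased-finetuned-squad-with-customised-input",
    framework="tf",
)
print(qa(question="Where is the Eiffel Tower located?",
         context="The Eiffel Tower is located in Paris, France."))
```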
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 11066, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.5056 | 0.6077 | 0.5696 | 1.1629 | 0.6844 | 0.6471 | 0 |
| 0.9722 | 0.7309 | 0.6905 | 1.1232 | 0.6943 | 0.6607 | 1 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.0
- Tokenizers 0.13.3
|
gokuls/hBERTv1_new_pretrain_48_KD_w_init_mrpc
|
gokuls
| 2023-06-15T21:42:30Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-15T21:34:41Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: hBERTv1_new_pretrain_48_KD_w_init_mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.7156862745098039
- name: F1
type: f1
value: 0.8104575163398692
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv1_new_pretrain_48_KD_w_init_mrpc
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_48_KD_wt_init](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_48_KD_wt_init) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5878
- Accuracy: 0.7157
- F1: 0.8105
- Combined Score: 0.7631
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.6514 | 1.0 | 29 | 0.6205 | 0.6887 | 0.8146 | 0.7517 |
| 0.619 | 2.0 | 58 | 0.6165 | 0.6618 | 0.7366 | 0.6992 |
| 0.6208 | 3.0 | 87 | 0.5878 | 0.7157 | 0.8105 | 0.7631 |
| 0.578 | 4.0 | 116 | 0.5952 | 0.7132 | 0.7986 | 0.7559 |
| 0.5612 | 5.0 | 145 | 0.5910 | 0.6936 | 0.7899 | 0.7418 |
| 0.4844 | 6.0 | 174 | 0.6261 | 0.6520 | 0.7290 | 0.6905 |
| 0.4281 | 7.0 | 203 | 0.6146 | 0.7010 | 0.7932 | 0.7471 |
| 0.3919 | 8.0 | 232 | 0.7273 | 0.6838 | 0.7795 | 0.7317 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.0
- Tokenizers 0.13.3
|
gokuls/hBERTv2_new_pretrain_48_KD_w_init_mrpc
|
gokuls
| 2023-06-15T21:39:57Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-15T21:33:33Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: hBERTv2_new_pretrain_48_KD_w_init_mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.6838235294117647
- name: F1
type: f1
value: 0.8122270742358079
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv2_new_pretrain_48_KD_w_init_mrpc
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new_48_KD_wt_init](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new_48_KD_wt_init) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6240
- Accuracy: 0.6838
- F1: 0.8122
- Combined Score: 0.7480
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.6725 | 1.0 | 29 | 0.6240 | 0.6838 | 0.8122 | 0.7480 |
| 0.6382 | 2.0 | 58 | 0.6274 | 0.6838 | 0.8122 | 0.7480 |
| 0.6384 | 3.0 | 87 | 0.6279 | 0.6838 | 0.8122 | 0.7480 |
| 0.6437 | 4.0 | 116 | 0.6346 | 0.6838 | 0.8122 | 0.7480 |
| 0.6386 | 5.0 | 145 | 0.6242 | 0.6838 | 0.8122 | 0.7480 |
| 0.6364 | 6.0 | 174 | 0.6273 | 0.6838 | 0.8122 | 0.7480 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.0
- Tokenizers 0.13.3
|
gokuls/hBERTv1_new_pretrain_48_KD_w_init_cola
|
gokuls
| 2023-06-15T21:34:19Z | 47 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-15T21:23:11Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
- accuracy
model-index:
- name: hBERTv1_new_pretrain_48_KD_w_init_cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0
- name: Accuracy
type: accuracy
value: 0.6912751793861389
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv1_new_pretrain_48_KD_w_init_cola
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_48_KD_wt_init](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_48_KD_wt_init) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6182
- Matthews Correlation: 0.0
- Accuracy: 0.6913
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|:--------:|
| 0.6338 | 1.0 | 67 | 0.6182 | 0.0 | 0.6913 |
| 0.6194 | 2.0 | 134 | 0.6405 | 0.0 | 0.6913 |
| 0.6131 | 3.0 | 201 | 0.6188 | 0.0 | 0.6913 |
| 0.6128 | 4.0 | 268 | 0.6199 | 0.0 | 0.6913 |
| 0.6281 | 5.0 | 335 | 0.6197 | 0.0 | 0.6913 |
| 0.6146 | 6.0 | 402 | 0.6196 | 0.0 | 0.6913 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.0
- Tokenizers 0.13.3
|
gokuls/sa_BERT_48_mrpc
|
gokuls
| 2023-06-15T21:22:25Z | 131 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-15T21:15:59Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: sa_BERT_48_mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.6519607843137255
- name: F1
type: f1
value: 0.726923076923077
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sa_BERT_48_mrpc
This model is a fine-tuned version of [gokuls/bert_base_48](https://huggingface.co/gokuls/bert_base_48) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6401
- Accuracy: 0.6520
- F1: 0.7269
- Combined Score: 0.6894
## Model description
More information needed
## Intended uses & limitations
More information needed
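For illustration, a minimal sketch for MRPC-style paraphrase classification; the meaning of the returned labels (LABEL_0/LABEL_1) is an assumption, since the card does not document a label mapping.
```python
# Minimal sketch: score a sentence pair with the fine-tuned checkpoint.
from transformers import pipeline

classifier = pipeline("text-classification", model="gokuls/sa_BERT_48_mrpc")
print(classifier({"text": "The company posted strong quarterly earnings.",
                  "text_pair": "Quarterly earnings at the company were strong."}))
```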
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.6588 | 1.0 | 39 | 0.6401 | 0.6520 | 0.7269 | 0.6894 |
| 0.5982 | 2.0 | 78 | 0.6441 | 0.6863 | 0.7801 | 0.7332 |
| 0.4614 | 3.0 | 117 | 0.6615 | 0.6740 | 0.7787 | 0.7264 |
| 0.3148 | 4.0 | 156 | 0.7447 | 0.6765 | 0.7770 | 0.7267 |
| 0.226 | 5.0 | 195 | 0.9718 | 0.6054 | 0.6957 | 0.6505 |
| 0.1566 | 6.0 | 234 | 1.2879 | 0.5564 | 0.6268 | 0.5916 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.0
- Tokenizers 0.13.3
|
gokuls/hBERTv2_new_pretrain_48_KD_w_init_sst2
|
gokuls
| 2023-06-15T21:20:01Z | 46 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-15T20:31:47Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: hBERTv2_new_pretrain_48_KD_w_init_sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE SST2
type: glue
config: sst2
split: validation
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.8394495412844036
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv2_new_pretrain_48_KD_w_init_sst2
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new_48_KD_wt_init](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new_48_KD_wt_init) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4188
- Accuracy: 0.8394
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3594 | 1.0 | 527 | 0.4188 | 0.8394 |
| 0.2344 | 2.0 | 1054 | 0.5086 | 0.8337 |
| 0.2012 | 3.0 | 1581 | 0.5127 | 0.8177 |
| 0.1723 | 4.0 | 2108 | 0.4814 | 0.8200 |
| 0.1425 | 5.0 | 2635 | 0.4872 | 0.8314 |
| 0.12 | 6.0 | 3162 | 0.5835 | 0.8222 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.0
- Tokenizers 0.13.3
|
gokuls/add_BERT_48_mrpc
|
gokuls
| 2023-06-15T21:17:47Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-15T21:11:12Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: add_BERT_48_mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.6470588235294118
- name: F1
type: f1
value: 0.735294117647059
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# add_BERT_48_mrpc
This model is a fine-tuned version of [gokuls/add_bert_12_layer_model_complete_training_new_48](https://huggingface.co/gokuls/add_bert_12_layer_model_complete_training_new_48) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5979
- Accuracy: 0.6471
- F1: 0.7353
- Combined Score: 0.6912
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.6617 | 1.0 | 29 | 0.6153 | 0.6838 | 0.7975 | 0.7407 |
| 0.628 | 2.0 | 58 | 0.5979 | 0.6471 | 0.7353 | 0.6912 |
| 0.5741 | 3.0 | 87 | 0.6442 | 0.6985 | 0.8189 | 0.7587 |
| 0.5094 | 4.0 | 116 | 0.6365 | 0.6912 | 0.7850 | 0.7381 |
| 0.4123 | 5.0 | 145 | 0.7135 | 0.6740 | 0.7577 | 0.7159 |
| 0.2939 | 6.0 | 174 | 0.8433 | 0.6740 | 0.7734 | 0.7237 |
| 0.2194 | 7.0 | 203 | 1.1034 | 0.6471 | 0.7429 | 0.6950 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.0
- Tokenizers 0.13.3
|
gokuls/add_BERT_24_mrpc
|
gokuls
| 2023-06-15T21:16:44Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-15T21:10:59Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: add_BERT_24_mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.7009803921568627
- name: F1
type: f1
value: 0.8134556574923548
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# add_BERT_24_mrpc
This model is a fine-tuned version of [gokuls/add_bert_12_layer_model_complete_training_new](https://huggingface.co/gokuls/add_bert_12_layer_model_complete_training_new) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5847
- Accuracy: 0.7010
- F1: 0.8135
- Combined Score: 0.7572
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.6554 | 1.0 | 29 | 0.5847 | 0.7010 | 0.8135 | 0.7572 |
| 0.6027 | 2.0 | 58 | 0.5925 | 0.6985 | 0.8150 | 0.7568 |
| 0.5423 | 3.0 | 87 | 0.6010 | 0.6887 | 0.8049 | 0.7468 |
| 0.4401 | 4.0 | 116 | 0.6617 | 0.6961 | 0.8050 | 0.7506 |
| 0.2731 | 5.0 | 145 | 0.9531 | 0.6348 | 0.7151 | 0.6750 |
| 0.16 | 6.0 | 174 | 1.0283 | 0.6985 | 0.8045 | 0.7515 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.0
- Tokenizers 0.13.3
|
gokuls/sa_BERT_24_mrpc
|
gokuls
| 2023-06-15T21:15:29Z | 131 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-15T21:09:41Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: sa_BERT_24_mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.7083333333333334
- name: F1
type: f1
value: 0.8199697428139183
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sa_BERT_24_mrpc
This model is a fine-tuned version of [gokuls/bert_base_24](https://huggingface.co/gokuls/bert_base_24) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6042
- Accuracy: 0.7083
- F1: 0.8200
- Combined Score: 0.7642
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.6437 | 1.0 | 39 | 0.6042 | 0.7083 | 0.8200 | 0.7642 |
| 0.5784 | 2.0 | 78 | 0.6224 | 0.6544 | 0.7403 | 0.6974 |
| 0.4657 | 3.0 | 117 | 0.7196 | 0.6740 | 0.7816 | 0.7278 |
| 0.3555 | 4.0 | 156 | 0.8929 | 0.6348 | 0.7418 | 0.6883 |
| 0.2516 | 5.0 | 195 | 1.0482 | 0.6078 | 0.6992 | 0.6535 |
| 0.1654 | 6.0 | 234 | 1.3865 | 0.5515 | 0.6131 | 0.5823 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.0
- Tokenizers 0.13.3
|
crlandsc/tiny-audio-diffusion-kicks
|
crlandsc
| 2023-06-15T21:13:22Z | 3 | 1 | null |
[
"audio",
"diffusion",
"waveform diffusion",
"audio diffusion",
"unet",
"region:us"
] | null | 2023-06-07T16:31:09Z |
---
tags:
- audio
- diffusion
- waveform diffusion
- audio diffusion
- unet
---
# Model Card for tiny-audio-diffusion-kicks
Kick drum model for tiny-audio-diffusion. Use it with the [tiny-audio-diffusion](https://github.com/crlandsc/tiny-audio-diffusion) repo to generate kick drum samples.
|
radyad/valrad_qa_model
|
radyad
| 2023-06-15T21:10:47Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:mlqa",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-15T20:46:03Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- mlqa
model-index:
- name: valrad_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# valrad_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the mlqa dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8117
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 355 | 2.3752 |
| 3.1802 | 2.0 | 710 | 1.8748 |
| 1.6816 | 3.0 | 1065 | 1.8117 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
gokuls/sa_BERT_24_cola
|
gokuls
| 2023-06-15T21:09:25Z | 121 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-15T20:59:44Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
- accuracy
model-index:
- name: sa_BERT_24_cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0
- name: Accuracy
type: accuracy
value: 0.6912751793861389
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sa_BERT_24_cola
This model is a fine-tuned version of [gokuls/bert_base_24](https://huggingface.co/gokuls/bert_base_24) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6120
- Matthews Correlation: 0.0
- Accuracy: 0.6913
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|:--------:|
| 0.6138 | 1.0 | 90 | 0.6120 | 0.0 | 0.6913 |
| 0.5898 | 2.0 | 180 | 0.6242 | 0.0656 | 0.6932 |
| 0.5491 | 3.0 | 270 | 0.6798 | 0.0733 | 0.6405 |
| 0.5027 | 4.0 | 360 | 0.6873 | 0.0667 | 0.6328 |
| 0.4549 | 5.0 | 450 | 0.7841 | 0.1025 | 0.6299 |
| 0.4177 | 6.0 | 540 | 0.8221 | 0.0827 | 0.5849 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.0
- Tokenizers 0.13.3
|
VinayNR/stats-nerd
|
VinayNR
| 2023-06-15T20:48:30Z | 1 | 1 |
flair
|
[
"flair",
"statistics",
"token-classification",
"en",
"dataset:conll2003",
"region:us"
] |
token-classification
| 2023-04-20T17:36:47Z |
---
language:
- en
library_name: flair
pipeline_tag: token-classification
tags:
- statistics
datasets:
- conll2003
---
## Overview
This model identifies statistical named entities in large bodies of text. Statistical named entities indicate the presence of a statistical claim (such as the hypothesis of an experiment) along with the type of test and the confidence value.
Use this model in your repo to scan a text document for claims, test statistics, and probability scores. The model is built from the ground up with Flair NLP to provide a statistics NER tagger for researchers.
## Usage
```python
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load("VinayNR/stats-ner")

sentence = Sentence("<your_string>", use_tokenizer=True)
tagger.predict(sentence)
print(sentence.to_tagged_string())  # view the tagged statistical entities
```
|
gokuls/bert_base_96
|
gokuls
| 2023-06-15T20:41:03Z | 141 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-13T18:06:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert_base_96
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_base_96
This model is a fine-tuned version of [gokuls/bert_base_48](https://huggingface.co/gokuls/bert_base_48) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6333
- Accuracy: 0.5281
## Model description
More information needed
## Intended uses & limitations
More information needed
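For illustration, a minimal fill-mask sketch; the `[MASK]` token is assumed to follow standard BERT conventions, which the checkpoint's tags suggest.
```python
# Minimal sketch: query the masked-language-modeling head.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="gokuls/bert_base_96")
print(unmasker("The capital of France is [MASK]."))
```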
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 5.6041 | 0.08 | 10000 | 5.5567 | 0.1751 |
| 5.4727 | 0.16 | 20000 | 5.3950 | 0.1953 |
| 5.3385 | 0.25 | 30000 | 5.2277 | 0.2151 |
| 5.2033 | 0.33 | 40000 | 5.0607 | 0.2335 |
| 4.7807 | 0.41 | 50000 | 4.5611 | 0.2910 |
| 4.1994 | 0.49 | 60000 | 4.0039 | 0.3520 |
| 3.8039 | 0.57 | 70000 | 3.6509 | 0.3906 |
| 3.5516 | 0.66 | 80000 | 3.3794 | 0.4263 |
| 3.3199 | 0.74 | 90000 | 3.1446 | 0.4607 |
| 3.1682 | 0.82 | 100000 | 3.0053 | 0.4795 |
| 3.0597 | 0.9 | 110000 | 2.9135 | 0.4919 |
| 2.9814 | 0.98 | 120000 | 2.8331 | 0.5018 |
| 2.907 | 1.07 | 130000 | 2.7724 | 0.5100 |
| 2.8532 | 1.15 | 140000 | 2.7200 | 0.5170 |
| 2.8044 | 1.23 | 150000 | 2.6759 | 0.5227 |
| 2.7694 | 1.31 | 160000 | 2.6333 | 0.5281 |
### Framework versions
- Transformers 4.30.1
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hangeol/32
|
hangeol
| 2023-06-15T20:32:53Z | 30 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-15T19:44:37Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - hangeol/32
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images in the following.
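For illustration, a minimal diffusers sketch; the learned placeholder token is not documented here, so `<token>` below must be replaced with the token used during training.
```python
# Minimal sketch: load the base model, attach the textual inversion embedding, and generate.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("hangeol/32")
image = pipe("a photo of <token>").images[0]  # replace <token> with the learned placeholder token
image.save("sample.png")
```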
|
ontel/marfamoelalora
|
ontel
| 2023-06-15T20:24:02Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-15T20:22:37Z |
---
license: creativeml-openrail-m
---
|
jvelcin/distilbert-base-uncased-finetuned-netflix
|
jvelcin
| 2023-06-15T20:20:35Z | 86 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-15T20:17:10Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilbert-base-uncased-finetuned-netflix
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-netflix
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.9835
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -708, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 2.9835 | 0 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Olegiy/ppo-Huggy
|
Olegiy
| 2023-06-15T20:07:58Z | 11 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-15T20:07:55Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Olegiy/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
davidmunechika/coreml-openjourney-v4
|
davidmunechika
| 2023-06-15T20:05:30Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-15T16:38:58Z |
---
license: creativeml-openrail-m
---
|
davidmunechika/coreml-dreamlike-diffusion-1.0
|
davidmunechika
| 2023-06-15T20:02:25Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-14T22:31:51Z |
---
license: creativeml-openrail-m
---
|
gfalcao/ldsc2-0t7
|
gfalcao
| 2023-06-15T19:42:12Z | 37 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-15T19:30:34Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### ldsc2.0T7 Dreambooth model trained by gfalcao with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
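For illustration, a minimal diffusers sketch; the exact instance prompt is not documented, so the concept name above ("ldsc2.0T7") is used as a guess.
```python
# Minimal sketch: load the Dreambooth checkpoint and generate an image.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "gfalcao/ldsc2-0t7", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of ldsc2.0T7").images[0]
image.save("sample.png")
```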
Sample pictures of this concept:
|
hangeol/4
|
hangeol
| 2023-06-15T19:33:33Z | 7 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-14T19:59:17Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - hangeol/4
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images in the following.
|
gokuls/hBERTv2_new_pretrain_48_emb_com_wnli
|
gokuls
| 2023-06-15T19:22:10Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-15T19:16:01Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: hBERTv2_new_pretrain_48_emb_com_wnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE WNLI
type: glue
config: wnli
split: validation
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.5633802816901409
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv2_new_pretrain_48_emb_com_wnli
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new_emb_compress_48](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new_emb_compress_48) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6868
- Accuracy: 0.5634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9415 | 1.0 | 5 | 0.7306 | 0.4366 |
| 0.7146 | 2.0 | 10 | 0.7870 | 0.4366 |
| 0.7207 | 3.0 | 15 | 0.7136 | 0.4225 |
| 0.6988 | 4.0 | 20 | 0.7277 | 0.4366 |
| 0.7058 | 5.0 | 25 | 0.7434 | 0.4366 |
| 0.7171 | 6.0 | 30 | 0.6963 | 0.4366 |
| 0.7007 | 7.0 | 35 | 0.6897 | 0.5634 |
| 0.7085 | 8.0 | 40 | 0.6900 | 0.5634 |
| 0.7282 | 9.0 | 45 | 0.6929 | 0.5634 |
| 0.695 | 10.0 | 50 | 0.6970 | 0.4366 |
| 0.6939 | 11.0 | 55 | 0.6868 | 0.5634 |
| 0.6955 | 12.0 | 60 | 0.6904 | 0.5634 |
| 0.6934 | 13.0 | 65 | 0.7015 | 0.4366 |
| 0.6974 | 14.0 | 70 | 0.6964 | 0.4366 |
| 0.695 | 15.0 | 75 | 0.6904 | 0.5634 |
| 0.7003 | 16.0 | 80 | 0.6981 | 0.4366 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
gokuls/hBERTv2_new_pretrain_48_emb_com_stsb
|
gokuls
| 2023-06-15T19:15:45Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-15T18:55:14Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- spearmanr
model-index:
- name: hBERTv2_new_pretrain_48_emb_com_stsb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE STSB
type: glue
config: stsb
split: validation
args: stsb
metrics:
- name: Spearmanr
type: spearmanr
value: 0.30729552140330846
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv2_new_pretrain_48_emb_com_stsb
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new_emb_compress_48](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new_emb_compress_48) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0889
- Pearson: 0.3123
- Spearmanr: 0.3073
- Combined Score: 0.3098
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:|
| 2.398 | 1.0 | 45 | 3.0621 | 0.0972 | 0.1007 | 0.0990 |
| 2.0392 | 2.0 | 90 | 2.3674 | 0.1058 | 0.1011 | 0.1034 |
| 1.967 | 3.0 | 135 | 2.2296 | 0.1449 | 0.1432 | 0.1441 |
| 1.8176 | 4.0 | 180 | 2.6036 | 0.2055 | 0.2169 | 0.2112 |
| 1.6744 | 5.0 | 225 | 2.2119 | 0.2516 | 0.2534 | 0.2525 |
| 1.4727 | 6.0 | 270 | 2.0889 | 0.3123 | 0.3073 | 0.3098 |
| 1.1852 | 7.0 | 315 | 2.6372 | 0.3609 | 0.3543 | 0.3576 |
| 0.9895 | 8.0 | 360 | 2.5881 | 0.3312 | 0.3322 | 0.3317 |
| 0.8254 | 9.0 | 405 | 2.1746 | 0.3991 | 0.3974 | 0.3983 |
| 0.6759 | 10.0 | 450 | 2.7671 | 0.3693 | 0.3663 | 0.3678 |
| 0.558 | 11.0 | 495 | 2.5954 | 0.3967 | 0.3942 | 0.3955 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
DesiAEye/Madhubala
|
DesiAEye
| 2023-06-15T19:07:58Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-15T19:03:16Z |
---
license: creativeml-openrail-m
---
Support on Patreon: https://www.patreon.com/DesiAEye
Join Discord: https://discord.gg/TGWvDGVt
Introducing Madhubala, a remarkable LoRA model trained on the face of the iconic Indian actress, Madhubala. This extraordinary model is designed to generate stunning photorealistic and semirealistic images of the legendary celebrity. With the trigger word "Madhubala woman", witness the artistry of this AI-powered creation.
Celebrate the beauty and charisma of Madhubala, the epitome of Indian cinema, through the intricate details and lifelike expressions captured by this exceptional model. Whether you're a fan of classic Indian cinema or appreciate the elegance of a talented actress, Madhubala will captivate your imagination.
Embrace the essence of this talented Indian woman and indulge in the artistry of Madhubala. Explore the magic of photorealism and unlock a world of creativity and inspiration with this extraordinary LoRA model.
|
asapp/sew-d-tiny-100k-ft-ls100h
|
asapp
| 2023-06-15T19:07:05Z | 98,517 | 2 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"sew-d",
"automatic-speech-recognition",
"audio",
"speech",
"hf-asr-leaderboard",
"en",
"dataset:librispeech_asr",
"arxiv:2109.06870",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: en
datasets:
- librispeech_asr
tags:
- audio
- speech
- automatic-speech-recognition
- hf-asr-leaderboard
license: apache-2.0
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: sew-d-tiny-100k-ft-ls100h
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 10.47
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (other)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 22.73
---
# SEW-D-tiny
[SEW-D by ASAPP Research](https://github.com/asappresearch/sew)
The base model pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc.
Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870)
Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi
**Abstract**
This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.
The original model can be found under https://github.com/asappresearch/sew#model-checkpoints .
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, SEWDForCTC
from datasets import load_dataset
import soundfile as sf
import torch
# load the model and preprocessor
processor = Wav2Vec2Processor.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h")
model = SEWDForCTC.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h")
# load the dummy dataset with speech samples
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
# preprocess
input_values = processor(ds[0]["audio"]["array"], return_tensors="pt").input_values # Batch size 1
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
## Evaluation
This code snippet shows how to evaluate **asapp/sew-d-tiny-100k-ft-ls100h** on LibriSpeech's "clean" and "other" test data.
```python
from datasets import load_dataset
from transformers import SEWDForCTC, Wav2Vec2Processor
import torch
from jiwer import wer
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")
model = SEWDForCTC.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h")
def map_to_pred(batch):
    input_values = processor(batch["audio"][0]["array"], sampling_rate=16000,
                             return_tensors="pt", padding="longest").input_values
    with torch.no_grad():
        logits = model(input_values.to("cuda")).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    transcription = processor.batch_decode(predicted_ids)
    batch["transcription"] = transcription
    return batch

result = librispeech_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["audio"])
print("WER:", wer(result["text"], result["transcription"]))
```
*Result (WER)*:
| "clean" | "other" |
| --- | --- |
| 10.47 | 22.73 |
|
gokuls/hBERTv2_new_pretrain_48_emb_com_rte
|
gokuls
| 2023-06-15T18:54:55Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-15T18:48:00Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: hBERTv2_new_pretrain_48_emb_com_rte
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE RTE
type: glue
config: rte
split: validation
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.5270758122743683
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv2_new_pretrain_48_emb_com_rte
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new_emb_compress_48](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new_emb_compress_48) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6929
- Accuracy: 0.5271
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7486 | 1.0 | 20 | 0.6929 | 0.5271 |
| 0.71 | 2.0 | 40 | 0.6940 | 0.4765 |
| 0.7079 | 3.0 | 60 | 0.7058 | 0.4765 |
| 0.6988 | 4.0 | 80 | 0.7413 | 0.5307 |
| 0.68 | 5.0 | 100 | 0.7054 | 0.5054 |
| 0.6481 | 6.0 | 120 | 0.7751 | 0.5090 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
lrthomps/poca-SoccerTwos
|
lrthomps
| 2023-06-15T18:54:20Z | 18 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-06-15T18:53:59Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the official ML-Agents environments, go to https://huggingface.co/unity
2. Find your model_id: lrthomps/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
bsmock/tatr-pubtables1m-v1.0
|
bsmock
| 2023-06-15T18:44:41Z | 0 | 12 | null |
[
"table detection",
"table structure recognition",
"table extraction",
"dataset:bsmock/pubtables-1m",
"license:mit",
"region:us"
] | null | 2023-06-02T16:09:54Z |
---
license: mit
datasets:
- bsmock/pubtables-1m
tags:
- table detection
- table structure recognition
- table extraction
---
# Model Card for tatr-pubtables1m-v1.0
This repo contains the models for:
1) Table detection,
2) Table structure recognition,
trained on the PubTables-1M dataset, using the training details in the paper: ["PubTables-1M: Towards comprehensive table extraction from unstructured documents"](https://openaccess.thecvf.com/content/CVPR2022/html/Smock_PubTables-1M_Towards_Comprehensive_Table_Extraction_From_Unstructured_Documents_CVPR_2022_paper.html)
## Model Details
### Model Description
- **Developed by:** Brandon Smock and Rohith Pesala, while at Microsoft
- **License:** MIT
- **Finetuned from model:** DETR ResNet-18
### Model Sources
Please see the following for more details:
- **Repository:** ["https://github.com/microsoft/table-transformer"](https://github.com/microsoft/table-transformer)
- **Paper:** ["PubTables-1M: Towards comprehensive table extraction from unstructured documents"](https://openaccess.thecvf.com/content/CVPR2022/html/Smock_PubTables-1M_Towards_Comprehensive_Table_Extraction_From_Unstructured_Documents_CVPR_2022_paper.html)
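These checkpoints are meant to be loaded by the training/inference scripts in the table-transformer repository linked above. A minimal download sketch with `huggingface_hub` follows (the checkpoint filename below is a placeholder; replace it with an actual filename listed under this repo's files):
```python
from huggingface_hub import hf_hub_download

# Download one of the checkpoints hosted in this repo.
# NOTE: the filename is a placeholder; check the repo's file list for the real name.
detection_ckpt = hf_hub_download(
    repo_id="bsmock/tatr-pubtables1m-v1.0",
    filename="pubtables1m_detection_detr_r18.pth",  # placeholder filename
)

# The downloaded .pth file can then be passed to the detection / structure
# recognition scripts in https://github.com/microsoft/table-transformer.
print(detection_ckpt)
```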
|
hangeol/5
|
hangeol
| 2023-06-15T18:31:13Z | 8 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-14T19:09:01Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - hangeol/5
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
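A minimal sketch of loading these weights with diffusers; the `<concept>` placeholder token in the prompt is an assumption, so check the repo's embedding files for the actual token string:
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model the embedding was trained against.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the textual inversion embedding from this repo.
pipe.load_textual_inversion("hangeol/5")

# Use the learned placeholder token in the prompt (assumed name below).
image = pipe("a photo of <concept>", num_inference_steps=30).images[0]
image.save("example.png")
```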
|
gfalcao/ldsct7
|
gfalcao
| 2023-06-15T18:24:25Z | 30 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-15T18:12:50Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### ldscT7 Dreambooth model trained by gfalcao with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
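Alternatively, a minimal diffusers sketch for sampling from this checkpoint; the `ldscT7` keyword in the prompt is an assumption based on the concept name:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "gfalcao/ldsct7", torch_dtype=torch.float16
).to("cuda")

# The concept keyword below is assumed from the model name; adjust if needed.
image = pipe("a photo of ldscT7", num_inference_steps=30).images[0]
image.save("ldsct7-sample.png")
```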
Sample pictures of this concept:
|
gokuls/add_bert_12_layer_model_complete_training_new_96
|
gokuls
| 2023-06-15T18:23:31Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-13T17:57:05Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: add_bert_12_layer_model_complete_training_new_96
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# add_bert_12_layer_model_complete_training_new_96
This model is a fine-tuned version of [gokuls/add_bert_12_layer_model_complete_training_new_48](https://huggingface.co/gokuls/add_bert_12_layer_model_complete_training_new_48) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.4112
- Accuracy: 0.1893
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 5.8144 | 0.08 | 10000 | 5.7474 | 0.1593 |
| 5.7889 | 0.16 | 20000 | 5.7204 | 0.1604 |
| 5.6347 | 0.25 | 30000 | 5.6966 | 0.1623 |
| 5.7138 | 0.33 | 40000 | 5.6725 | 0.1636 |
| 5.6769 | 0.41 | 50000 | 5.6518 | 0.1658 |
| 5.6603 | 0.49 | 60000 | 5.6290 | 0.1686 |
| 5.5852 | 0.57 | 70000 | 5.6076 | 0.1707 |
| 5.6607 | 0.66 | 80000 | 5.5906 | 0.1720 |
| 5.5823 | 0.74 | 90000 | 5.5719 | 0.1739 |
| 5.6124 | 0.82 | 100000 | 5.5543 | 0.1759 |
| 5.6478 | 0.9 | 110000 | 5.5358 | 0.1776 |
| 5.4795 | 0.98 | 120000 | 5.5203 | 0.1787 |
| 5.4557 | 1.07 | 130000 | 5.5028 | 0.1804 |
| 5.5585 | 1.15 | 140000 | 5.4923 | 0.1814 |
| 5.6387 | 1.23 | 150000 | 5.4781 | 0.1825 |
| 5.479 | 1.31 | 160000 | 5.4663 | 0.1833 |
| 5.3951 | 1.39 | 170000 | 5.4512 | 0.1851 |
| 5.5062 | 1.47 | 180000 | 5.4411 | 0.1864 |
| 5.4553 | 1.56 | 190000 | 5.4244 | 0.1881 |
| 5.5461 | 1.64 | 200000 | 5.4112 | 0.1893 |
### Framework versions
- Transformers 4.30.1
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
panpannn/pitri2
|
panpannn
| 2023-06-15T18:15:23Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-15T18:09:39Z |
---
license: creativeml-openrail-m
---
|
law-ai/CustomInLawBERT
|
law-ai
| 2023-06-15T18:03:18Z | 119 | 3 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"legal",
"en",
"arxiv:2209.06049",
"arxiv:2112.14731",
"arxiv:1911.05405",
"arxiv:2105.13562",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-05-05T06:53:03Z |
---
language: en
pipeline_tag: fill-mask
tags:
- legal
license: mit
---
### InLegalBERT
Model and tokenizer files for the InLegalBERT model from the paper [Pre-training Transformers on Indian Legal Text](https://arxiv.org/abs/2209.06049).
### Training Data
For building the pre-training corpus of Indian legal text, we collected a large corpus of case documents from the Indian Supreme Court and many High Courts of India.
The court cases in our dataset range from 1950 to 2019, and belong to all legal domains, such as Civil, Criminal, Constitutional, and so on.
In total, our dataset contains around 5.4 million Indian legal documents (all in the English language).
The raw text corpus size is around 27 GB.
### Training Setup
This model is initialized with the [LEGAL-BERT-SC model](https://huggingface.co/nlpaueb/legal-bert-base-uncased) from the paper [LEGAL-BERT: The Muppets straight out of Law School](https://aclanthology.org/2020.findings-emnlp.261/). In our work, we refer to this model as LegalBERT, and our re-trained model as InLegalBERT.
We further train this model on our data for 300K steps on the Masked Language Modeling (MLM) and Next Sentence Prediction (NSP) tasks.
### Model Overview
This model uses a custom tokenizer with vocabulary adapted for the Indian Legal domain.
This model has the same configuration as the [bert-base-uncased model](https://huggingface.co/bert-base-uncased):
12 hidden layers, 768 hidden dimensionality, 12 attention heads, ~110M parameters.
### Usage
Using the model to get embeddings/representations for a piece of text
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("law-ai/CustomInLawBERT")
text = "Replace this string with yours"
encoded_input = tokenizer(text, return_tensors="pt")
model = AutoModel.from_pretrained("law-ai/CustomInLawBERT")
output = model(**encoded_input)
last_hidden_state = output.last_hidden_state
```
### Fine-tuning Results
We have fine-tuned all pre-trained models on 3 legal tasks with Indian datasets:
* Legal Statute Identification ([ILSI Dataset](https://arxiv.org/abs/2112.14731))[Multi-label Text Classification]: Identifying relevant statutes (law articles) based on the facts of a court case
* Semantic Segmentation ([ISS Dataset](https://arxiv.org/abs/1911.05405))[Sentence Tagging]: Segmenting the document into 7 functional parts (semantic segments) such as Facts, Arguments, etc.
* Court Judgment Prediction ([ILDC Dataset](https://arxiv.org/abs/2105.13562))[Binary Text Classification]: Predicting whether the claims/petitions of a court case will be accepted/rejected
### Citation
```
@inproceedings{paul-2022-pretraining,
url = {https://arxiv.org/abs/2209.06049},
author = {Paul, Shounak and Mandal, Arpan and Goyal, Pawan and Ghosh, Saptarshi},
title = {Pre-trained Language Models for the Legal Domain: A Case Study on Indian Law},
  booktitle = {Proceedings of 19th International Conference on Artificial Intelligence and Law - ICAIL 2023},
year = {2023},
}
```
### About Us
We are a group of researchers from the Department of Computer Science and Technology, Indian Institute of Technology, Kharagpur.
Our research interests are primarily ML and NLP applications for the legal domain, with a special focus on the challenges and opportunities of the Indian legal scenario.
We have worked on, and are currently working on, several legal tasks such as:
* named entity recognition, summarization of legal documents
* semantic segmentation of legal documents
* legal statute identification from facts, court judgment prediction
* legal document matching
You can find our publicly available codes and datasets [here](https://github.com/Law-AI).
|
MichelNivard/hexcoder
|
MichelNivard
| 2023-06-15T17:58:55Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"custom_code",
"dataset:bigcode/the-stack",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-15T08:10:59Z |
---
datasets:
- bigcode/the-stack
---
# hexcoder

This model trains the base [santacoder model](https://huggingface.co/bigcode/santacoder) on all R and R Markdown code in "the stack", for 6 epochs on 512-token snippets. While there isn't that much R code in the stack (far less than Python or Java), this should at least give the model some R skills.
Because I am on a limited compute budget, I trained the model on 512-token pieces of R code, which means it will do poorly on longer pieces of code. I will now fine-tune the base model on 2048-token-context pieces of R code in a parameter-efficient way for another 2 epochs, to ensure acceptable performance beyond 512 tokens.
Then I intend to instruction-tune the model on all Stack Overflow questions and answers tagged 'r' from the 2011 to 2016 timeframe, presenting the question as <|human|> and the best answer as <|assistant|>. This will teach the model that it is expected to produce an answer to a user's question about R.
The intended outcome is a reasonably adequate model that can answer basic R user questions, and more broadly an evaluation of the data, sources, and training needed to produce good open-source code-generating models for R.
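As a rough usage sketch (santacoder-derived checkpoints generally need `trust_remote_code=True`; treat the exact loading flags as assumptions):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("MichelNivard/hexcoder")
model = AutoModelForCausalLM.from_pretrained(
    "MichelNivard/hexcoder", trust_remote_code=True
)

# Prompt with the start of an R function and let the model complete it.
prompt = "# Read a csv file and return the column means\nread_and_summarise <- function(path) {"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.2)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```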
|
sofia-todeschini/BioLinkBERT-LitCovid-v1.0
|
sofia-todeschini
| 2023-06-15T17:44:27Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-31T18:48:52Z |
---
license: mit
---
# BioLinkBERT-LitCovid-v1.0
This model is a fine-tuned version of [michiyasunaga/BioLinkBERT-base](https://huggingface.co/michiyasunaga/BioLinkBERT-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1098
- F1: 0.8992
- Roc Auc: 0.9330
- Accuracy: 0.7945
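A minimal multi-label inference sketch: the label names come from the model config, and applying a sigmoid with a 0.5 threshold is an assumption that mirrors the multi-label metrics above rather than a documented recommendation.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "sofia-todeschini/BioLinkBERT-LitCovid-v1.0"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

text = "Efficacy of mRNA vaccines against SARS-CoV-2 variants in elderly patients."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label setup: sigmoid per label, keep everything above 0.5.
probs = torch.sigmoid(logits)[0]
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```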
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.1172 | 1.0 | 3120 | 0.1098 | 0.8992 | 0.9330 | 0.7945 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
GyanShashwat/distilbert-base-uncased-finetuned-test-data
|
GyanShashwat
| 2023-06-15T17:39:11Z | 62 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-15T15:20:01Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: GyanShashwat/distilbert-base-uncased-finetuned-test-data
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# GyanShashwat/distilbert-base-uncased-finetuned-test-data
This model is a fine-tuned version of [GyanShashwat/distilbert-base-uncased-finetuned-test-data](https://huggingface.co/GyanShashwat/distilbert-base-uncased-finetuned-test-data) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 6.0539
- Train End Logits Accuracy: 0.0
- Train Start Logits Accuracy: 0.0
- Epoch: 75
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 0.01, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:-----:|
| 6.5953 | 0.0 | 0.0 | 0 |
| 6.0959 | 0.0 | 0.0 | 1 |
| 6.0750 | 0.0 | 0.1429 | 2 |
| 6.2449 | 0.0 | 0.0 | 3 |
| 6.6021 | 0.0 | 0.0 | 4 |
| 6.4264 | 0.0 | 0.0 | 5 |
| 6.6183 | 0.0 | 0.0 | 6 |
| 6.4572 | 0.0 | 0.0 | 7 |
| 6.2062 | 0.0 | 0.0 | 8 |
| 6.3750 | 0.0 | 0.0 | 9 |
| 6.4880 | 0.0 | 0.0 | 10 |
| 6.6889 | 0.0 | 0.0 | 11 |
| 6.0914 | 0.0 | 0.0 | 12 |
| 6.0446 | 0.0 | 0.0 | 13 |
| 6.8131 | 0.0 | 0.0 | 14 |
| 6.9439 | 0.0 | 0.0 | 15 |
| 6.0789 | 0.0 | 0.0 | 16 |
| 6.3060 | 0.0 | 0.0 | 17 |
| 6.1862 | 0.0 | 0.0 | 18 |
| 6.4202 | 0.0 | 0.0 | 19 |
| 6.0899 | 0.0 | 0.0 | 20 |
| 6.4460 | 0.0 | 0.0 | 21 |
| 6.0554 | 0.0 | 0.0 | 22 |
| 6.1655 | 0.0 | 0.0 | 23 |
| 6.3298 | 0.0 | 0.0 | 24 |
| 6.1062 | 0.0 | 0.0 | 25 |
| 6.2737 | 0.0 | 0.0 | 26 |
| 6.1412 | 0.0 | 0.0 | 27 |
| 6.2286 | 0.0 | 0.0 | 28 |
| 6.2041 | 0.0 | 0.0 | 29 |
| 6.7055 | 0.0 | 0.0 | 30 |
| 6.2596 | 0.0 | 0.0 | 31 |
| 6.7166 | 0.0 | 0.0 | 32 |
| 6.1891 | 0.0 | 0.0 | 33 |
| 6.1920 | 0.0 | 0.0 | 34 |
| 6.2608 | 0.0 | 0.0 | 35 |
| 6.0968 | 0.0 | 0.0 | 36 |
| 6.6072 | 0.0 | 0.0 | 37 |
| 6.2966 | 0.0 | 0.0 | 38 |
| 6.4528 | 0.0 | 0.0 | 39 |
| 6.5660 | 0.0 | 0.0 | 40 |
| 6.3345 | 0.0 | 0.0 | 41 |
| 6.1812 | 0.0 | 0.0 | 42 |
| 6.1986 | 0.0 | 0.0 | 43 |
| 6.2477 | 0.0 | 0.0 | 44 |
| 6.2783 | 0.0 | 0.0 | 45 |
| 6.7758 | 0.0 | 0.0 | 46 |
| 6.0984 | 0.0 | 0.0 | 47 |
| 6.1547 | 0.0 | 0.0 | 48 |
| 6.1153 | 0.0 | 0.0 | 49 |
| 6.2574 | 0.0 | 0.0 | 50 |
| 5.9857 | 0.0 | 0.0 | 51 |
| 6.1978 | 0.0 | 0.0 | 52 |
| 6.4674 | 0.0 | 0.0 | 53 |
| 6.0991 | 0.0 | 0.0 | 54 |
| 6.2534 | 0.0 | 0.0 | 55 |
| 6.1088 | 0.0 | 0.0 | 56 |
| 5.8161 | 0.0 | 0.0 | 57 |
| 5.9146 | 0.0 | 0.0 | 58 |
| 6.2400 | 0.0 | 0.0 | 59 |
| 6.2602 | 0.1429 | 0.0 | 60 |
| 6.0889 | 0.0 | 0.0 | 61 |
| 6.2283 | 0.0 | 0.0 | 62 |
| 6.4321 | 0.0 | 0.0 | 63 |
| 6.6588 | 0.0 | 0.0 | 64 |
| 6.2557 | 0.0 | 0.0 | 65 |
| 6.2958 | 0.0 | 0.0 | 66 |
| 6.1113 | 0.0 | 0.0 | 67 |
| 6.3594 | 0.0 | 0.0 | 68 |
| 5.9983 | 0.0 | 0.0 | 69 |
| 6.0230 | 0.0 | 0.1429 | 70 |
| 6.1085 | 0.0 | 0.0 | 71 |
| 6.3313 | 0.0 | 0.0 | 72 |
| 6.4739 | 0.0 | 0.0 | 73 |
| 6.1131 | 0.0 | 0.0 | 74 |
| 6.0539 | 0.0 | 0.0 | 75 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.0
- Tokenizers 0.13.3
|
terasys/angelchan
|
terasys
| 2023-06-15T17:28:57Z | 0 | 1 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-06T13:53:53Z |
---
license: creativeml-openrail-m
---
|
nisaar/falcon7b-Indian_Lawyer
|
nisaar
| 2023-06-15T17:19:35Z | 0 | 2 | null |
[
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-06-15T16:43:40Z |
---
language:
- en
tags:
- fine-tuned
- legal
- Indian law
license: "apache-2.0"
metrics:
- perplexity
---
# Fine-Tuned Falcon 7B - Indian Law
This is a Falcon 7B model fine-tuned for question answering in the domain of Indian law. It has been trained to answer questions regarding various aspects of the Indian legal system, such as the Constitution, the roles of governmental positions, and more.
## Model Description
Falcon is a family of state-of-the-art language models created by the Technology Innovation Institute in Abu Dhabi. This version, Falcon 7B, has been fine-tuned to specialize in understanding and generating responses related to Indian law. The model was trained on a custom dataset composed of question-answer pairs about Indian law.
## How to use
You can use this model for generating responses. Here is how to do it:
```python
from transformers import pipeline
generator = pipeline('text-generation', model='path_to_your_model')
print(generator("<human>: What is the role of the Judiciary as per the Constitution of India?", max_length=100))
```
|
jiayanli/my-awesome-setfit-model
|
jiayanli
| 2023-06-15T17:02:14Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-06-15T17:01:23Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# jiayanli/my-awesome-setfit-model
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("jiayanli/my-awesome-setfit-model")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
VishaalY/revasser-stable-diffusion-1-5
|
VishaalY
| 2023-06-15T16:10:52Z | 43 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-15T16:07:00Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### revasser-stable-diffusion-1.5 Dreambooth model trained by VishaalY with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
dnjdsxor21/roberta-korquad-wiki
|
dnjdsxor21
| 2023-06-15T16:06:33Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"ko",
"endpoints_compatible",
"region:us"
] | null | 2023-06-14T15:15:45Z |
---
language:
- ko
metrics:
- exact_match
- f1
---
### Fine-tuned version of `klue/roberta-large` on QA data
Data: KorQuAD v1 + wiki
```python
from transformers import AutoConfig, BertTokenizer, RobertaForQuestionAnswering

config = AutoConfig.from_pretrained('dnjdsxor21/roberta-korquad-wiki')
model = RobertaForQuestionAnswering.from_pretrained('dnjdsxor21/roberta-korquad-wiki', config=config)
tokenizer = BertTokenizer.from_pretrained('dnjdsxor21/roberta-korquad-wiki')
```
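For end-to-end inference, a quick sketch with the question-answering pipeline (assuming the repo's tokenizer files resolve via the pipeline's auto classes):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="dnjdsxor21/roberta-korquad-wiki")

# Korean QA example: "What is the capital of South Korea?" over a short context.
result = qa(
    question="대한민국의 수도는 어디인가?",
    context="대한민국의 수도는 서울이며, 서울은 대한민국에서 가장 인구가 많은 도시이다.",
)
print(result["answer"], result["score"])
```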
|
MariaK/distilhubert-finetuned-gtzan
|
MariaK
| 2023-06-15T15:32:33Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-06-08T14:58:52Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5757
- Accuracy: 0.83
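A minimal genre-classification sketch with the audio-classification pipeline (replace `my_song.wav` with a local audio file):
```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification", model="MariaK/distilhubert-finetuned-gtzan"
)

# Print the top genres with their scores.
for prediction in classifier("my_song.wav", top_k=3):
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```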
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.7582 | 1.0 | 113 | 1.7912 | 0.45 |
| 1.2332 | 2.0 | 226 | 1.3051 | 0.64 |
| 1.0058 | 3.0 | 339 | 1.0200 | 0.71 |
| 0.6894 | 4.0 | 452 | 0.8303 | 0.79 |
| 0.5041 | 5.0 | 565 | 0.7038 | 0.79 |
| 0.3281 | 6.0 | 678 | 0.6500 | 0.82 |
| 0.2457 | 7.0 | 791 | 0.5476 | 0.82 |
| 0.3409 | 8.0 | 904 | 0.5793 | 0.83 |
| 0.1521 | 9.0 | 1017 | 0.5568 | 0.82 |
| 0.3542 | 10.0 | 1130 | 0.5757 | 0.83 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hinojosaad/distilbert-base-uncased-finetuned-emotion
|
hinojosaad
| 2023-06-15T15:31:18Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-15T14:58:09Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9265
- name: F1
type: f1
value: 0.9264499182410045
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2076
- Accuracy: 0.9265
- F1: 0.9264
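A minimal sketch that scores every emotion label for a given text (`top_k=None` assumes a recent transformers version; older versions use `return_all_scores=True`):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="hinojosaad/distilbert-base-uncased-finetuned-emotion",
    top_k=None,  # return scores for all emotion labels
)
print(classifier("I can't wait to see you tomorrow!"))
```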
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8248 | 1.0 | 250 | 0.3008 | 0.9105 | 0.9087 |
| 0.2435 | 2.0 | 500 | 0.2076 | 0.9265 | 0.9264 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
nini2/ti
|
nini2
| 2023-06-15T15:26:09Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-06-15T15:22:12Z |
---
license: bigscience-openrail-m
---
|
elbanhawy/bard_PDF_QA
|
elbanhawy
| 2023-06-15T15:22:26Z | 0 | 0 |
transformers
|
[
"transformers",
"license:openrail",
"endpoints_compatible",
"region:us"
] | null | 2023-06-15T15:16:59Z |
---
license: openrail
library_name: transformers
Model: AutoModelForQuestionAnswering
Pretrained Model: bard
Learning Rate: 0.0001
Batch Size: 32
Epochs: 10
---
|
EducativeCS2023/whisper-en-tiny-trained
|
EducativeCS2023
| 2023-06-15T15:20:42Z | 88 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-15T11:47:40Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-en-tiny-trained
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-en-tiny-trained
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4552
- Wer: 92.5515
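A minimal transcription sketch with the ASR pipeline (replace `sample.wav` with a local audio file):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="EducativeCS2023/whisper-en-tiny-trained",
)
print(asr("sample.wav")["text"])
```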
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 120
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.8547 | 1.0 | 60 | 2.0399 | 100.1585 |
| 1.0927 | 2.0 | 120 | 1.4552 | 92.5515 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
anth0nyhak1m/CFGFP_BasicTypeCalssifier
|
anth0nyhak1m
| 2023-06-15T15:00:14Z | 18 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-15T14:59:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: CFGFP_BasicTypeCalssifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CFGFP_BasicTypeCalssifier
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9680
- Accuracy: 0.8450
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4133 | 1.0 | 3321 | 1.2102 | 0.8081 |
| 0.9236 | 2.0 | 6642 | 0.9680 | 0.8450 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
crlandsc/tiny-audio-diffusion-hihats
|
crlandsc
| 2023-06-15T14:58:58Z | 5 | 2 | null |
[
"audio",
"diffusion",
"waveform diffusion",
"audio diffusion",
"unet",
"region:us"
] | null | 2023-06-15T14:46:17Z |
---
tags:
- audio
- diffusion
- waveform diffusion
- audio diffusion
- unet
---
# Model Card for tiny-audio-diffusion-hihats
Hi-hat drum model for tiny-audio-diffusion. Use it with the [tiny-audio-diffusion](https://github.com/crlandsc/tiny-audio-diffusion) repo to generate hi-hat samples.
|
kudeponay/CNAnyLoRA
|
kudeponay
| 2023-06-15T14:52:51Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-15T14:51:14Z |
---
license: creativeml-openrail-m
---
|
leFalcon/finetuning-sentiment-model-3000-samples
|
leFalcon
| 2023-06-15T14:48:41Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-13T23:44:59Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.7933333333333333
- name: F1
type: f1
value: 0.7905405405405405
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4584
- Accuracy: 0.7933
- F1: 0.7905
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
NbAiLabArchive/scream_sextusdecimus_virtual_tsfix_medium_1e5
|
NbAiLabArchive
| 2023-06-15T14:43:10Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"audio",
"asr",
"hf-asr-leaderboard",
"no",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-14T05:27:02Z |
---
language:
- 'no'
license: apache-2.0
tags:
- audio
- asr
- automatic-speech-recognition
- hf-asr-leaderboard
model-index:
- name: scream_sextusdecimus_virtual_tsfix_medium_1e5
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# scream_sextusdecimus_virtual_tsfix_medium_1e5
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the NbAiLab/ncc_speech dataset.
It achieves the following results on the evaluation set:
- step: 19999
- eval_loss: 1.6336
- train_loss: 0.6795
- eval_wer: 7.9120
- eval_cer: 3.4474
- eval_exact_wer: 7.9120
- eval_exact_cer: 3.4474
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- lr_scheduler_type: linear
- per_device_train_batch_size: 16
- total_train_batch_size_per_node: 64
- total_train_batch_size: 512
- total_optimization_steps: 20,000
- starting_optimization_step: None
- finishing_optimization_step: 20,000
- num_train_dataset_workers: 32
- num_hosts: 8
- total_num_training_examples: 10,240,000
- steps_per_epoch: _To be computed after first epoch_
- num_beams: None
- dropout: True
- bpe_dropout_probability: 0.1
- activation_dropout_probability: 0.1
### Training results
| step | eval_loss | train_loss | eval_wer | eval_cer | eval_exact_wer | eval_exact_cer |
|:-----:|:---------:|:----------:|:--------:|:--------:|:--------------:|:--------------:|
| 0 | 5.5890 | 2.8362 | 17.4598 | 5.3906 | 17.4598 | 5.3906 |
| 1000 | 5.2798 | 1.0896 | 12.4926 | 3.8321 | 12.4926 | 3.8321 |
| 2000 | 5.2432 | 0.9018 | 11.0351 | 3.9899 | 11.0351 | 3.9899 |
| 3000 | 4.1719 | 0.8159 | 9.8453 | 3.8173 | 9.8453 | 3.8173 |
| 4000 | 3.0758 | 0.7799 | 9.6371 | 3.8716 | 9.6371 | 3.8716 |
| 5000 | 2.2223 | 0.7803 | 9.7264 | 3.9110 | 9.7264 | 3.9110 |
| 6000 | 2.0574 | 0.7206 | 9.5181 | 3.8864 | 9.5181 | 3.8864 |
| 7000 | 1.7271 | 0.7088 | 8.7745 | 3.7039 | 8.7745 | 3.7039 |
| 8000 | 1.5868 | 0.7528 | 8.2391 | 3.5362 | 8.2391 | 3.5362 |
| 9000 | 1.5781 | 0.6747 | 8.2094 | 3.5313 | 8.2094 | 3.5313 |
| 10000 | 1.6658 | 0.6830 | 8.1499 | 3.4277 | 8.1499 | 3.4277 |
| 11000 | 1.5514 | 0.7141 | 8.6853 | 3.8814 | 8.6853 | 3.8814 |
| 12000 | 1.8042 | 0.6941 | 8.5366 | 3.6792 | 8.5366 | 3.6792 |
| 13000 | 1.7561 | 0.6732 | 8.6258 | 3.8666 | 8.6258 | 3.8666 |
| 14000 | 1.7517 | 0.7050 | 8.2094 | 3.5066 | 8.2094 | 3.5066 |
| 15000 | 1.7413 | 0.7191 | 7.8822 | 3.3389 | 7.8822 | 3.3389 |
| 16000 | 1.7014 | 0.6850 | 8.0309 | 3.4178 | 8.0309 | 3.4178 |
| 17000 | 1.7205 | 0.6937 | 7.8822 | 3.4524 | 7.8822 | 3.4524 |
| 18000 | 1.5928 | 0.7014 | 7.8227 | 3.4425 | 7.8227 | 3.4425 |
| 19000 | 1.5883 | 0.7102 | 7.9417 | 3.4573 | 7.9417 | 3.4573 |
| 19999 | 1.6336 | 0.6795 | 7.9120 | 3.4474 | 7.9120 | 3.4474 |
### Framework versions
- Transformers 4.30.0.dev0
- Datasets 2.12.1.dev0
- Tokenizers 0.13.3
|
7sunshine/noniw
|
7sunshine
| 2023-06-15T14:38:40Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-15T14:37:16Z |
---
license: creativeml-openrail-m
---
|
TheBloke/starchat-beta-GGML
|
TheBloke
| 2023-06-15T14:30:49Z | 12 | 34 |
transformers
|
[
"transformers",
"starcoder",
"generated_from_trainer",
"license:bigcode-openrail-m",
"region:us"
] | null | 2023-06-08T22:29:50Z |
---
inference: false
tags:
- generated_from_trainer
model-index:
- name: starchat-beta
results: []
license: bigcode-openrail-m
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# HuggingFaceH4's Starchat Beta GGML
These files are GGML format model files for [HuggingFaceH4's Starchat Beta](https://huggingface.co/HuggingFaceH4/starchat-beta).
Please note that these GGMLs are **not compatible with llama.cpp, or currently with text-generation-webui**. Please see below for a list of tools known to work with these model files.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/starchat-beta-GPTQ)
* [4, 5, and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/starchat-beta-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/HuggingFaceH4/starchat-beta)
## Prompt template
```
<|system|> system message goes here <|end|>
<|user|> prompt goes here <|end|>
<|assistant|>
```
Example:
```
<|system|> Below is a conversation between a human user and a helpful AI coding assistant. <|end|>
<|user|> How do I sort a list in Python? <|end|>
<|assistant|>
```
## Live demo and API
[Matt Hoffner](https://huggingface.co/matthoffner) has created two Spaces for this model, using the GGML files provided in this repo:
* API: https://huggingface.co/spaces/matthoffner/starchat-ggml
* UI: https://huggingface.co/spaces/matthoffner/starchat-ui
<!-- compatibility_ggml start -->
## Compatibility
These files are **not** compatible with llama.cpp.
Currently they can be used with:
* KoboldCpp, a powerful inference engine based on llama.cpp, with good UI: [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* The ctransformers Python library, which includes LangChain support: [ctransformers](https://github.com/marella/ctransformers)
* The GPT4All-UI which uses ctransformers: [GPT4All-UI](https://github.com/ParisNeo/gpt4all-ui)
* [rustformers' llm](https://github.com/rustformers/llm)
* The example `starcoder` binary provided with [ggml](https://github.com/ggerganov/ggml)
As other options become available I will endeavour to update them here (do let me know in the Community tab if I've missed something!)
## Tutorial for using GPT4All-UI
* [Text tutorial, written by **Lucas3DCG**](https://huggingface.co/TheBloke/MPT-7B-Storywriter-GGML/discussions/2#6475d914e9b57ce0caa68888)
* [Video tutorial, by GPT4All-UI's author **ParisNeo**](https://www.youtube.com/watch?v=ds_U0TDzbzI)
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| starchat-beta.ggmlv3.q4_0.bin | q4_0 | 4 | 10.75 GB | 13.25 GB | Original llama.cpp quant method, 4-bit. |
| starchat-beta.ggmlv3.q4_1.bin | q4_1 | 4 | 11.92 GB | 14.42 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| starchat-beta.ggmlv3.q5_0.bin | q5_0 | 5 | 13.09 GB | 15.59 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| starchat-beta.ggmlv3.q5_1.bin | q5_1 | 5 | 14.26 GB | 16.76 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| starchat-beta.ggmlv3.q8_0.bin | q8_0 | 8 | 20.11 GB | 22.61 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
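As a rough sketch, one of the files above can be loaded with the [ctransformers](https://github.com/marella/ctransformers) library from the compatibility list; the `model_file` and `model_type` values are assumptions to adjust for your download and ctransformers version:
```python
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/starchat-beta-GGML",
    model_file="starchat-beta.ggmlv3.q4_0.bin",  # any file from the table above
    model_type="starcoder",  # assumed type string for StarCoder-family GGML
)

prompt = (
    "<|system|> Below is a conversation between a human user and a helpful AI coding assistant. <|end|>\n"
    "<|user|> How do I sort a list in Python? <|end|>\n"
    "<|assistant|>"
)
print(llm(prompt, max_new_tokens=128, temperature=0.2))
```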
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: Ajan Kanaga, Kalila, Derek Yates, Sean Connelly, Luke, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, trip7s trip, Jonathan Leane, Talal Aujan, Artur Olbinski, Cory Kujawski, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Johann-Peter Hartmann.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: HuggingFaceH4's Starchat Beta
<img src="https://huggingface.co/HuggingFaceH4/starchat-beta/resolve/main/model_logo.png" alt="StarChat Beta Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# Model Card for StarChat Beta
StarChat is a series of language models that are trained to act as helpful coding assistants. StarChat Beta is the second model in the series, and is a fine-tuned version of [StarCoderPlus](https://huggingface.co/bigcode/starcoderplus) that was trained on an ["uncensored"](https://erichartford.com/uncensored-models) variant of the [`openassistant-guanaco` dataset](https://huggingface.co/datasets/timdettmers/openassistant-guanaco). We found that removing the in-built alignment of the OpenAssistant dataset boosted performance on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) and made the model more helpful at coding tasks. However, this means that the model is likely to generate problematic text when prompted to do so and should only be used for educational and research purposes.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Model type:** A 16B parameter GPT-like model fine-tuned on an ["uncensored"](https://erichartford.com/uncensored-models) variant of the [`openassistant-guanaco` dataset](https://huggingface.co/datasets/timdettmers/openassistant-guanaco).
- **Language(s) (NLP):** Primarily English and 80+ programming languages.
- **License:** BigCode Open RAIL-M v1
- **Finetuned from model:** [bigcode/starcoderplus](https://huggingface.co/bigcode/starcoderplus)
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/bigcode-project/starcoder
- **Demo:** https://huggingface.co/spaces/HuggingFaceH4/starchat-playground
## Intended uses & limitations
The model was fine-tuned on a variant of the [`OpenAssistant/oasst1`](https://huggingface.co/datasets/OpenAssistant/oasst1) dataset, which contains a diverse range of dialogues in over 35 languages. As a result, the model can be used for chat and you can check out our [demo](https://huggingface.co/spaces/HuggingFaceH4/starchat-playground) to test its coding capabilities.
Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:
```python
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="HuggingFaceH4/starchat-beta", torch_dtype=torch.bfloat16, device_map="auto")
prompt_template = "<|system|>\n<|end|>\n<|user|>\n{query}<|end|>\n<|assistant|>"
prompt = prompt_template.format(query="How do I sort a list in Python?")
# We use a special <|end|> token with ID 49155 to denote ends of a turn
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.2, top_k=50, top_p=0.95, eos_token_id=49155)
# You can sort a list in Python by using the sort() method. Here's an example:\n\n```\nnumbers = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]\nnumbers.sort()\nprint(numbers)\n```\n\nThis will sort the list in place and print the sorted list.
```
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
StarChat Alpha has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
Models trained primarily on code data will also have a more skewed demographic bias commensurate with the demographics of the GitHub community, for more on this see the [StarCoder dataset](https://huggingface.co/datasets/bigcode/starcoderdata) which is derived from The Stack.
Since the base model was pretrained on a large corpus of code, it may produce code snippets that are syntactically valid but semantically incorrect.
For example, it may produce code that does not compile or that produces incorrect results.
It may also produce code that is vulnerable to security exploits.
We have observed the model also has a tendency to produce false URLs which should be carefully inspected before clicking.
StarChat Alpha was fine-tuned from the base model [StarCoder Base](https://huggingface.co/bigcode/starcoderbase), please refer to its model card's [Limitations Section](https://huggingface.co/bigcode/starcoderbase#limitations) for relevant information.
In particular, the model was evaluated on some categories of gender biases, propensity for toxicity, and risk of suggesting code completions with known security flaws; these evaluations are reported in its [technical report](https://drive.google.com/file/d/1cN-b9GnWtHzQRoE7M7gAEyivY0kl4BYs/view).
## Training and evaluation data
StarChat Beta is trained on an ["uncensored"](https://erichartford.com/uncensored-models) variant of the [`openassistant-guanaco` dataset](https://huggingface.co/datasets/timdettmers/openassistant-guanaco). We applied the same [recipe](https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered/blob/main/wizardlm_clean.py) used to filter the ShareGPT datasets behind the [WizardLM](https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered).
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5321 | 0.98 | 15 | 1.2856 |
| 1.2071 | 1.97 | 30 | 1.2620 |
| 1.0162 | 2.95 | 45 | 1.2853 |
| 0.8484 | 4.0 | 61 | 1.3274 |
| 0.6981 | 4.98 | 76 | 1.3994 |
| 0.5668 | 5.9 | 90 | 1.4720 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```
@article{Tunstall2023starchat-alpha,
author = {Tunstall, Lewis and Lambert, Nathan and Rajani, Nazneen and Beeching, Edward and Le Scao, Teven and von Werra, Leandro and Han, Sheon and Schmid, Philipp and Rush, Alexander},
title = {Creating a Coding Assistant with StarCoder},
journal = {Hugging Face Blog},
year = {2023},
note = {https://huggingface.co/blog/starchat},
}
```
|
qhduan/aquilachat-7b
|
qhduan
| 2023-06-15T14:17:09Z | 21 | 17 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"custom_code",
"zh",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-13T20:54:30Z |
---
language:
- zh
---
https://github.com/FlagAI-Open/FlagAI/tree/master/examples/Aquila
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained('qhduan/aquilachat-7b')
model = AutoModelForCausalLM.from_pretrained('qhduan/aquilachat-7b', trust_remote_code=True)
model = model.eval().half().cuda()
question = '北京为什么是中国的首都?'
prompt = (
'''A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions.'''
f'''###Human: {question}###Assistant:'''
)
with torch.no_grad():
ret = model.generate(
**tokenizer(prompt, return_tensors='pt').to('cuda'),
do_sample=False,
max_new_tokens=200,
use_cache=True
)
output_ids = ret[0].detach().cpu().numpy().tolist()
if 100007 in output_ids:
output_ids = output_ids[:output_ids.index(100007)]
elif 0 in output_ids:
output_ids = output_ids[:output_ids.index(0)]
# 北京之所以成为中国的首都,是因为它在中国历史和文化中的重要地位和政治、经济、文化等方面的影响力。
print(tokenizer.decode(output_ids))
```
The open-source Aquila-7B and Aquila-33B models are released under the [BAAI Aquila series model license agreement](https://github.com/FlagAI-Open/FlagAI/blob/master/BAAI_Aquila_Model_License.pdf); the original code is based on Apache License 2.0.
|
hopkins/marian-finetuned-kde4-en-to-fr
|
hopkins
| 2023-06-15T14:03:32Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-06-14T22:12:48Z |
---
license: apache-2.0
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0615
- Bleu: 37.3551
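A minimal usage sketch with the translation pipeline (the example sentence is arbitrary):
```python
from transformers import pipeline

translator = pipeline("translation", model="hopkins/marian-finetuned-kde4-en-to-fr")
print(translator("Default to expanded threads")[0]["translation_text"])
```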
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.12.0
- Tokenizers 0.13.3
|
h-d-h/ppo-Huggy
|
h-d-h
| 2023-06-15T14:02:10Z | 12 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-15T14:01:59Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the official ML-Agents environments, go to https://huggingface.co/unity
2. Find your model_id: h-d-h/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
gokuls/hBERTv1_new_pretrain_48_emb_com_wnli
|
gokuls
| 2023-06-15T13:55:37Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-15T13:51:32Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: hBERTv1_new_pretrain_48_emb_com_wnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE WNLI
type: glue
config: wnli
split: validation
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.5633802816901409
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv1_new_pretrain_48_emb_com_wnli
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_emb_compress_48](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_emb_compress_48) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6859
- Accuracy: 0.5634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8985 | 1.0 | 5 | 0.9144 | 0.4366 |
| 0.7419 | 2.0 | 10 | 0.7704 | 0.4366 |
| 0.7079 | 3.0 | 15 | 0.7121 | 0.4366 |
| 0.6978 | 4.0 | 20 | 0.6859 | 0.5634 |
| 0.7001 | 5.0 | 25 | 0.7479 | 0.4366 |
| 0.7268 | 6.0 | 30 | 0.6904 | 0.5634 |
| 0.7028 | 7.0 | 35 | 0.7271 | 0.4366 |
| 0.7096 | 8.0 | 40 | 0.6870 | 0.5634 |
| 0.6953 | 9.0 | 45 | 0.7185 | 0.4366 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ArthurZ/encodec_24khz
|
ArthurZ
| 2023-06-15T13:50:42Z | 121 | 1 |
transformers
|
[
"transformers",
"pytorch",
"encodec",
"feature-extraction",
"arxiv:2210.13438",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2023-06-14T06:47:39Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---

# Model Card for EnCodec
This model card provides details and information about EnCodec, a state-of-the-art real-time audio codec developed by Meta AI.
## Model Details
### Model Description
EnCodec is a high-fidelity audio codec leveraging neural networks. It introduces a streaming encoder-decoder architecture with quantized latent space, trained in an end-to-end fashion.
The model simplifies and speeds up training using a single multiscale spectrogram adversary that efficiently reduces artifacts and produces high-quality samples.
It also includes a novel loss balancer mechanism that stabilizes training by decoupling the choice of hyperparameters from the typical scale of the loss.
Additionally, lightweight Transformer models are used to further compress the obtained representation while maintaining real-time performance.
- **Developed by:** Meta AI
- **Model type:** Audio Codec
### Model Sources
- **Repository:** [GitHub Repository](https://github.com/facebookresearch/encodec)
- **Paper:** [EnCodec: End-to-End Neural Audio Codec](https://arxiv.org/abs/2210.13438)
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
EnCodec can be used directly as an audio codec for real-time compression and decompression of audio signals.
It provides high-quality audio compression and efficient decoding. The model was trained on various bandwidths, which can be specified when encoding (compressing) and decoding (decompressing); a short sketch of bandwidth selection follows the list below.
Two different setups exist for EnCodec:
- Non-streamable: the input audio is split into chunks of 1 second, with an overlap of 10 ms, which are then encoded.
- Streamable: weight normalization is used on the convolution layers, and the input is not split into chunks but rather padded on the left.
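A minimal sketch of bandwidth selection (assumptions: the `bandwidth` argument to `EncodecModel.encode` is given in kbps, and silence is used as a stand-in for real audio — see the quickstart further below for loading a real sample):
```python
import numpy as np
from transformers import EncodecModel, AutoProcessor

model = EncodecModel.from_pretrained("facebook/encodec_24khz")
processor = AutoProcessor.from_pretrained("facebook/encodec_24khz")

# one second of silence at the model's sampling rate, as a stand-in for real audio
audio_sample = np.zeros(processor.sampling_rate, dtype=np.float32)
inputs = processor(raw_audio=audio_sample, sampling_rate=processor.sampling_rate, return_tensors="pt")

# request a specific target bandwidth (in kbps) at encode time, then decode
encoder_outputs = model.encode(inputs["input_values"], inputs["padding_mask"], bandwidth=6.0)
audio_values = model.decode(encoder_outputs.audio_codes, encoder_outputs.audio_scales, inputs["padding_mask"])[0]
```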
### Downstream Use
EnCodec can be fine-tuned for specific audio tasks or integrated into larger audio processing pipelines for applications such as speech generation,
music generation, or text to speech tasks.
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
## How to Get Started with the Model
Use the following code to get started with the EnCodec model using a dummy example from the LibriSpeech dataset (~9MB). First, install the required Python packages:
```
pip install --upgrade pip
pip install --upgrade transformers datasets[audio]
```
Then load an audio sample, and run a forward pass of the model:
```python
from datasets import load_dataset, Audio
from transformers import EncodecModel, AutoProcessor
# load a demonstration dataset
librispeech_dummy = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
# load the model + processor (for pre-processing the audio)
model = EncodecModel.from_pretrained("facebook/encodec_24khz")
processor = AutoProcessor.from_pretrained("facebook/encodec_24khz")
# cast the audio data to the correct sampling rate for the model
librispeech_dummy = librispeech_dummy.cast_column("audio", Audio(sampling_rate=processor.sampling_rate))
audio_sample = librispeech_dummy[0]["audio"]["array"]
# pre-process the inputs
inputs = processor(raw_audio=audio_sample, sampling_rate=processor.sampling_rate, return_tensors="pt")
# explicitly encode then decode the audio inputs
encoder_outputs = model.encode(inputs["input_values"], inputs["padding_mask"])
audio_values = model.decode(encoder_outputs.audio_codes, encoder_outputs.audio_scales, inputs["padding_mask"])[0]
# or the equivalent with a forward pass
audio_values = model(inputs["input_values"], inputs["padding_mask"]).audio_values
```
## Training Details
The model was trained for 300 epochs, with one epoch being 2,000 updates with the Adam optimizer, a batch size of 64 examples of 1 second each, a learning rate of 3e-4, β1 = 0.5, and β2 = 0.9. All models were trained using 8 A100 GPUs.
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
- For speech:
- DNS Challenge 4
- [Common Voice](https://huggingface.co/datasets/common_voice)
- For general audio:
- [AudioSet](https://huggingface.co/datasets/Fhrozen/AudioSet2K22)
- [FSD50K](https://huggingface.co/datasets/Fhrozen/FSD50k)
- For music:
- [Jamendo dataset](https://huggingface.co/datasets/rkstgr/mtg-jamendo)
They used four different training strategies to sample from these datasets (a toy sketch of the mixture follows at the end of this subsection):
- (s1) sample a single source from Jamendo with probability 0.32;
- (s2) sample a single source from the other datasets with the same probability;
- (s3) mix two sources from all datasets with a probability of 0.24;
- (s4) mix three sources from all datasets except music with a probability of 0.12.
The audio is normalized by file and a random gain between -10 and 6 dB is applied.
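A toy sketch of this sampling mixture (the strategy names are placeholders; only the probabilities come from the description above):
```python
import random

# placeholder names for strategies (s1)-(s4) described above
strategies = ["jamendo_single", "other_single", "mix_two_all", "mix_three_no_music"]
probabilities = [0.32, 0.32, 0.24, 0.12]

def sample_strategy() -> str:
    """Pick one of the four data-mixing strategies according to the probabilities above."""
    return random.choices(strategies, weights=probabilities, k=1)[0]

print(sample_strategy())
```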
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Subjective metric for restoration:
This model was evaluated using the MUSHRA protocol (Series, 2014), with both a hidden reference and a low anchor. Annotators were recruited through a
crowd-sourcing platform and asked to rate the perceptual quality of the provided samples on a scale
from 1 to 100. 50 samples of 5 seconds were randomly selected from each category of the test set,
with at least 10 annotations per sample. To filter noisy annotations and outliers, annotators
who rated the reference recordings below 90 in at least 20% of the cases, or rated the low-anchor recording
above 80 more than 50% of the time, were removed.
### Objective metric for restoration:
The ViSQOL metric was used together with the Scale-Invariant Signal-to-Noise Ratio (SI-SNR) (Luo & Mesgarani, 2019;
Nachmani et al., 2020; Chazan et al., 2021).
### Results
The results of the evaluation demonstrate the superiority of EnCodec compared to the baselines across different bandwidths (1.5, 3, 6, and 12 kbps).
When comparing EnCodec with the baselines at the same bandwidth, EnCodec consistently outperforms them in terms of MUSHRA score.
Notably, EnCodec achieves better performance, on average, at 3 kbps compared to Lyra-v2 at 6 kbps and Opus at 12 kbps.
Additionally, by incorporating the language model over the codes, it is possible to achieve a bandwidth reduction of approximately 25-40%.
For example, the bandwidth of the 3 kbps model can be reduced to 1.9 kbps.
#### Summary
EnCodec is a state-of-the-art real-time neural audio compression model that excels in producing high-fidelity audio samples at various sample rates and bandwidths.
The model's performance was evaluated across different settings, ranging from 24kHz monophonic at 1.5 kbps to 48kHz stereophonic, showcasing both subjective and
objective results. Notably, EnCodec incorporates a novel spectrogram-only adversarial loss, effectively reducing artifacts and enhancing sample quality.
Training stability and interpretability were further enhanced through the introduction of a gradient balancer for the loss weights.
Additionally, the study demonstrated that a compact Transformer model can be employed to achieve an additional bandwidth reduction of up to 40% without compromising
quality, particularly in applications where low latency is not critical (e.g., music streaming).
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```
@misc{défossez2022high,
title={High Fidelity Neural Audio Compression},
author={Alexandre Défossez and Jade Copet and Gabriel Synnaeve and Yossi Adi},
year={2022},
eprint={2210.13438},
archivePrefix={arXiv},
primaryClass={eess.AS}
}
```
|
morenolq/bart-it-WITS
|
morenolq
| 2023-06-15T13:50:21Z | 121 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bart",
"text2text-generation",
"it",
"dataset:Silvia/WITS",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-12-27T16:16:49Z |
---
language: "it"
license: mit
datasets:
- Silvia/WITS
tags:
- bart
- pytorch
pipeline:
- summarization
---
# BART-IT - WITS
BART-IT is a sequence-to-sequence model, based on the BART architecture that is specifically tailored to the Italian language. The model is pre-trained on a [large corpus of Italian text](https://huggingface.co/datasets/gsarti/clean_mc4_it), and can be fine-tuned on a variety of tasks.
## Model description
The model is a `base-`sized BART model, with a vocabulary size of 52,000 tokens. It has 140M parameters and can be used for any task that requires a sequence-to-sequence model. It is trained from scratch on a large corpus of Italian text, and can be fine-tuned on a variety of tasks.
## Pre-training
The code used to pre-train BART-IT together with additional information on model parameters can be found [here](https://github.com/MorenoLaQuatra/bart-it).
## Fine-tuning
The model has been fine-tuned for the abstractive summarization task on 3 different Italian datasets:
- [FanPage](https://huggingface.co/datasets/ARTeLab/fanpage) - finetuned model [here](https://huggingface.co/MorenoLaQuatra/bart-it-fanpage)
- [IlPost](https://huggingface.co/datasets/ARTeLab/ilpost) - finetuned model [here](https://huggingface.co/morenolq/bart-it-ilpost)
- **This model** [WITS](https://huggingface.co/datasets/Silvia/WITS) - finetuned model [here](https://huggingface.co/morenolq/bart-it-WITS)
## Usage
In order to use the model, you can use the following code:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("morenolq/bart-it-WITS")
model = AutoModelForSeq2SeqLM.from_pretrained("morenolq/bart-it-WITS")
input_ids = tokenizer.encode("Il modello BART-IT è stato pre-addestrato su un corpus di testo italiano", return_tensors="pt")
outputs = model.generate(input_ids, max_length=40, num_beams=4, early_stopping=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
# Citation
If you find this model useful for your research, please cite the following paper:
```bibtex
@Article{BARTIT,
AUTHOR = {La Quatra, Moreno and Cagliero, Luca},
TITLE = {BART-IT: An Efficient Sequence-to-Sequence Model for Italian Text Summarization},
JOURNAL = {Future Internet},
VOLUME = {15},
YEAR = {2023},
NUMBER = {1},
ARTICLE-NUMBER = {15},
URL = {https://www.mdpi.com/1999-5903/15/1/15},
ISSN = {1999-5903},
DOI = {10.3390/fi15010015}
}
```
|
morenolq/SumTO_FNS2020
|
morenolq
| 2023-06-15T13:50:13Z | 116 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"safetensors",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
This is the *best performing* model used in the paper: "End-to-end Training For Financial Report Summarization"
https://www.aclweb.org/anthology/2020.fnp-1.20/
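A minimal usage sketch (the label names and their mapping to summary-worthiness are not documented here, so inspect `model.config.id2label` before relying on them):
```python
from transformers import pipeline

# sentence-level classifier from the FNS 2020 summarization paper
classifier = pipeline("text-classification", model="morenolq/SumTO_FNS2020")

# score a candidate sentence from a financial report
print(classifier("Revenue increased by 12% compared to the previous fiscal year."))
```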
|
Tommert25/robbertfinetuned0906
|
Tommert25
| 2023-06-15T13:47:40Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-09T13:42:31Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: robbertfinetuned0906
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robbertfinetuned0906
This model is a fine-tuned version of [pdelobelle/robbert-v2-dutch-base](https://huggingface.co/pdelobelle/robbert-v2-dutch-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5859
- Precision: 0.7151
- Recall: 0.7079
- F1: 0.7115
- Accuracy: 0.9186
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.046 | 1.0 | 580 | 0.5770 | 0.6912 | 0.6633 | 0.6769 | 0.9102 |
| 0.0405 | 2.0 | 1160 | 0.5704 | 0.6996 | 0.6835 | 0.6914 | 0.9133 |
| 0.0346 | 3.0 | 1740 | 0.5786 | 0.6951 | 0.7201 | 0.7074 | 0.9130 |
| 0.0242 | 4.0 | 2320 | 0.5453 | 0.7098 | 0.7216 | 0.7157 | 0.9186 |
| 0.0184 | 5.0 | 2900 | 0.6058 | 0.7118 | 0.7036 | 0.7077 | 0.9189 |
| 0.0087 | 6.0 | 3480 | 0.5859 | 0.7151 | 0.7079 | 0.7115 | 0.9186 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
KHEW/OnOffLora
|
KHEW
| 2023-06-15T13:44:03Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-15T13:42:48Z |
---
license: creativeml-openrail-m
---
|
gokuls/hBERTv2_new_no_pretrain_mnli
|
gokuls
| 2023-06-15T13:35:26Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-29T12:22:06Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: hBERTv2_new_no_pretrain_mnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MNLI
type: glue
config: mnli
split: validation_matched
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.3522172497965826
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv2_new_no_pretrain_mnli
This model is a fine-tuned version of [](https://huggingface.co/) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0983
- Accuracy: 0.3522
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1022 | 1.0 | 3068 | 1.0986 | 0.3182 |
| 1.0988 | 2.0 | 6136 | 1.0982 | 0.3545 |
| 1.0987 | 3.0 | 9204 | 1.0986 | 0.3274 |
| 1.0988 | 4.0 | 12272 | 1.0988 | 0.3182 |
| 1.0986 | 5.0 | 15340 | 1.0986 | 0.3274 |
| 1.0987 | 6.0 | 18408 | 1.0986 | 0.3182 |
| 1.0986 | 7.0 | 21476 | 1.0986 | 0.3182 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
raghvendramall/esm2_t12_35M_UR50D-crystallization-finetuned-localization
|
raghvendramall
| 2023-06-15T13:12:38Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"esm",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-15T11:57:53Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: esm2_t12_35M_UR50D-crystallization-finetuned-localization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# esm2_t12_35M_UR50D-crystallization-finetuned-localization
This model is a fine-tuned version of [facebook/esm2_t12_35M_UR50D](https://huggingface.co/facebook/esm2_t12_35M_UR50D) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7419
- F1: 0.5791
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 267 | 0.4285 | 0.4983 |
| 0.4409 | 2.0 | 534 | 0.4159 | 0.6386 |
| 0.4409 | 3.0 | 801 | 0.4282 | 0.5989 |
| 0.2942 | 4.0 | 1068 | 0.4542 | 0.6102 |
| 0.2942 | 5.0 | 1335 | 0.5155 | 0.5899 |
| 0.1774 | 6.0 | 1602 | 0.5666 | 0.6126 |
| 0.1774 | 7.0 | 1869 | 0.6379 | 0.6039 |
| 0.0999 | 8.0 | 2136 | 0.6942 | 0.5822 |
| 0.0999 | 9.0 | 2403 | 0.7298 | 0.5822 |
| 0.0631 | 10.0 | 2670 | 0.7419 | 0.5791 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
DucHaiten/DucHaitenJourney
|
DucHaiten
| 2023-06-15T12:58:48Z | 304 | 9 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"text-to-image",
"image-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-03-12T16:01:27Z |
---
language:
- en
tags:
- stable-diffusion
- text-to-image
- image-to-image
- diffusers
license: creativeml-openrail-m
inference: true
---
Recommended settings: DPM++ 2S a Karras sampler, CFG scale 10.
Results are better at larger resolutions such as 768x768; 512x512 output will be of poor quality.
Negative prompt:
illustration, painting, cartoons, sketch, (worst quality:2), (low quality:2), (normal quality:2), lowres, bad anatomy, bad hands, ((monochrome)), ((grayscale)), collapsed eyeshadow, multiple eyeblows, vaginas in breasts, (cropped), oversaturated, extra limb, missing limbs, deformed hands, long neck, long body, imperfect, (bad hands), signature, watermark, username, artist name, conjoined fingers, deformed fingers, ugly eyes, imperfect eyes, skewed eyes, unnatural face, unnatural body, error
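A minimal diffusers sketch under the settings above (the prompt is only an illustration; the scheduler is left at the checkpoint default rather than DPM++ 2S a Karras, and the negative prompt is shortened):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "DucHaiten/DucHaitenJourney", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "portrait of a knight in ornate armor, cinematic lighting, highly detailed",
    negative_prompt="lowres, bad anatomy, bad hands, worst quality",  # shortened; full list above
    guidance_scale=10,
    height=768,
    width=768,
).images[0]
image.save("knight.png")
```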
|
Contents/bert-base-uncased-test
|
Contents
| 2023-06-15T12:56:45Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"fill-mask",
"en",
"dataset:wikipedia",
"arxiv:1810.04805",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-15T12:50:52Z |
---
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: fill-mask
datasets:
- wikipedia
---
# BERT base model (uncased)
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
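A minimal fill-mask sketch (this repository appears to be a test copy of `bert-base-uncased`, so the standard masked-language-modeling pipeline is assumed to apply):
```python
from transformers import pipeline

# top predictions for the [MASK] token
unmasker = pipeline("fill-mask", model="Contents/bert-base-uncased-test")
print(unmasker("Hello, I'm a [MASK] model."))
```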
|
gokuls/hBERTv1_new_pretrain_48_emb_com_qqp
|
gokuls
| 2023-06-15T12:56:15Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-14T19:39:28Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: hBERTv1_new_pretrain_48_emb_com_qqp
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QQP
type: glue
config: qqp
split: validation
args: qqp
metrics:
- name: Accuracy
type: accuracy
value: 0.789463269849122
- name: F1
type: f1
value: 0.7288135593220338
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv1_new_pretrain_48_emb_com_qqp
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_emb_compress_48](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_emb_compress_48) on the GLUE QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4383
- Accuracy: 0.7895
- F1: 0.7288
- Combined Score: 0.7591
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:|
| 0.5492 | 1.0 | 2843 | 0.5130 | 0.7537 | 0.6393 | 0.6965 |
| 0.4928 | 2.0 | 5686 | 0.4971 | 0.7602 | 0.6526 | 0.7064 |
| 0.4578 | 3.0 | 8529 | 0.4656 | 0.7775 | 0.6825 | 0.7300 |
| 0.4346 | 4.0 | 11372 | 0.4565 | 0.7804 | 0.6744 | 0.7274 |
| 0.4146 | 5.0 | 14215 | 0.4783 | 0.7812 | 0.7078 | 0.7445 |
| 0.3952 | 6.0 | 17058 | 0.4675 | 0.7899 | 0.7042 | 0.7470 |
| 0.3747 | 7.0 | 19901 | 0.4383 | 0.7895 | 0.7288 | 0.7591 |
| 0.355 | 8.0 | 22744 | 0.4455 | 0.7948 | 0.7053 | 0.7500 |
| 0.3362 | 9.0 | 25587 | 0.4483 | 0.7935 | 0.7334 | 0.7635 |
| 0.3185 | 10.0 | 28430 | 0.4480 | 0.7956 | 0.7388 | 0.7672 |
| 0.301 | 11.0 | 31273 | 0.4630 | 0.8055 | 0.7236 | 0.7646 |
| 0.2848 | 12.0 | 34116 | 0.4850 | 0.8062 | 0.7352 | 0.7707 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
tannazhp95/q-FrozenLake-v1-4x4-noSlippery_low
|
tannazhp95
| 2023-06-15T12:27:10Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-15T12:19:36Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery_low
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="tannazhp95/q-FrozenLake-v1-4x4-noSlippery_low", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
pushkin05/MLAgents-SoccerTwos
|
pushkin05
| 2023-06-15T12:25:14Z | 3 | 0 |
ml-agents
|
[
"ml-agents",
"onnx",
"ML-Agents-SoccerTwos",
"reinforcement-learning",
"license:cc",
"region:us"
] |
reinforcement-learning
| 2023-06-15T12:23:15Z |
---
license: cc
task: reinforcement-learning
library_name: ml-agents
tags:
- ML-Agents-SoccerTwos
- reinforcement-learning
---
|
Imroz/Taxi-v3
|
Imroz
| 2023-06-15T12:21:07Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-15T12:21:05Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.74
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="Imroz/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Imroz/q-FrozenLake-v1-4x4-noSlippery
|
Imroz
| 2023-06-15T12:16:16Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-15T12:16:13Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="Imroz/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
pushkin05/LunarLander-v2
|
pushkin05
| 2023-06-15T12:11:28Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-15T09:55:00Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -144.51 +/- 118.41
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'pushkin05/LunarLander-v2'
'batch_size': 512
'minibatch_size': 128}
```
|
rovargasc/setfit-model_sentencias-v2
|
rovargasc
| 2023-06-15T12:10:39Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-06-15T12:09:46Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# rovargasc/setfit-model_sentencias-v2
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("rovargasc/setfit-model_sentencias-v2")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
kejolong/akenoDXD
|
kejolong
| 2023-06-15T12:04:19Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-15T11:59:34Z |
---
license: creativeml-openrail-m
---
|
halffried/gyre_vitmatte
|
halffried
| 2023-06-15T11:49:13Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2023-06-15T11:46:13Z |
---
license: mit
---
Copy of https://github.com/hustvl/ViTMatte model, converted to safetensors.
License from that repository:
MIT License
Copyright (c) 2023 Hust Vision Lab
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
|
Tommert25/robbertfinetuned1506
|
Tommert25
| 2023-06-15T11:44:53Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-15T09:18:34Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: robbertfinetuned1506
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robbertfinetuned1506
This model is a fine-tuned version of [pdelobelle/robbert-v2-dutch-base](https://huggingface.co/pdelobelle/robbert-v2-dutch-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4020
- Precision: 0.6588
- Recall: 0.5806
- F1: 0.6172
- Accuracy: 0.8828
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 73 | 0.5045 | 0.5902 | 0.4871 | 0.5337 | 0.86 |
| No log | 2.0 | 146 | 0.4124 | 0.6161 | 0.5612 | 0.5873 | 0.8772 |
| No log | 3.0 | 219 | 0.3974 | 0.6502 | 0.5683 | 0.6065 | 0.8839 |
| No log | 4.0 | 292 | 0.4020 | 0.6588 | 0.5806 | 0.6172 | 0.8828 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
intanm/fewshot-qa-001-20230615-002
|
intanm
| 2023-06-15T11:41:59Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-15T11:21:22Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: fewshot-qa-001-20230615-002
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fewshot-qa-001-20230615-002
This model is a fine-tuned version of [intanm/mbert-squadv2](https://huggingface.co/intanm/mbert-squadv2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.7782
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5701 | 2.4 | 500 | 3.1943 |
| 1.178 | 4.81 | 1000 | 3.7416 |
| 0.5312 | 7.21 | 1500 | 4.5243 |
| 0.2682 | 9.62 | 2000 | 4.7782 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
tannazhp95/q-FrozenLake-v1-4x4-noSlippery
|
tannazhp95
| 2023-06-15T11:40:04Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-15T11:37:22Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="tannazhp95/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
sofia-todeschini/PubMedELECTRA-Large-LitCovid-v1.0
|
sofia-todeschini
| 2023-06-15T11:39:29Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-15T09:59:21Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: PubMedELECTRA-Large-LitCovid-v1.0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PubMedELECTRA-Large-LitCovid-v1.0
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedELECTRA-large-uncased-abstract](https://huggingface.co/microsoft/BiomedNLP-PubMedELECTRA-large-uncased-abstract) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1102
- F1: 0.8974
- Roc Auc: 0.9322
- Accuracy: 0.7942
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.1183 | 1.0 | 6240 | 0.1102 | 0.8974 | 0.9322 | 0.7942 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
Ditrip/ppo-Pyramids
|
Ditrip
| 2023-06-15T11:38:26Z | 6 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-06-15T11:35:08Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Ditrip/ppo-pyramids
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
hannahh7/a2c-AntBulletEnv-v0
|
hannahh7
| 2023-06-15T11:32:13Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-12T21:32:43Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1833.67 +/- 155.85
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; adjust it to the file actually stored in this repo):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# download the checkpoint from the Hub and load the A2C policy
checkpoint = load_from_hub(repo_id="hannahh7/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
Rakoto031/ppo-Huggy
|
Rakoto031
| 2023-06-15T11:24:02Z | 15 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-15T11:23:56Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Rakoto031/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
seeeed/opus-mt-en-ro-finetuned-en-to-ro
|
seeeed
| 2023-06-15T11:23:59Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-15T09:20:25Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: opus-mt-en-ro-finetuned-en-to-ro
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
config: ro-en
split: validation
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 28.1136
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ro-finetuned-en-to-ro
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ro](https://huggingface.co/Helsinki-NLP/opus-mt-en-ro) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2886
- Bleu: 28.1136
- Gen Len: 34.1056
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.7437 | 1.0 | 38145 | 1.2886 | 28.1136 | 34.1056 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
vorstcavry/LoRA-set1
|
vorstcavry
| 2023-06-15T11:15:43Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-11T14:29:40Z |
---
license: creativeml-openrail-m
---
|
SinghManish/audio-classification-model
|
SinghManish
| 2023-06-15T10:53:20Z | 62 | 1 |
transformers
|
[
"transformers",
"tf",
"wav2vec2",
"feature-extraction",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2023-06-15T10:52:38Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: audio-classification-model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# audio-classification-model
This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.31.0.dev0
- TensorFlow 2.12.0
- Datasets 2.13.0
- Tokenizers 0.13.3
|
dappradar/setfit-games-multilabel
|
dappradar
| 2023-06-15T10:52:13Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-15T10:21:10Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# dappradar/setfit-games-multilabel
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("dappradar/setfit-games-multilabel")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
raghvendramall/esm2_t6_8M_UR50D-crystallization-finetuned-localization
|
raghvendramall
| 2023-06-15T10:43:27Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"esm",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-23T08:27:12Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: esm2_t6_8M_UR50D-crystallization-finetuned-localization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# esm2_t6_8M_UR50D-crystallization-finetuned-localization
This model is a fine-tuned version of [facebook/esm2_t6_8M_UR50D](https://huggingface.co/facebook/esm2_t6_8M_UR50D) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4286
- F1: 0.6192
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 267 | 0.4463 | 0.4978 |
| 0.4639 | 2.0 | 534 | 0.4197 | 0.6117 |
| 0.4639 | 3.0 | 801 | 0.4122 | 0.6221 |
| 0.3671 | 4.0 | 1068 | 0.4069 | 0.6219 |
| 0.3671 | 5.0 | 1335 | 0.4059 | 0.6069 |
| 0.313 | 6.0 | 1602 | 0.4115 | 0.6238 |
| 0.313 | 7.0 | 1869 | 0.4154 | 0.6285 |
| 0.2764 | 8.0 | 2136 | 0.4200 | 0.6182 |
| 0.2764 | 9.0 | 2403 | 0.4288 | 0.5987 |
| 0.2463 | 10.0 | 2670 | 0.4286 | 0.6192 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Falah/falahgs_en-fr_books_model
|
Falah
| 2023-06-15T10:38:00Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:opus_books",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-15T08:46:02Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- opus_books
metrics:
- bleu
model-index:
- name: falahgs_en-fr_books_model
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: opus_books
type: opus_books
config: en-fr
split: train
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 6.0805
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falahgs_en-fr_books_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the opus_books dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5439
- Bleu: 6.0805
- Gen Len: 17.5565
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| 1.8416 | 1.0 | 6355 | 1.6150 | 5.5571 | 17.5985 |
| 1.7911 | 2.0 | 12710 | 1.5707 | 5.9025 | 17.5616 |
| 1.7539 | 3.0 | 19065 | 1.5492 | 6.0302 | 17.5599 |
| 1.7474 | 4.0 | 25420 | 1.5439 | 6.0805 | 17.5565 |
### Framework versions
- Transformers 4.27.1
- Pytorch 2.0.1+cu118
- Datasets 2.9.0
- Tokenizers 0.13.3
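A minimal usage sketch (the `"translate English to French: "` task prefix is assumed from the standard T5 fine-tuning recipe and may differ from the prefix actually used during training):
```python
from transformers import pipeline

translator = pipeline("text2text-generation", model="Falah/falahgs_en-fr_books_model")

# T5-style task prefix; adjust if the model was trained with a different prefix
print(translator("translate English to French: The book is on the table.", max_length=40))
```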
|
clapkong/my_awesome_qa_model
|
clapkong
| 2023-06-15T10:34:18Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-11T17:55:03Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: clapkong/my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# clapkong/my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.4663
- Validation Loss: 1.7343
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.3916 | 2.0376 | 0 |
| 1.6945 | 1.7343 | 1 |
| 1.4663 | 1.7343 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.0
- Tokenizers 0.13.3
|
moiduy04/q-FrozenLake-v1-4x4-noSlippery
|
moiduy04
| 2023-06-15T10:27:53Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-15T10:27:50Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="moiduy04/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
thackerhelik/rl_course_vizdoom_health_gathering_supreme
|
thackerhelik
| 2023-06-15T10:11:48Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-15T10:11:40Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 12.87 +/- 5.10
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r thackerhelik/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
# the module path recorded at training time pointed at the notebook launcher;
# the standard sample-factory ViZDoom entry point is assumed here
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
# the module path recorded at training time pointed at the notebook launcher;
# the standard sample-factory ViZDoom entry point is assumed here
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
jetro30087/vicuna-Wizard-7B-Uncensored-android-q3f16_0
|
jetro30087
| 2023-06-15T10:11:04Z | 0 | 2 | null |
[
"text-generation",
"en",
"dataset:ehartford/wizard_vicuna_70k_unfiltered",
"license:other",
"region:us"
] |
text-generation
| 2023-06-15T08:22:54Z |
---
license: other
datasets:
- ehartford/wizard_vicuna_70k_unfiltered
language:
- en
pipeline_tag: text-generation
---
Model Card for vicuna-Wizard-7B-Uncensored-android-q3f16_0
Model Description
This model is for the Android version of MLC-LLM.
The PC/Linux version is available here: https://huggingface.co/jetro30087/vicuna-Wizard-7B-Uncensored-q3f16_0/blob/main/README.md
This Language Model (vicuna-Wizard-7B-Uncensored-android-q3f16_0) is based on Facebook's "Llama" 7B parameter model, trained on the Wizard-Vicuna uncensored dataset under a non-commercial license. It was specifically developed and formatted for use within the MLC-LLM project, which you can find more details about at MLC-LLM project URL.
The model is designed for research and general text generation purposes. Thanks to MLC-LLM's Vulkan compatibility, the model is capable of working on both Nvidia and AMD graphics cards.
Model Usage
The vicuna-Wizard-7B-Uncensored-android-q3f16_0 model can generate human-like text that's useful for a variety of purposes, including but not limited to research, chatbots, writing aids, and more. You can use the model through MLC-Chat by copying it into the mlc-chat/dist folder of a compiled MLC-Chat client.
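A rough sketch of that copy step (the paths are hypothetical and depend on where you built MLC-Chat and where you downloaded this repository):
```
# assumed layout: this repository cloned next to a built mlc-chat checkout
cp -r vicuna-Wizard-7B-Uncensored-android-q3f16_0 mlc-chat/dist/
```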
Limitations and Bias
Although the model is capable of generating high-quality text, it is important to note that it is not perfect. Here are some potential limitations and biases:
Output quality: Although trained on a large dataset, the model may occasionally produce text that is nonsensical or does not align with the input prompt.
Biases in the data: The model has been trained on the Wizard-Vicuna uncensored dataset, and as such, it may have inherited biases present in this data. Despite our best efforts to minimize this, it may reflect biases in terms of gender, race, age, or other aspects.
Safety and content: The uncensored nature of the training dataset means that the model could potentially produce text that some people find offensive, inappropriate, or politically biased. We recommend using this model with care, especially in environments with young users or those who might be affected by such content.
Incorrect information: The model generates text based on patterns it learned during training and does not have access to real-world knowledge or updates beyond its training cut-off. As a result, the information it provides should always be verified for accuracy.
Ethical Considerations and Safety
While using this model, consider the following:
Always verify the information provided by the model with reliable external sources before using it to make decisions or for factual reference.
Monitor the output of the model for any potentially inappropriate or harmful content, especially if it is being used in a public or sensitive setting.
Keep in mind the potential biases inherited from the training data and account for these when interpreting the output.
Disclaimer
This model is provided as-is, and the developers make no warranties regarding its performance, appropriateness, or accuracy. Use it at your own risk.
See the MLC-LLM runtime [instructions](https://mlc.ai/mlc-llm/docs/tutorials/runtime/cpp.html) for details.
|
Chaitanya14/flan-t5-base-finetuned-xsum
|
Chaitanya14
| 2023-06-15T09:59:30Z | 112 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-15T09:46:40Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: flan-t5-base-finetuned-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-finetuned-xsum
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 7 | nan |
| No log | 2.0 | 14 | nan |
| No log | 3.0 | 21 | nan |
| No log | 4.0 | 28 | nan |
| No log | 5.0 | 35 | nan |
| No log | 6.0 | 42 | nan |
| No log | 7.0 | 49 | nan |
| No log | 8.0 | 56 | nan |
| No log | 9.0 | 63 | nan |
| No log | 10.0 | 70 | nan |
| No log | 11.0 | 77 | nan |
| No log | 12.0 | 84 | nan |
| No log | 13.0 | 91 | nan |
| No log | 14.0 | 98 | nan |
| No log | 15.0 | 105 | nan |
| No log | 16.0 | 112 | nan |
| No log | 17.0 | 119 | nan |
| No log | 18.0 | 126 | nan |
| No log | 19.0 | 133 | nan |
| No log | 20.0 | 140 | nan |
| No log | 21.0 | 147 | nan |
| No log | 22.0 | 154 | nan |
| No log | 23.0 | 161 | nan |
| No log | 24.0 | 168 | nan |
| No log | 25.0 | 175 | nan |
| No log | 26.0 | 182 | nan |
| No log | 27.0 | 189 | nan |
| No log | 28.0 | 196 | nan |
| No log | 29.0 | 203 | nan |
| No log | 30.0 | 210 | nan |
| No log | 31.0 | 217 | nan |
| No log | 32.0 | 224 | nan |
| No log | 33.0 | 231 | nan |
| No log | 34.0 | 238 | nan |
| No log | 35.0 | 245 | nan |
| No log | 36.0 | 252 | nan |
| No log | 37.0 | 259 | nan |
| No log | 38.0 | 266 | nan |
| No log | 39.0 | 273 | nan |
| No log | 40.0 | 280 | nan |
| No log | 41.0 | 287 | nan |
| No log | 42.0 | 294 | nan |
| No log | 43.0 | 301 | nan |
| No log | 44.0 | 308 | nan |
| No log | 45.0 | 315 | nan |
| No log | 46.0 | 322 | nan |
| No log | 47.0 | 329 | nan |
| No log | 48.0 | 336 | nan |
| No log | 49.0 | 343 | nan |
| No log | 50.0 | 350 | nan |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
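To approximate this training environment, an install along these lines should be close (package pins inferred from the versions above; the CUDA 11.8 PyTorch build may require the matching extra index URL):
```
pip install transformers==4.30.2 datasets==2.13.0 tokenizers==0.13.3 torch==2.0.1
```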
|
Felix92/doctr-dummy-tf-vitstr-small
|
Felix92
| 2023-06-15T09:53:52Z | 2 | 0 |
transformers
|
[
"transformers",
"en",
"endpoints_compatible",
"region:us"
] | null | 2023-06-15T09:53:47Z |
---
language: en
---
<p align="center">
<img src="https://doctr-static.mindee.com/models?id=v0.3.1/Logo_doctr.gif&src=0" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: recognition
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
|
Felix92/doctr-dummy-torch-vitstr-small
|
Felix92
| 2023-06-15T09:50:15Z | 189 | 0 |
transformers
|
[
"transformers",
"pytorch",
"en",
"endpoints_compatible",
"region:us"
] | null | 2023-06-15T09:50:09Z |
---
language: en
---
<p align="center">
<img src="https://doctr-static.mindee.com/models?id=v0.3.1/Logo_doctr.gif&src=0" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: recognition
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
|
pushkin05/a2c-AntBulletEnv-v0
|
pushkin05
| 2023-06-15T09:43:01Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-15T08:16:27Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 939.44 +/- 65.35
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename follows the usual `huggingface_sb3` naming convention and may differ for this upload):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from this repo and load it; the filename is the
# conventional one produced by package_to_hub.
checkpoint = load_from_hub(repo_id="pushkin05/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|