| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
Graphcore/wav2vec2-large-ipu
|
Graphcore
| 2023-07-07T11:02:01Z | 1 | 0 | null |
[
"optimum_graphcore",
"arxiv:2006.11477",
"license:apache-2.0",
"region:us"
] | null | 2023-04-11T19:09:02Z |
---
license: apache-2.0
---
# Graphcore/wav2vec2-large-ipu
Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore).
Through HuggingFace Optimum, Graphcore has released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug and play any public dataset and allowing seamless integration with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project.
## Model description
From [wav2vec 2.0: A Framework for Self-Supervised
Learning of Speech Representations](https://arxiv.org/pdf/2006.11477v3.pdf),
“Wav2vec 2.0 is a framework for self-supervised learning of speech representations. It masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned.”
## Intended uses & limitations
This model contains just the `IPUConfig` files for running the Wav2Vec2 large model (e.g. [wav2vec2-large](https://huggingface.co/facebook/wav2vec2-large)) on Graphcore IPUs.
**This model contains no model weights, only an IPUConfig.**
## Usage
```
from optimum.graphcore import IPUConfig
ipu_config = IPUConfig.from_pretrained("Graphcore/wav2vec2-large-ipu")
```
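As an illustrative sketch only (not part of the original card), the `IPUConfig` would typically be paired with a Transformers model through `optimum-graphcore`'s `IPUTrainer`; the model class, output directory, batch size and pod type below are assumptions.
```python
# Hedged sketch: combining this IPUConfig with a Transformers model via optimum-graphcore.
from optimum.graphcore import IPUConfig, IPUTrainer, IPUTrainingArguments
from transformers import Wav2Vec2ForCTC

ipu_config = IPUConfig.from_pretrained("Graphcore/wav2vec2-large-ipu")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large")  # CTC head is freshly initialised

args = IPUTrainingArguments(
    output_dir="wav2vec2-large-ipu-finetuned",  # hypothetical output directory
    per_device_train_batch_size=1,
    num_train_epochs=1,
    pod_type="pod16",  # IPU-POD size, as used in Graphcore's published examples
)

# A train_dataset and data_collator would be passed here before calling trainer.train().
trainer = IPUTrainer(model=model, ipu_config=ipu_config, args=args)
```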
|
dwmit/ja_classification_brl
|
dwmit
| 2023-07-07T10:57:40Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-07T09:16:22Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: ja_classification_brl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ja_classification_brl
This model is a fine-tuned version of [dicta-il/BEREL_2.0](https://huggingface.co/dicta-il/BEREL_2.0) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0712
- Precision: 0.9846
- Recall: 0.9846
- F1: 0.9846
- Accuracy: 0.9846
## Model description
More information needed
## Intended uses & limitations
More information needed
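A minimal inference sketch (not part of the autogenerated card), assuming the fine-tuned checkpoint is hosted at `dwmit/ja_classification_brl`:
```python
from transformers import pipeline

# Load the fine-tuned BERT checkpoint with the standard token-classification pipeline.
classifier = pipeline("token-classification", model="dwmit/ja_classification_brl")
print(classifier("Replace this with input text appropriate for the model."))
```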
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 1125 | 0.0522 | 0.9819 | 0.9819 | 0.9819 | 0.9819 |
| No log | 2.0 | 2250 | 0.0490 | 0.9837 | 0.9837 | 0.9837 | 0.9837 |
| No log | 3.0 | 3375 | 0.0481 | 0.9843 | 0.9843 | 0.9843 | 0.9843 |
| No log | 4.0 | 4500 | 0.0514 | 0.9844 | 0.9844 | 0.9844 | 0.9844 |
| No log | 5.0 | 5625 | 0.0548 | 0.9848 | 0.9848 | 0.9848 | 0.9848 |
| No log | 6.0 | 6750 | 0.0587 | 0.9846 | 0.9846 | 0.9846 | 0.9846 |
| No log | 7.0 | 7875 | 0.0636 | 0.9844 | 0.9844 | 0.9844 | 0.9844 |
| No log | 8.0 | 9000 | 0.0669 | 0.9846 | 0.9846 | 0.9846 | 0.9846 |
| No log | 9.0 | 10125 | 0.0685 | 0.9844 | 0.9844 | 0.9844 | 0.9844 |
| No log | 10.0 | 11250 | 0.0712 | 0.9846 | 0.9846 | 0.9846 | 0.9846 |
### Framework versions
- Transformers 4.28.1
- Pytorch 1.13.0+cu117
- Datasets 2.11.0
- Tokenizers 0.11.6
|
idealflaw/q-FrozenLake-v1-4x4-noSlippery
|
idealflaw
| 2023-07-07T10:52:17Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-07T10:34:42Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="idealflaw/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
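Continuing from the snippet above, a short evaluation sketch (assuming, as in the Deep RL course convention, that the pickled dict also stores the learned table under a `qtable` key and a FrozenLake env id):
```python
import gymnasium as gym
import numpy as np

# Greedy rollout using the downloaded Q-table; `model` comes from the load_from_hub call above.
env = gym.make(model["env_id"], is_slippery=False)
qtable = np.array(model["qtable"])  # assumed key, shape (n_states, n_actions)

state, _ = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(qtable[state]))  # act greedily w.r.t. the learned Q-values
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode return:", total_reward)
```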
|
Graphcore/roberta-base-squad2
|
Graphcore
| 2023-07-07T10:50:06Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"optimum_graphcore",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"arxiv:1907.11692",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-23T12:32:51Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: roberta-base-squad2
results: []
---
# Graphcore/roberta-base-squad2
Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore).
Through HuggingFace Optimum, Graphcore has released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug and play any public dataset and allowing seamless integration with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project.
## Model description
RoBERTa builds on the BERT pretraining approach and improves on it by carefully evaluating a number of BERT's pretraining design decisions, finding that BERT was significantly undertrained.
It improves performance by training the model longer, with bigger batches over more data, removing the next-sentence-prediction objective, training on longer sequences, and dynamically changing the masking pattern applied to the training data.
As a result, it achieved state-of-the-art results on GLUE, RACE and SQuAD.
Paper link : [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/pdf/1907.11692.pdf)
## Intended uses & limitations
This model is a fine-tuned version of [HuggingFace/roberta-base](https://huggingface.co/roberta-base) on the squad_v2 dataset.
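For plain (non-IPU) inference, a hedged sketch using the standard `transformers` question-answering pipeline, with a made-up question/context pair:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Graphcore/roberta-base-squad2")
result = qa(
    question="What hardware was the model fine-tuned on?",
    context="The model was fine-tuned on 16 Graphcore Mk2 IPUs using optimum-graphcore.",
    handle_impossible_answer=True,  # SQuAD v2 contains unanswerable questions
)
print(result)
```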
## Training and evaluation data
Trained and evaluated on the SQuAD v2 dataset:
- [HuggingFace/squad_v2](https://huggingface.co/datasets/squad_v2).
## Training procedure
Trained on 16 Graphcore Mk2 IPUs using [optimum-graphcore](https://github.com/huggingface/optimum-graphcore).
Command line:
```
python examples/question-answering/run_qa.py \
--ipu_config_name Graphcore/roberta-base-ipu \
--model_name_or_path roberta-base \
--dataset_name squad_v2 \
--version_2_with_negative \
--do_train \
--do_eval \
--num_train_epochs 3 \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 2 \
--pod_type pod16 \
--learning_rate 7e-5 \
--max_seq_length 384 \
--doc_stride 128 \
--seed 1984 \
--lr_scheduler_type linear \
--loss_scaling 64 \
--weight_decay 0.01 \
--warmup_ratio 0.2 \
--logging_steps 1 \
--save_steps -1 \
--dataloader_num_workers 64 \
--output_dir roberta-base-squad2 \
--overwrite_output_dir \
--push_to_hub
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 1984
- distributed_type: IPU
- total_train_batch_size: 256
- total_eval_batch_size: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 3.0
- training precision: Mixed Precision
### Training results
```
***** train metrics *****
epoch = 3.0
train_loss = 0.9982
train_runtime = 0:04:44.21
train_samples = 131823
train_samples_per_second = 1391.43
train_steps_per_second = 5.425
***** eval metrics *****
epoch = 3.0
eval_HasAns_exact = 78.1208
eval_HasAns_f1 = 84.6569
eval_HasAns_total = 5928
eval_NoAns_exact = 82.0353
eval_NoAns_f1 = 82.0353
eval_NoAns_total = 5945
eval_best_exact = 80.0809
eval_best_exact_thresh = 0.0
eval_best_f1 = 83.3442
eval_best_f1_thresh = 0.0
eval_exact = 80.0809
eval_f1 = 83.3442
eval_samples = 12165
eval_total = 11873
```
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.0+cpu
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Graphcore/roberta-base-squad
|
Graphcore
| 2023-07-07T10:49:47Z | 7 | 1 |
transformers
|
[
"transformers",
"pytorch",
"optimum_graphcore",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"arxiv:1907.11692",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-23T00:03:07Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: Graphcore/roberta-base-squad
results: []
---
# Graphcore/roberta-base-squad
BERT (Bidirectional Encoder Representations from Transformers) is a Transformer model designed to pretrain bidirectional representations from unlabelled text. It enables easy and fast fine-tuning for different downstream tasks such as sequence classification, named entity recognition, question answering, multiple choice and masked language modelling.
It is pretrained with two objectives: masked language modelling (MLM) and next sentence prediction (NSP). Unlike a traditional language model, which sees words one after another, MLM lets the model learn a bidirectional representation. In addition to MLM, NSP is used to jointly pretrain text-pair representations.
Through its pretrained representations, BERT reduces the engineering effort needed to build task-specific architectures, and it achieves state-of-the-art performance on a large suite of sentence-level and token-level tasks.
## Model description
RoBERTa builds on the BERT pretraining approach and improves on it by carefully evaluating a number of BERT's pretraining design decisions, finding that BERT was significantly undertrained.
It improves performance by training the model longer, with bigger batches over more data, removing the next-sentence-prediction objective, training on longer sequences, and dynamically changing the masking pattern applied to the training data.
As a result, it achieved state-of-the-art results on GLUE, RACE and SQuAD.
Paper link : [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/pdf/1907.11692.pdf)
## Intended uses & limitations
This model is a fine-tuned version of [HuggingFace/roberta-base](https://huggingface.co/roberta-base) on the SQuAD dataset.
## Training and evaluation data
Trained and evaluated on the SQuAD dataset:
- [HuggingFace/squad ](https://huggingface.co/datasets/squad).
## Training procedure
Trained on 16 Graphcore Mk2 IPUs using [optimum-graphcore](https://github.com/huggingface/optimum-graphcore).
Command line:
```
python examples/question-answering/run_qa.py \
--ipu_config_name Graphcore/roberta-base-ipu \
--model_name_or_path roberta-base \
--dataset_name squad \
--do_train \
--do_eval \
--num_train_epochs 2 \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 2 \
--pod_type pod16 \
--learning_rate 6e-5 \
--max_seq_length 384 \
--doc_stride 128 \
--seed 1984 \
--lr_scheduler_type linear \
--loss_scaling 64 \
--weight_decay 0.01 \
--warmup_ratio 0.25 \
--logging_steps 1 \
--save_steps -1 \
--dataloader_num_workers 64 \
--output_dir squad_roberta_base \
--overwrite_output_dir \
--push_to_hub
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 1984
- distributed_type: IPU
- total_train_batch_size: 256
- total_eval_batch_size: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.25
- num_epochs: 2.0
- training precision: Mixed Precision
### Training results
```
***** train metrics *****
epoch = 2.0
train_loss = 1.2528
train_runtime = 0:02:14.50
train_samples = 88568
train_samples_per_second = 1316.952
train_steps_per_second = 5.13
***** eval metrics *****
epoch = 2.0
eval_exact_match = 85.2696
eval_f1 = 91.7455
eval_samples = 10790
```
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.0+cpu
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Graphcore/roberta-base-ipu
|
Graphcore
| 2023-07-07T10:49:29Z | 64 | 1 | null |
[
"optimum_graphcore",
"arxiv:1907.11692",
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:04Z |
---
license: apache-2.0
---
# Graphcore/roberta-base-ipu
Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore).
Through HuggingFace Optimum, Graphcore has released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug and play any public dataset and allowing seamless integration with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project.
## Model description
RoBERTa builds on the BERT pretraining approach and improves on it by carefully evaluating a number of BERT's pretraining design decisions, finding that BERT was significantly undertrained.
It improves performance by training the model longer, with bigger batches over more data, removing the next-sentence-prediction objective, training on longer sequences, and dynamically changing the masking pattern applied to the training data.
As a result, it achieved state-of-the-art results on GLUE, RACE and SQuAD.
Paper link : [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/pdf/1907.11692.pdf)
## Intended uses & limitations
This model contains just the `IPUConfig` files for running the [roberta-base](https://huggingface.co/roberta-base) model on Graphcore IPUs.
## Usage
```
from optimum.graphcore import IPUConfig
ipu_config = IPUConfig.from_pretrained("Graphcore/roberta-base-ipu")
```
|
Graphcore/lxmert-vqa-uncased
|
Graphcore
| 2023-07-07T10:48:38Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"optimum_graphcore",
"lxmert",
"question-answering",
"generated_from_trainer",
"dataset:Graphcore/vqa-lxmert",
"arxiv:1908.07490",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-25T10:10:13Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- Graphcore/vqa-lxmert
metrics:
- accuracy
model-index:
- name: vqa
results:
- task:
name: Question Answering
type: question-answering
dataset:
name: Graphcore/vqa-lxmert
type: Graphcore/vqa-lxmert
args: vqa
metrics:
- name: Accuracy
type: accuracy
value: 0.7242196202278137
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Graphcore/lxmert-vqa-uncased
Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore).
Through HuggingFace Optimum, Graphcore has released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug and play any public dataset and allowing seamless integration with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project.
## Model description
LXMERT is a transformer model for learning vision-and-language cross-modality representations. It consists of three Transformer encoders: an object-relationship encoder, a language encoder, and a cross-modality encoder. It is pretrained via a combination of masked language modelling, visual-language text alignment, ROI-feature regression, masked visual-attribute modelling, masked visual-object modelling, and visual-question-answering objectives. It achieves state-of-the-art results on VQA and GQA.
Paper link : [LXMERT: Learning Cross-Modality Encoder Representations from Transformers](https://arxiv.org/pdf/1908.07490.pdf)
## Intended uses & limitations
This model is a fine-tuned version of [unc-nlp/lxmert-base-uncased](https://huggingface.co/unc-nlp/lxmert-base-uncased) on the [Graphcore/vqa-lxmert](https://huggingface.co/datasets/Graphcore/vqa-lxmert) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0009
- Accuracy: 0.7242
## Training and evaluation data
- [Graphcore/vqa-lxmert](https://huggingface.co/datasets/Graphcore/vqa-lxmert) dataset
## Training procedure
Trained on 16 Graphcore Mk2 IPUs using [optimum-graphcore](https://github.com/huggingface/optimum-graphcore).
Command line:
```
python examples/question-answering/run_vqa.py \
--model_name_or_path unc-nlp/lxmert-base-uncased \
--ipu_config_name Graphcore/lxmert-base-ipu \
--dataset_name Graphcore/vqa-lxmert \
--do_train \
--do_eval \
--max_seq_length 512 \
--per_device_train_batch_size 1 \
--num_train_epochs 4 \
--dataloader_num_workers 64 \
--logging_steps 5 \
--learning_rate 5e-5 \
--lr_scheduler_type linear \
--loss_scaling 16384 \
--weight_decay 0.01 \
--warmup_ratio 0.1 \
--output_dir /tmp/vqa/ \
--dataloader_drop_last \
--replace_qa_head \
--pod_type pod16
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: IPU
- total_train_batch_size: 64
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4.0
- training precision: Mixed Precision
### Training results
```
***** train metrics *****
"epoch": 4.0,
"train_loss": 0.0060005393999575125,
"train_runtime": 13854.802,
"train_samples": 443757,
"train_samples_per_second": 128.116,
"train_steps_per_second": 2.002
***** eval metrics *****
"eval_accuracy": 0.7242196202278137,
"eval_loss": 0.0008745193481445312,
"eval_samples": 214354,
```
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.0+cpu
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Graphcore/lxmert-base-ipu
|
Graphcore
| 2023-07-07T10:48:18Z | 13 | 1 | null |
[
"optimum_graphcore",
"arxiv:1908.07490",
"region:us"
] | null | 2022-03-11T17:45:10Z |
# Graphcore/lxmert-base-ipu
Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore).
Through HuggingFace Optimum, Graphcore has released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug and play any public dataset and allowing seamless integration with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project.
## Model description
LXMERT is a transformer model for learning vision-and-language cross-modality representations. It consists of three Transformer encoders: an object-relationship encoder, a language encoder, and a cross-modality encoder. It is pretrained via a combination of masked language modelling, visual-language text alignment, ROI-feature regression, masked visual-attribute modelling, masked visual-object modelling, and visual-question-answering objectives. It achieves state-of-the-art results on VQA and GQA.
Paper link : [LXMERT: Learning Cross-Modality Encoder Representations from Transformers](https://arxiv.org/pdf/1908.07490.pdf)
## Intended uses & limitations
This model contains just the `IPUConfig` files for running the [lxmert-base-uncased](https://huggingface.co/unc-nlp/lxmert-base-uncased) model on Graphcore IPUs.
**This model contains no model weights, only an IPUConfig.**
## Usage
```
from optimum.graphcore import IPUConfig
ipu_config = IPUConfig.from_pretrained("Graphcore/lxmert-base-ipu")
```
|
Graphcore/distilroberta-base-ipu
|
Graphcore
| 2023-07-07T10:48:00Z | 2 | 0 | null |
[
"optimum_graphcore",
"arxiv:1907.11692",
"license:apache-2.0",
"region:us"
] | null | 2023-03-29T12:19:25Z |
---
license: apache-2.0
---
# Graphcore/distilroberta-base-ipu
Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore).
Through HuggingFace Optimum, Graphcore has released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug and play any public dataset and allowing seamless integration with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project.
## Model description
This model is a distilled version of the [RoBERTa-base model](https://arxiv.org/abs/1907.11692).
## Intended uses & limitations
This model contains just the `IPUConfig` files for running the [distilroberta-base](https://huggingface.co/distilroberta-base) model on Graphcore IPUs.
## Usage
```
from optimum.graphcore import IPUConfig
ipu_config = IPUConfig.from_pretrained("Graphcore/distilroberta-base-ipu")
```
|
xian79/ml-agetn-SnowballTarget
|
xian79
| 2023-07-07T10:43:56Z | 18 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-07-07T10:43:55Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: xian79/ml-agetn-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
arunboss/test_triage
|
arunboss
| 2023-07-07T10:36:19Z | 213 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:arunboss/test",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-06T06:51:33Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: test_triage
results: []
datasets:
- arunboss/test
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_triage
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the Test dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9758
- Accuracy: 0.5008
## Model description
This is a basic skin disease recognition model without the specific disease information for now. I just wanted to test the platform for hosting capabilities and check other features.
## Intended uses & limitations
For now, this is just a test environment. We have the basic data and processing pipeline in place to push to this repository. The future goal is to open-source the dataset and allow the community to fine-tune the skin identification and triaging module for broader, free-for-all commercial and non-commercial use.
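A minimal loading sketch (illustrative only, not part of the card), assuming the checkpoint at `arunboss/test_triage`:
```python
from transformers import pipeline

# Standard image-classification pipeline around the fine-tuned Swin checkpoint.
classifier = pipeline("image-classification", model="arunboss/test_triage")
print(classifier("path/to/skin_image.jpg"))  # hypothetical local image path
```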
## Training and evaluation data
We have a lot of open & closed datasets that have been compiled over years and annotated.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.3471 | 1.0 | 151 | 3.2152 | 0.2452 |
| 2.7313 | 2.0 | 303 | 2.5291 | 0.3817 |
| 2.48 | 3.0 | 454 | 2.2459 | 0.4413 |
| 2.2192 | 4.0 | 606 | 2.0968 | 0.4702 |
| 2.0479 | 5.0 | 757 | 2.0026 | 0.4897 |
| 1.9702 | 5.98 | 906 | 1.9758 | 0.5008 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
BadreddineHug/donut-base-ocr11
|
BadreddineHug
| 2023-07-07T10:12:25Z | 74 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2023-07-07T09:28:43Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-ocr11
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-ocr11
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
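A hedged inference sketch (not in the autogenerated card): standard Donut-style generation, assuming the processor was pushed alongside the weights (otherwise load it from `naver-clova-ix/donut-base`); the task prompt is hypothetical.
```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("BadreddineHug/donut-base-ocr11")
model = VisionEncoderDecoderModel.from_pretrained("BadreddineHug/donut-base-ocr11")

image = Image.open("document.png").convert("RGB")  # hypothetical input image
pixel_values = processor(image, return_tensors="pt").pixel_values
decoder_input_ids = processor.tokenizer(
    "<s_ocr>", add_special_tokens=False, return_tensors="pt"  # "<s_ocr>" is an assumed task prompt
).input_ids

outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```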
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
nomsgadded/textual_inversion_shark
|
nomsgadded
| 2023-07-07T10:01:05Z | 36 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:adapter:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-07T08:40:14Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - nomsgadded/textual_inversion_shark
These are textual inversion adaptation weights for CompVis/stable-diffusion-v1-4. You can find some example images below.
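A hedged usage sketch (not part of the original card): load the base model and attach the learned embedding; the placeholder token is an assumption, so check the repository's `learned_embeds` file for the actual token.
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("nomsgadded/textual_inversion_shark")

image = pipe("a photo of a <shark> swimming over a coral reef").images[0]  # "<shark>" is hypothetical
image.save("shark.png")
```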
|
Arup-Dutta-Bappy/bert-large-uncased-finetuned-squad
|
Arup-Dutta-Bappy
| 2023-07-07T09:42:01Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-04T10:31:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-large-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-finetuned-squad
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
aronmal/ppo-SnowballTarget
|
aronmal
| 2023-07-07T09:35:17Z | 9 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-07-07T09:35:14Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: aronmal/ppo-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
KINGeorge2000/sentiment_roberta_yu
|
KINGeorge2000
| 2023-07-07T09:31:20Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-23T05:49:16Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: sentiment_roberta_yu
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment_roberta_yu
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2580
- Accuracy: 0.6668
- F1: 0.6668
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
arc-r/faster-whisper-large-v2-Ko
|
arc-r
| 2023-07-07T09:17:06Z | 46 | 10 |
ctranslate2
|
[
"ctranslate2",
"audio",
"automatic-speech-recognition",
"ko",
"region:us"
] |
automatic-speech-recognition
| 2023-07-07T06:20:32Z |
---
language:
- ko
tags:
- audio
- automatic-speech-recognition
library_name: ctranslate2
---
# whisper-large-v2-Ko model for CTranslate2
This repository contains the conversion of [byoussef/whisper-large-v2-Ko](https://huggingface.co/byoussef/whisper-large-v2-Ko) to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format.
This model can be used in CTranslate2 or projects based on CTranslate2 such as [faster-whisper](https://github.com/guillaumekln/faster-whisper).
## Example
```python
from faster_whisper import WhisperModel
model = WhisperModel("arc-r/faster-whisper-large-v2-Ko")
segments, info = model.transcribe("audio.mp3")
for segment in segments:
print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
## Conversion details
The original model was converted with the following command:
```
ct2-transformers-converter --model byoussef/whisper-large-v2-Ko --output_dir faster-whisper-large-v2-Ko \
--copy_files tokenizer.json --quantization float16
```
Note that the model weights are saved in FP16. This type can be changed when the model is loaded using the [`compute_type` option in CTranslate2](https://opennmt.net/CTranslate2/quantization.html).
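For example (illustrative sketch, not part of the original card):
```python
from faster_whisper import WhisperModel

# Keep the stored FP16 weights on GPU, or requantize to INT8 for CPU inference.
model = WhisperModel("arc-r/faster-whisper-large-v2-Ko", device="cuda", compute_type="float16")
# model = WhisperModel("arc-r/faster-whisper-large-v2-Ko", device="cpu", compute_type="int8")
```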
## More information
**For more information about the original model, see its [model card](https://huggingface.co/byoussef/whisper-large-v2-Ko).**
|
hungtu/rare-puppers
|
hungtu
| 2023-07-07T09:16:54Z | 193 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-07T09:16:46Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rare-puppers
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8787878751754761
---
# rare-puppers
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### corgi

#### samoyed

#### shiba inu

|
Uminosachi/realisticVisionV30_v30VAE-inpainting
|
Uminosachi
| 2023-07-07T09:15:20Z | 35 | 2 |
diffusers
|
[
"diffusers",
"safetensors",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-03T23:54:35Z |
---
license: creativeml-openrail-m
---
This is an inpainting model, which has been converted from the [realisticVisionV30_v30VAE-inpainting](https://civitai.com/models/4201?modelVersionId=105723).
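A minimal usage sketch (illustrative only), loading the converted checkpoint with the `diffusers` inpainting pipeline; the image and mask paths are hypothetical.
```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "Uminosachi/realisticVisionV30_v30VAE-inpainting", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("photo.png").convert("RGB")  # hypothetical input image
mask_image = Image.open("mask.png").convert("RGB")   # white regions are repainted
result = pipe(prompt="a wooden bench in a park", image=init_image, mask_image=mask_image).images[0]
result.save("inpainted.png")
```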
|
Uminosachi/Deliberate-inpainting
|
Uminosachi
| 2023-07-07T09:13:29Z | 30 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-03T12:23:02Z |
---
license: creativeml-openrail-m
---
This is an inpainting model, which has been converted from the [Deliberate-inpainting](https://huggingface.co/XpucT/Deliberate).
|
AustinCarthy/Benign10MGPT2_suffix_100KP_BFall_fromP_90K_topP_0.75_ratio2.63
|
AustinCarthy
| 2023-07-07T09:12:55Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2023-07-07T07:00:08Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Benign10MGPT2_suffix_100KP_BFall_fromP_90K_topP_0.75_ratio2.63
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Benign10MGPT2_suffix_100KP_BFall_fromP_90K_topP_0.75_ratio2.63
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the Train benign: Fall,Test Benign: Fall, Train phish: Fall, Test phish: Fall, generated url dataset: generated_phish_Benign10MGPT2_using_phish_95K_top_p_0.75suffix dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0279
- Accuracy: 0.9973
- F1: 0.9713
- Precision: 0.9952
- Recall: 0.9486
- Roc Auc Score: 0.9742
- Tpr At Fpr 0.01: 0.9312
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.012 | 1.0 | 21554 | 0.0211 | 0.9957 | 0.9527 | 0.9897 | 0.9184 | 0.9590 | 0.8804 |
| 0.0061 | 2.0 | 43108 | 0.0153 | 0.9974 | 0.9720 | 0.9923 | 0.9526 | 0.9761 | 0.9194 |
| 0.0027 | 3.0 | 64662 | 0.0132 | 0.9970 | 0.9674 | 0.9939 | 0.9422 | 0.9710 | 0.9008 |
| 0.0021 | 4.0 | 86216 | 0.0212 | 0.9975 | 0.9731 | 0.9913 | 0.9556 | 0.9776 | 0.9046 |
| 0.0 | 5.0 | 107770 | 0.0279 | 0.9973 | 0.9713 | 0.9952 | 0.9486 | 0.9742 | 0.9312 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Uminosachi/dreamshaper_6Inpainting
|
Uminosachi
| 2023-07-07T09:09:39Z | 30 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-05-31T13:22:17Z |
---
license: creativeml-openrail-m
---
This is an inpainting model, which has been converted from the [dreamshaper_6Inpainting](https://civitai.com/models/4384?modelVersionId=81594).
|
arc-r/faster-whisper-large-v2-jp
|
arc-r
| 2023-07-07T09:09:09Z | 13 | 2 |
ctranslate2
|
[
"ctranslate2",
"audio",
"automatic-speech-recognition",
"ja",
"region:us"
] |
automatic-speech-recognition
| 2023-07-07T06:16:06Z |
---
language:
- ja
tags:
- audio
- automatic-speech-recognition
library_name: ctranslate2
---
# whisper-large-v2-jp model for CTranslate2
This repository contains the conversion of [vumichien/whisper-large-v2-jp](https://huggingface.co/vumichien/whisper-large-v2-jp) to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format.
This model can be used in CTranslate2 or projects based on CTranslate2 such as [faster-whisper](https://github.com/guillaumekln/faster-whisper).
## Example
```python
from faster_whisper import WhisperModel
model = WhisperModel("arc-r/faster-whisper-large-v2-jp")
segments, info = model.transcribe("audio.mp3")
for segment in segments:
print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
## Conversion details
The original model was converted with the following command:
```
ct2-transformers-converter --model vumichien/whisper-large-v2-jp --output_dir faster-whisper-large-v2-jp \
--quantization float16
```
Note that the model weights are saved in FP16. This type can be changed when the model is loaded using the [`compute_type` option in CTranslate2](https://opennmt.net/CTranslate2/quantization.html).
## More information
**For more information about the original model, see its [model card](https://huggingface.co/vumichien/whisper-large-v2-jp).**
|
Uminosachi/dreamshaper_631Inpainting
|
Uminosachi
| 2023-07-07T09:07:59Z | 29 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-28T05:53:06Z |
---
license: creativeml-openrail-m
---
This is an inpainting model, which has been converted from the [dreamshaper_631Inpainting](https://civitai.com/models/4384?modelVersionId=95087).
|
aroot/eng-mya-simcse_longest_ssrl
|
aroot
| 2023-07-07T08:48:40Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-07T08:27:24Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-mya-simcse_longest_ssrl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-mya-simcse_longest_ssrl
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8495
- Bleu: 4.1358
## Model description
More information needed
## Intended uses & limitations
More information needed
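A hedged inference sketch (not part of the autogenerated card): mBART-50 checkpoints need explicit language codes, and `en_XX` to `my_MM` (English to Burmese) is assumed from the `eng-mya` naming.
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "aroot/eng-mya-simcse_longest_ssrl"
tokenizer = AutoTokenizer.from_pretrained(model_id, src_lang="en_XX")
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["my_MM"],  # force Burmese as the target language
    max_new_tokens=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```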
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
TheBloke/h2ogpt-gm-oasst1-en-2048-falcon-40b-v2-GGML
|
TheBloke
| 2023-07-07T08:45:36Z | 10 | 14 |
transformers
|
[
"transformers",
"falcon",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"en",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"region:us"
] | null | 2023-06-23T20:28:10Z |
---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: >-
https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
license: apache-2.0
datasets:
- OpenAssistant/oasst1
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# H2O's GPT-GM-OASST1-Falcon 40B v2 GGML
These files are GGML format model files for [H2O's GPT-GM-OASST1-Falcon 40B v2](https://huggingface.co/h2oai/h2ogpt-gm-oasst1-en-2048-falcon-40b-v2).
These GGML files will **not** work in llama.cpp, text-generation-webui or KoboldCpp.
They can be used from:
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui).
* The ctransformers Python library, which includes LangChain support: [ctransformers](https://github.com/marella/ctransformers).
* A new fork of llama.cpp that introduced this new Falcon GGML support: [cmp-nct/ggllm.cpp](https://github.com/cmp-nct/ggllm.cpp).
## Prompt template
```
<|prompt|>prompt<|endoftext|>
<|answer|>
```
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/h2ogpt-gm-oasst1-en-2048-falcon-40b-v2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/h2ogpt-gm-oasst1-en-2048-falcon-40b-v2-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/h2oai/h2ogpt-gm-oasst1-en-2048-falcon-40b-v2)
<!-- compatibility_ggml start -->
## Compatibility
The recommended UI for these GGMLs is [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). Preliminary CUDA GPU acceleration is provided.
For use from Python code, use [ctransformers](https://github.com/marella/ctransformers), again with preliminary CUDA GPU acceleration.
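A hedged sketch of the ctransformers route (the quantisation file chosen below is just one of the provided `.bin` files; adjust `gpu_layers` to your VRAM):
```python
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/h2ogpt-gm-oasst1-en-2048-falcon-40b-v2-GGML",
    model_file="h2ogpt-falcon-40b.ggmlv3.q4_0.bin",  # any of the provided quantisations
    model_type="falcon",
    gpu_layers=50,  # number of layers to offload to the GPU; 0 for CPU-only
)

# Prompt format follows the template shown above.
print(llm("<|prompt|>What is a falcon?<|endoftext|><|answer|>"))
```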
Or to build cmp-nct's fork of llama.cpp with Falcon support plus preliminary CUDA acceleration, please try the following steps:
```
git clone https://github.com/cmp-nct/ggllm.cpp
cd ggllm.cpp
rm -rf build && mkdir build && cd build && cmake -DGGML_CUBLAS=1 .. && cmake --build . --config Release
```
Compiling on Windows: developer cmp-nct notes: 'I personally compile it using VScode. When compiling with CUDA support using the Microsoft compiler it's essential to select the "Community edition build tools". Otherwise CUDA won't compile.'
Once compiled you can then use `bin/falcon_main` just like you would use llama.cpp. For example:
```
bin/falcon_main -t 8 -ngl 100 -b 1 -m h2ogpt-falcon-40b.ggmlv3.q3_k.bin -p "What is a falcon?\n### Response:"
```
You can specify `-ngl 100` regardless of your VRAM, as it will automatically detect how much VRAM is available to be used.
Adjust `-t 8` (the number of CPU cores to use) according to what performs best on your system. Do not exceed the number of physical CPU cores you have.
`-b 1` reduces batch size to 1. This slightly lowers prompt evaluation time, but frees up VRAM to load more of the model on to your GPU. If you find prompt evaluation too slow and have enough spare VRAM, you can remove this parameter.
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| h2ogpt-falcon-40b.ggmlv3.q2_k.bin | q2_k | 2 | 13.74 GB | 16.24 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| h2ogpt-falcon-40b.ggmlv3.q3_k.bin | q3_k | 3 | 17.98 GB | 20.48 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| h2ogpt-falcon-40b.ggmlv3.q4_0.bin | q4_0 | 4 | 23.54 GB | 26.04 GB | Old quant method, 4-bit. |
| h2ogpt-falcon-40b.ggmlv3.q4_1.bin | q4_1 | 4 | 26.16 GB | 28.66 GB | Old quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| h2ogpt-falcon-40b.ggmlv3.q4_k.bin | q4_k | 4 | 23.54 GB | 26.04 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| h2ogpt-falcon-40b.ggmlv3.q5_0.bin | q5_0 | 5 | 28.77 GB | 31.27 GB | Old quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| h2ogpt-falcon-40b.ggmlv3.q5_1.bin | q5_1 | 5 | 31.38 GB | 33.88 GB | Old quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| h2ogpt-falcon-40b.ggmlv3.q5_k.bin | q5_k | 5 | 28.77 GB | 31.27 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| h2ogpt-falcon-40b.ggmlv3.q6_k.bin | q6_k | 6 | 34.33 GB | 36.83 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
| h2ogpt-falcon-40b.ggmlv3.q8_0.bin | q8_0 | 8 | 44.46 GB | 46.96 GB | Old quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: Mano Prime, Fen Risland, Derek Yates, Preetika Verma, webtim, Sean Connelly, Alps Aficionado, Karl Bernard, Junyu Yang, Nathan LeClaire, Chris McCloskey, Lone Striker, Asp the Wyvern, Eugene Pentland, Imad Khwaja, trip7s trip, WelcomeToTheClub, John Detwiler, Artur Olbinski, Khalefa Al-Ahmad, Trenton Dambrowitz, Talal Aujan, Kevin Schuppel, Luke Pendergrass, Pyrater, Joseph William Delisle, terasurfer , vamX, Gabriel Puliatti, David Flickinger, Jonathan Leane, Iucharbius , Luke, Deep Realms, Cory Kujawski, ya boyyy, Illia Dulskyi, senxiiz, Johann-Peter Hartmann, John Villwock, K, Ghost , Spiking Neurons AB, Nikolai Manek, Rainer Wilmers, Pierre Kircher, biorpg, Space Cruiser, Ai Maven, subjectnull, Willem Michiel, Ajan Kanaga, Kalila, chris gileta, Oscar Rangel.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: H2O's GPT-GM-OASST1-Falcon 40B v2
# Model Card
## Summary
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [tiiuae/falcon-40b](https://huggingface.co/tiiuae/falcon-40b)
- Dataset preparation: [OpenAssistant/oasst1](https://github.com/h2oai/h2o-llmstudio/blob/1935d84d9caafed3ee686ad2733eb02d2abfce57/app_utils/utils.py#LL1896C5-L1896C28)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed.
```bash
pip install transformers==4.29.2
pip install bitsandbytes==0.39.0
pip install accelerate==0.19.0
pip install torch==2.0.0
pip install einops==0.6.1
```
```python
import torch
from transformers import pipeline, BitsAndBytesConfig, AutoTokenizer
model_kwargs = {}
quantization_config = None
# optional quantization
quantization_config = BitsAndBytesConfig(
load_in_8bit=True,
llm_int8_threshold=6.0,
)
model_kwargs["quantization_config"] = quantization_config
tokenizer = AutoTokenizer.from_pretrained(
"h2oai/h2ogpt-gm-oasst1-en-2048-falcon-40b-v2",
use_fast=False,
padding_side="left",
trust_remote_code=True,
)
generate_text = pipeline(
model="h2oai/h2ogpt-gm-oasst1-en-2048-falcon-40b-v2",
tokenizer=tokenizer,
torch_dtype=torch.float16,
trust_remote_code=True,
use_fast=False,
device_map={"": "cuda:0"},
model_kwargs=model_kwargs,
)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=1024,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```bash
<|prompt|>Why is drinking water so healthy?<|endoftext|><|answer|>
```
Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:
```python
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
quantization_config = None
# optional quantization
quantization_config = BitsAndBytesConfig(
load_in_8bit=True,
llm_int8_threshold=6.0,
)
tokenizer = AutoTokenizer.from_pretrained(
"h2oai/h2ogpt-gm-oasst1-en-2048-falcon-40b-v2",
use_fast=False,
padding_side="left",
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
"h2oai/h2ogpt-gm-oasst1-en-2048-falcon-40b-v2",
trust_remote_code=True,
torch_dtype=torch.float16,
device_map={"": "cuda:0"},
quantization_config=quantization_config
).eval()
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=1024,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?<|endoftext|><|answer|>"
quantization_config = None
# optional quantization
quantization_config = BitsAndBytesConfig(
load_in_8bit=True,
llm_int8_threshold=6.0,
)
tokenizer = AutoTokenizer.from_pretrained(
"h2oai/h2ogpt-gm-oasst1-en-2048-falcon-40b-v2",
use_fast=False,
padding_side="left",
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
"h2oai/h2ogpt-gm-oasst1-en-2048-falcon-40b-v2",
trust_remote_code=True,
torch_dtype=torch.float16,
device_map={"": "cuda:0"},
quantization_config=quantization_config
).eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
tokens = model.generate(
**inputs,
min_new_tokens=2,
max_new_tokens=1024,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Model Architecture
```
RWForCausalLM(
(transformer): RWModel(
(word_embeddings): Embedding(65024, 8192)
(h): ModuleList(
(0-59): 60 x DecoderLayer(
(ln_attn): LayerNorm((8192,), eps=1e-05, elementwise_affine=True)
(ln_mlp): LayerNorm((8192,), eps=1e-05, elementwise_affine=True)
(self_attention): Attention(
(maybe_rotary): RotaryEmbedding()
(query_key_value): Linear(in_features=8192, out_features=9216, bias=False)
(dense): Linear(in_features=8192, out_features=8192, bias=False)
(attention_dropout): Dropout(p=0.0, inplace=False)
)
(mlp): MLP(
(dense_h_to_4h): Linear(in_features=8192, out_features=32768, bias=False)
(act): GELU(approximate='none')
(dense_4h_to_h): Linear(in_features=32768, out_features=8192, bias=False)
)
)
)
(ln_f): LayerNorm((8192,), eps=1e-05, elementwise_affine=True)
)
(lm_head): Linear(in_features=8192, out_features=65024, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
|
TheBloke/falcon-40b-sft-mix-1226-GGML
|
TheBloke
| 2023-07-07T08:45:12Z | 5 | 11 |
transformers
|
[
"transformers",
"falcon",
"sft",
"en",
"de",
"es",
"fr",
"dataset:OpenAssistant/oasst1",
"dataset:databricks/databricks-dolly-15k",
"license:apache-2.0",
"region:us"
] | null | 2023-07-04T23:32:03Z |
---
license: apache-2.0
language:
- en
- de
- es
- fr
tags:
- sft
inference: false
datasets:
- OpenAssistant/oasst1
- databricks/databricks-dolly-15k
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Open Assistant's Falcon 40B SFT MIX GGML
These files are GGCC format model files for [Open Assistant's Falcon 40B SFT MIX](https://huggingface.co/OpenAssistant/falcon-40b-sft-mix-1226).
These files will **not** work in llama.cpp, text-generation-webui or KoboldCpp.
GGCC is a new format created in a fork of llama.cpp that introduced this new Falcon GGML-based support: [cmp-nct/ggllm.cpp](https://github.com/cmp-nct/ggllm.cpp).
Currently these files will also not work with code that previously supported Falcon, such as LoLLMs Web UI and ctransformers. But support should be added soon.
## Repositories available
* [2, 3, 4, 5, 6, 8-bit GGCC models for CPU+GPU inference](https://huggingface.co/TheBloke/falcon-40b-sft-mix-1226-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/OpenAssistant/falcon-40b-sft-mix-1226)
## Prompt template
```
<|prompter|>prompt<|endoftext|><|assistant|>
```
<!-- compatibility_ggml start -->
## Compatibility
To build cmp-nct's fork of llama.cpp with Falcon support plus CUDA acceleration, please try the following steps:
```
git clone https://github.com/cmp-nct/ggllm.cpp
cd ggllm.cpp
rm -rf build && mkdir build && cd build && cmake -DGGML_CUBLAS=1 .. && cmake --build . --config Release
```
For compiling on Windows, developer cmp-nct notes: 'I personally compile it using VScode. When compiling with CUDA support using the Microsoft compiler it's essential to select the "Community edition build tools". Otherwise CUDA won't compile.'
Once compiled you can then use `bin/falcon_main` just like you would use llama.cpp. For example:
```
bin/falcon_main -t 8 -ngl 100 -b 1 -m falcon-40b-sft-mix-1226.ggccv1.q4_K.bin -p "<|prompter|>write a story about llamas<|endoftext|><|assistant|>"
```
You can specify `-ngl 100` regardless of your VRAM, as it will automatically detect how much VRAM is available to be used.
Adjust `-t 8` (the number of CPU cores to use) according to what performs best on your system. Do not exceed the number of physical CPU cores you have.
`-b 1` reduces batch size to 1. This slightly lowers prompt evaluation time, but frees up VRAM to load more of the model on to your GPU. If you find prompt evaluation too slow and have enough spare VRAM, you can remove this parameter.
Please see https://github.com/cmp-nct/ggllm.cpp for further details and instructions.
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| falcon-40b-sft-mix-1226.ggccv1.q2_K.bin | q2_K | 2 | 13.74 GB | 16.24 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| falcon-40b-sft-mix-1226.ggccv1.q3_K.bin | q3_K_S | 3 | 17.98 GB | 20.48 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| falcon-40b-sft-mix-1226.ggccv1.q4_K.bin | q4_K_S | 4 | 23.54 GB | 26.04 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| falcon-40b-sft-mix-1226.ggccv1.q5_K.bin | q5_K_S | 5 | 28.77 GB | 31.27 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| falcon-40b-sft-mix-1226.ggccv1.q6_K.bin | q6_K | 6 | 34.33 GB | 36.83 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
| falcon-40b-sft-mix-1226.ggccv1.q8_0.bin | q8_0 | 8 | 44.46 GB | 46.96 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: Spiking Neurons AB, Kevin Schuppel, Cory Kujawski, senxiiz, Luke Pendergrass, John Villwock, Ghost , Alex , Sean Connelly, Space Cruiser, Eugene Pentland, Pyrater, Matthew Berman, Dave, Derek Yates, Jonathan Leane, Viktor Bowallius, Michael Levine, Joseph William Delisle, Fred von Graf, Asp the Wyvern, Nikolai Manek, Pierre Kircher, webtim, K, RoA, Karl Bernard, Artur Olbinski, Rainer Wilmers, Ai Maven, Nathan LeClaire, Ajan Kanaga, Stephen Murray, Edmond Seymore, zynix , Imad Khwaja, John Detwiler, Randy H, subjectnull, Alps Aficionado, Greatston Gnanesh, Trenton Dambrowitz, Junyu Yang, Raven Klaugh, biorpg, Deep Realms, vamX, Talal Aujan, Johann-Peter Hartmann, WelcomeToTheClub, Chris McCloskey, Luke, chris gileta, terasurfer , Iucharbius , Preetika Verma, Willem Michiel, Fen Risland, SuperWojo, Khalefa Al-Ahmad, Daniel P. Andersen, Gabriel Puliatti, Illia Dulskyi, Willian Hasse, Oscar Rangel, ya boyyy, Mano Prime, Lone Striker, Kalila
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Open Assistant's Falcon 40B SFT MIX
# Open-Assistant Falcon 40B SFT MIX Model
This model is a fine-tuning of TII's [Falcon 40B](https://huggingface.co/tiiuae/falcon-40b) LLM.
It was trained on a mixture of OASST top-2 threads (exported on June 2, 2023), Dolly-15k and synthetic instruction datasets (see dataset configuration below).
## Model Details
- **Finetuned from:** [tiiuae/falcon-40b](https://huggingface.co/tiiuae/falcon-40b)
- **Model type:** Causal decoder-only transformer language model
- **Language:** English, German, Spanish, French (and limited capabilities in Italian, Portuguese, Polish, Dutch, Romanian, Czech, Swedish);
- **Demo:** [Continuations for 250 random prompts](https://open-assistant.github.io/oasst-model-eval/?f=https%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Fchat-gpt%2F2023-04-11_gpt-3.5-turbo_lottery.json%0Ahttps%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Foasst-sft%2F2023-06-05_OpenAssistant_falcon-40b-sft-mix-1226_sampling_noprefix2.json), [multilingual-60](https://open-assistant.github.io/oasst-model-eval/?f=https%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Foasst-sft%2F2023-06-05_OpenAssistant_falcon-40b-sft-mix-1226_multilingual_noprefix2.json)
- **Eval results:** [ilm-eval](https://tju01.github.io/ilm-eval/)
- **Weights & Biases**: [Training log](https://wandb.ai/open-assistant/public-sft/runs/feplc450) (checkpoint: 1226 steps)
- **License:** Apache 2.0
- **Contact:** [Open-Assistant Discord](https://ykilcher.com/open-assistant-discord)
## Prompting
Two special tokens are used to mark the beginning of user and assistant turns:
`<|prompter|>` and `<|assistant|>`. Each turn ends with a `<|endoftext|>` token.
Input prompt example:
```
<|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>
```
The input ends with the `<|assistant|>` token to signal that the model should
start generating the assistant reply.
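As an illustration (not part of the original card), a minimal Python sketch for assembling a multi-turn prompt in this format could look like:
```python
# Minimal sketch for building a prompt in the format described above.
# The helper name and structure are illustrative, not from the original card.
def build_prompt(turns):
    """turns: list of (role, text) pairs, where role is 'prompter' or 'assistant'."""
    prompt = "".join(f"<|{role}|>{text}<|endoftext|>" for role, text in turns)
    # End with the assistant token so the model starts generating its reply.
    return prompt + "<|assistant|>"

print(build_prompt([("prompter", "What is a meme, and what's the history behind this word?")]))
# <|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>
```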
## Configuration Details
Model:
```
falcon-40b:
dtype: bf16
learning_rate: 1e-5
model_name: "tiiuae/falcon-40b"
deepspeed_config: configs/zero3_config_falcon.json
weight_decay: 0.0
max_length: 2048
warmup_steps: 20
gradient_checkpointing: true
gradient_accumulation_steps: 1
per_device_train_batch_size: 18
per_device_eval_batch_size: 10
eval_steps: 120
save_strategy: steps
save_steps: 613
num_train_epochs: 8
save_total_limit: 4
use_flash_attention: false
residual_dropout: 0.3
residual_dropout_lima: true
```
Dataset:
```
sft9-stage2:
# oasst_export: 100.00% (29899)
# vicuna: 50.00% (16963)
# code_alpaca: 50.00% (9510)
# oa_wiki_qa_bart_10000row: 100.00% (9434)
# grade_school_math_instructions: 100.00% (8351)
# dolly15k: 100.00% (14250)
use_custom_sampler: true
datasets:
- oasst_export:
lang: "bg,ca,cs,da,de,en,es,fr,hr,hu,it,nl,pl,pt,ro,ru,sl,sr,sv,uk" # sft-8.0
input_file_path: 2023-06-02_oasst_all_labels.jsonl.gz
val_split: 0.05
top_k: 2
- vicuna:
fraction: 0.5
val_split: 0.025
max_val_set: 250
- code_alpaca:
fraction: 0.5
val_split: 0.05
max_val_set: 250
- oa_wiki_qa_bart_10000row:
val_split: 0.05
max_val_set: 250
- grade_school_math_instructions:
val_split: 0.05
- dolly15k:
val_split: 0.05
max_val_set: 300
```
|
XSarchitectural/XSarchitecturalV3Commercialbuildingrendering
|
XSarchitectural
| 2023-07-07T08:42:55Z | 54 | 2 |
diffusers
|
[
"diffusers",
"architecture",
"architectural",
"design",
"stable-diffusion",
"text-to-image",
"en",
"license:other",
"region:us"
] |
text-to-image
| 2023-07-07T08:17:12Z |
---
license: other
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- architecture
- architectural
- design
- stable-diffusion
---
|
KJan05/Pixelcopter-PLE-v0
|
KJan05
| 2023-07-07T08:31:22Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-07T08:30:39Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 11.70 +/- 11.12
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
soduhh/mt5-small-finetuned-amazon-en-fr
|
soduhh
| 2023-07-07T08:30:20Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-07T07:02:53Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: soduhh/mt5-small-finetuned-amazon-en-fr
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# soduhh/mt5-small-finetuned-amazon-en-fr
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.9132
- Validation Loss: 3.2661
- Epoch: 7
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 11184, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 9.1676 | 4.1323 | 0 |
| 5.6798 | 3.6659 | 1 |
| 4.9731 | 3.5322 | 2 |
| 4.5665 | 3.4177 | 3 |
| 4.2967 | 3.3513 | 4 |
| 4.1126 | 3.3000 | 5 |
| 3.9828 | 3.2671 | 6 |
| 3.9132 | 3.2661 | 7 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
irfan62622/Reinforce-pixelcopter
|
irfan62622
| 2023-07-07T08:26:21Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-07T08:25:13Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-pixelcopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 15.10 +/- 15.86
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Abzu/mpt-7b-storywriter-q8
|
Abzu
| 2023-07-07T08:24:17Z | 147 | 2 |
transformers
|
[
"transformers",
"safetensors",
"mpt",
"text-generation",
"Composer",
"MosaicML",
"llm-foundry",
"custom_code",
"dataset:the_pile_books3",
"arxiv:2108.12409",
"arxiv:2205.14135",
"arxiv:2302.06675",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] |
text-generation
| 2023-07-07T08:21:23Z |
---
license: apache-2.0
tags:
- Composer
- MosaicML
- llm-foundry
datasets:
- the_pile_books3
inference: false
---
# MPT-7B-StoryWriter-65k+
MPT-7B-StoryWriter-65k+ is a model designed to read and write fictional stories with super long context lengths.
It was built by finetuning MPT-7B with a context length of 65k tokens on a filtered fiction subset of the [books3 dataset](https://huggingface.co/datasets/the_pile_books3).
At inference time, thanks to [ALiBi](https://arxiv.org/abs/2108.12409), MPT-7B-StoryWriter-65k+ can extrapolate even beyond 65k tokens.
We demonstrate generations as long as 84k tokens on a single node of 8 A100-80GB GPUs in our [blogpost](https://www.mosaicml.com/blog/mpt-7b).
* License: Apache 2.0
This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture.
## Model Date
May 5, 2023
## Model License
Apache 2.0
## Documentation
* [Blog post: Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs](https://www.mosaicml.com/blog/mpt-7b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)!
## How to Use
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method. This is because we use a custom model architecture that is not yet part of the `transformers` package.
It includes options for many training efficiency features such as [FlashAttention (Dao et al. 2022)](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), QK LayerNorm, and more.
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-7b-storywriter',
trust_remote_code=True
)
```
To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision:
```python
import torch
import transformers
name = 'mosaicml/mpt-7b-storywriter'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton'
config.init_device = 'cuda:0' # For fast initialization directly on GPU!
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
torch_dtype=torch.bfloat16, # Load model weights in bfloat16
trust_remote_code=True
)
```
Although the model was trained with a sequence length of 2048 and finetuned with a sequence length of 65536,
ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example:
```python
import transformers
name = 'mosaicml/mpt-7b'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 83968 # (input + output) tokens can now be up to 83968
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
trust_remote_code=True
)
```
This model was trained with the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
```
The model can then be used, for example, within a text-generation pipeline.
Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html).
```python
from transformers import pipeline
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0')
with torch.autocast('cuda', dtype=torch.bfloat16):
print(
pipe('Here is a recipe for vegan banana bread:\n',
max_new_tokens=100,
do_sample=True,
use_cache=True))
```
## Community-Created Resources
These were not created by MosaicML, but you may find them useful. These links are not an endorsement of the creators or their content.
- [Oobabooga Running MPT-7B-Storywriter](https://youtu.be/QVVb6Md6huA)
- [NEW MPT-7B-StoryWriter CRUSHES GPT-4!](https://www.youtube.com/watch?v=O9Y_ZdsuKWQ&t=649s) - Has a long section on running locally using Oobabooga
## Example Epilogue
The full text of _The Great Gatsby_ (67873 tokens) was fed to the model, followed by the text "EPILOGUE"
**TheRealFitzgerald233**:
> ...
>
> So we beat on, boats against the current, borne back ceaselessly into the past
>
> EPILOGUE
**MPT-7B-StoryWriter**:
>I have lived so long in the shadow of this dream that I may never see the bright sunlight of reality. I may look up some morning and find that I am six feet under, and that Gatsby, who gave me the dream and everything else, will have vanished behind me.
>
>And so it was. I had hardly crossed the threshold of the old world when my eye caught the headlines of a newspaper, and I knew at once that my dream was over. I knew that Gatsby was dead.
>
>And I knew that day that I didn't love her.
>
>That night I took the last train for West Egg, and when I reached home I went directly to my attic and brought down all Gatsby's letters.
>
>I have sorted them by date and placed them in the order in which they were written. They are very few in number.
>
>They are dated between April of nineteen eighteen and January of nineteen nineteen, and the last one I have here is the one that told me he was dead.
>
>There is something in that last letter that frightens me. It is the sense of a finality, of a completeness. I never heard of him after that, and I was glad.
>
>But a few months ago I read somewhere that he had been seen in New York. He had become a great man.
>
>And I knew that he had not changed at all.
## Model Description
The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases
| Hyperparameter | Value |
|----------------|-------|
|n_parameters | 6.7B |
|n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 50432 |
| sequence length | **65536** |
## PreTraining Data
For more details on the pretraining process, see [MPT-7B](https://huggingface.co/mosaicml/mpt-7b).
The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
### Training Configuration
This model was trained on 8 A100-80GBs for about 2 days using the [MosaicML Platform](https://www.mosaicml.com/platform).
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the [LION](https://arxiv.org/abs/2302.06675) optimizer.
## Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
MPT-7B-StoryWriter can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-7B-StoryWriter was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## Acknowledgements
This model was finetuned by Alex Trott and the MosaicML NLP team
## MosaicML Platform
If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b).
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Citation
Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
title = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs},
year = {2023},
url = {www.mosaicml.com/blog/mpt-7b},
note = {Accessed: 2023-03-28}, % change this date
urldate = {2023-03-28} % change this date
}
```
|
poopostresearch/dark_elf_test
|
poopostresearch
| 2023-07-07T08:19:20Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-07T08:14:45Z |
RVC model trained on Dunmer voices from Morrowind. 300 epochs.
|
insub/bert-finetuned-squad
|
insub
| 2023-07-07T08:17:35Z | 114 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-07T07:43:51Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
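As a minimal usage sketch (not part of the original card, and assuming the checkpoint works with the standard `transformers` question-answering pipeline):
```python
from transformers import pipeline

# Hypothetical example inputs; the model extracts an answer span from the context.
qa = pipeline("question-answering", model="insub/bert-finetuned-squad")
result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="bert-finetuned-squad is a fine-tuned version of bert-base-cased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```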
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Abzu/mpt-7b-chat-q8
|
Abzu
| 2023-07-07T08:16:54Z | 145 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mpt",
"text-generation",
"Composer",
"MosaicML",
"llm-foundry",
"custom_code",
"dataset:jeffwan/sharegpt_vicuna",
"dataset:Hello-SimpleAI/HC3",
"dataset:tatsu-lab/alpaca",
"dataset:Anthropic/hh-rlhf",
"dataset:victor123/evol_instruct_70k",
"arxiv:2205.14135",
"arxiv:2108.12409",
"arxiv:2010.04245",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] |
text-generation
| 2023-07-07T08:13:47Z |
---
license: cc-by-nc-sa-4.0
datasets:
- jeffwan/sharegpt_vicuna
- Hello-SimpleAI/HC3
- tatsu-lab/alpaca
- Anthropic/hh-rlhf
- victor123/evol_instruct_70k
tags:
- Composer
- MosaicML
- llm-foundry
inference: false
---
# MPT-7B-Chat
MPT-7B-Chat is a chatbot-like model for dialogue generation.
It was built by finetuning [MPT-7B](https://huggingface.co/mosaicml/mpt-7b) on the [ShareGPT-Vicuna](https://huggingface.co/datasets/jeffwan/sharegpt_vicuna), [HC3](https://huggingface.co/datasets/Hello-SimpleAI/HC3),
[Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca), [HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf), and [Evol-Instruct](https://huggingface.co/datasets/victor123/evol_instruct_70k) datasets.
* License: _CC-By-NC-SA-4.0_ (non-commercial use only)
* [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-chat)
This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture.
## Model Date
May 5, 2023
## Model License
_CC-By-NC-SA-4.0_ (non-commercial use only)
## Documentation
* [Blog post: Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs](https://www.mosaicml.com/blog/mpt-7b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)!
### Example Dialogue
**SamIAm85**:
>I want you to come up with a tweet based on this summary of the article:
>"Introducing MPT-7B, the latest entry in our MosaicML Foundation Series.
>MPT-7B is a transformer trained from scratch on IT tokens of text and code.
>It is open source and it matches the quality of LLaMA-7B.
>MPT-7B was trained on the MosaicML platform in 9.5 days with zero human intervention at a cost of ~$200k.
>Starting today, you can train, finetune, and deploy your own private MPT models, either starting from one of our checkpoints or training from scratch.
>For inspiration, we are also releasing three finetuned models in addition to the base MPT-7B: MPT-7B-Instruct, MPT-7B-Chat, and MPT-7B-Story Writer-65k+, the last of which uses a context length of 65k tokens!
**MPT-7B-Instruct**:
>Here's my suggestion for a tweet:
>"Exciting news! @MosaicML has launched #MPT_7B - a new standard for open-source LLMs. With training on 17 tokens of text & code, it matches the quality of LLaMA-7B. Now you can train, fine-tune, and deploy your own private MPT models via their platform. Check out the full blog post here: https://bit.ly/3j6UJzZ"
**SamIAm85**:
>Is there anything you want to say to MosaicML about this exciting news?
**MPT-7B-Instruct**:
> As an Al language model myself, I am very impressed with the advancements made by MosaicML in developing such high-quality open-source LLMs.
> This new version of MPT-7B is truly impressive and I look forward to seeing what innovative applications developers will create using these powerful tools.
> Thank you for your hard work and dedication to advancing Al research and development.
## How to Use
This model is best used with the MosaicML [llm-foundry repository](https://github.com/mosaicml/llm-foundry) for training and finetuning.
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-7b-chat',
trust_remote_code=True
)
```
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method.
This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package.
`MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more.
To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision:
```python
import torch
import transformers
name = 'mosaicml/mpt-7b-chat'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton'
config.init_device = 'cuda:0' # For fast initialization directly on GPU!
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
torch_dtype=torch.bfloat16, # Load model weights in bfloat16
trust_remote_code=True
)
```
Although the model was trained with a sequence length of 2048, ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example:
```python
import transformers
name = 'mosaicml/mpt-7b-chat'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 4096 # (input + output) tokens can now be up to 4096
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
trust_remote_code=True
)
```
This model was trained with the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
```
The model can then be used, for example, within a text-generation pipeline.
Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html).
```python
from transformers import pipeline
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0')
with torch.autocast('cuda', dtype=torch.bfloat16):
print(
pipe('Here is a recipe for vegan banana bread:\n',
max_new_tokens=100,
do_sample=True,
use_cache=True))
```
## Model Description
The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases
| Hyperparameter | Value |
|----------------|-------|
|n_parameters | 6.7B |
|n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 50432 |
| sequence length | 2048 |
### Training Configuration
This model was trained on 8 A100-80GBs for about 8.2 hours, followed by training for 6.7 hours on 32 A100-40GBs using the [MosaicML Platform](https://www.mosaicml.com/platform).
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the AdamW optimizer.
## Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
MPT-7B-Chat can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-7B-Chat was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## Acknowledgements
This model was finetuned by Sam Havens and the MosaicML NLP team
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## MosaicML Platform
If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b).
## Citation
Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
title = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs},
year = {2023},
url = {www.mosaicml.com/blog/mpt-7b},
note = {Accessed: 2023-03-28}, % change this date
urldate = {2023-03-28} % change this date
}
```
|
aroot/eng-guj-simcse_longestplus_usrl
|
aroot
| 2023-07-07T08:15:14Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-07T07:53:43Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-guj-simcse_longestplus_usrl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-guj-simcse_longestplus_usrl
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2755
- Bleu: 2.8744
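As a minimal usage sketch (not part of the original card), assuming the checkpoint ships the standard mBART-50 tokenizer and that the mBART-50 language codes `en_XX` (English) and `gu_IN` (Gujarati) apply:
```python
from transformers import AutoModelForSeq2SeqLM, MBart50TokenizerFast

name = "aroot/eng-guj-simcse_longestplus_usrl"
tokenizer = MBart50TokenizerFast.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

tokenizer.src_lang = "en_XX"  # source language: English
inputs = tokenizer("How are you today?", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["gu_IN"],  # target language: Gujarati
    max_new_tokens=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```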
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
DXD-FYP/Covid-19
|
DXD-FYP
| 2023-07-07T08:11:35Z | 0 | 0 |
fastai
|
[
"fastai",
"image-classification",
"region:us"
] |
image-classification
| 2023-07-07T07:38:02Z |
---
pipeline_tag: image-classification
library_name: fastai
---
|
Abzu/mpt-7b-instruct-q8
|
Abzu
| 2023-07-07T08:10:56Z | 148 | 2 |
transformers
|
[
"transformers",
"safetensors",
"mpt",
"text-generation",
"Composer",
"MosaicML",
"llm-foundry",
"custom_code",
"dataset:mosaicml/dolly_hhrlhf",
"arxiv:2205.14135",
"arxiv:2108.12409",
"arxiv:2010.04245",
"license:cc-by-sa-3.0",
"autotrain_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] |
text-generation
| 2023-07-07T08:07:38Z |
---
license: cc-by-sa-3.0
datasets:
- mosaicml/dolly_hhrlhf
tags:
- Composer
- MosaicML
- llm-foundry
inference: false
---
# MPT-7B-Instruct
MPT-7B-Instruct is a model for short-form instruction following.
It is built by finetuning [MPT-7B](https://huggingface.co/mosaicml/mpt-7b) on a [dataset](https://huggingface.co/datasets/sam-mosaic/dolly_hhrlhf) derived from the [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and the [Anthropic Helpful and Harmless (HH-RLHF)](https://huggingface.co/datasets/Anthropic/hh-rlhf) datasets.
* License: _CC-By-SA-3.0_
* [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-instruct)
This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture.
## Model Date
May 5, 2023
## Model License
CC-By-SA-3.0
## Documentation
* [Blog post: Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs](https://www.mosaicml.com/blog/mpt-7b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)!
### Example Question/Instruction
**Longboi24**:
> What is a quoll?
**MPT-7B-Instruct**:
>A Quoll (pronounced “cool”) is one of Australia’s native carnivorous marsupial mammals, which are also known as macropods or wallabies in other parts around Asia and South America
## How to Use
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-7b-instruct',
trust_remote_code=True
)
```
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method.
This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package.
`MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more.
To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision:
```python
import torch
import transformers
name = 'mosaicml/mpt-7b-instruct'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton'
config.init_device = 'cuda:0' # For fast initialization directly on GPU!
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
torch_dtype=torch.bfloat16, # Load model weights in bfloat16
trust_remote_code=True
)
```
Although the model was trained with a sequence length of 2048, ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example:
```python
import transformers
name = 'mosaicml/mpt-7b-instruct'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 4096 # (input + output) tokens can now be up to 4096
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
trust_remote_code=True
)
```
This model was trained with the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
```
The model can then be used, for example, within a text-generation pipeline.
Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html).
```python
from transformers import pipeline
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0')
with torch.autocast('cuda', dtype=torch.bfloat16):
print(
pipe('Here is a recipe for vegan banana bread:\n',
max_new_tokens=100,
do_sample=True,
use_cache=True))
```
### Formatting
This model was trained on data formatted in the dolly-15k format:
```python
INSTRUCTION_KEY = "### Instruction:"
RESPONSE_KEY = "### Response:"
INTRO_BLURB = "Below is an instruction that describes a task. Write a response that appropriately completes the request."
PROMPT_FOR_GENERATION_FORMAT = """{intro}
{instruction_key}
{instruction}
{response_key}
""".format(
intro=INTRO_BLURB,
instruction_key=INSTRUCTION_KEY,
instruction="{instruction}",
response_key=RESPONSE_KEY,
)
example = "James decides to run 3 sprints 3 times a week. He runs 60 meters each sprint. How many total meters does he run a week? Explain before answering."
fmt_ex = PROMPT_FOR_GENERATION_FORMAT.format(instruction=example)
```
In the above example, `fmt_ex` is ready to be tokenized and sent through the model.
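For instance, reusing the `torch`, `pipe`, and `fmt_ex` objects defined in the snippets above (a minimal sketch, not an official example), generation could look like:
```python
# Assumes `pipe` (text-generation pipeline) and `fmt_ex` (formatted prompt) from above.
with torch.autocast('cuda', dtype=torch.bfloat16):
    out = pipe(fmt_ex, max_new_tokens=100, do_sample=True, use_cache=True)
print(out[0]["generated_text"])
```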
## Model Description
The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases
| Hyperparameter | Value |
|----------------|-------|
|n_parameters | 6.7B |
|n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 50432 |
| sequence length | 2048 |
## PreTraining Data
For more details on the pretraining process, see [MPT-7B](https://huggingface.co/mosaicml/mpt-7b).
The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
### Training Configuration
This model was trained on 8 A100-40GBs for about 2.3 hours using the [MosaicML Platform](https://www.mosaicml.com/platform).
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the AdamW optimizer.
## Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
MPT-7B-Instruct can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-7B-Instruct was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## Acknowledgements
This model was finetuned by Sam Havens and the MosaicML NLP team
## MosaicML Platform
If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b).
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Citation
Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
title = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs},
year = {2023},
url = {www.mosaicml.com/blog/mpt-7b},
note = {Accessed: 2023-03-28}, % change this date
urldate = {2023-03-28} % change this date
}
```
|
aroot/eng-guj-simcse_longest_usrl
|
aroot
| 2023-07-07T08:07:08Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-07T07:49:34Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-guj-simcse_longest_usrl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-guj-simcse_longest_usrl
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2223
- Bleu: 2.9152
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Jiranuwat/topic_model
|
Jiranuwat
| 2023-07-07T08:06:31Z | 7 | 0 |
bertopic
|
[
"bertopic",
"text-classification",
"region:us"
] |
text-classification
| 2023-07-07T08:06:27Z |
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---
# topic_model
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("Jiranuwat/topic_model")
topic_model.get_topic_info()
```
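The loaded model can also assign topics to new documents; a minimal sketch (with made-up example documents) could look like:
```python
# Assign topics to unseen documents with the loaded model.
# The example documents are placeholders, not taken from the training data.
new_docs = [
    "ปวดหัวและมีไข้ ควรพบแพทย์ที่ศูนย์ไหน",
    "แพ็กเกจตรวจสุขภาพประจำปีของโรงพยาบาล",
]
topics, probs = topic_model.transform(new_docs)
print(topics)                            # topic id assigned to each document
print(topic_model.get_topic(topics[0]))  # keywords of the first assigned topic
```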
## Topic overview
* Number of topics: 110
* Number of training documents: 4771
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| 0 | โซน - ปัสสาวะ - ผ่าตัด - อาการ - ยา | 340 | 0_ โซน_ปัสสาวะ_ผ่าตัด_อาการ |
| 1 | ยา - ภูมิแพ้ - อาการ - โรค - อาหาร | 195 | 1_ยา_ภูมิแพ้_อาการ_โรค |
| 2 | enewsletter - healthcare - ชีววัฒนะ - สำนัก - intelligence | 95 | 2_enewsletter_healthcare_ชีววัฒนะ_สำนัก |
| 3 | iop - โรงพยาบาลเวิลด - ์เมดิคอล - ตา - โทร | 134 | 3_iop_โรงพยาบาลเวิลด_์เมดิคอล_ตา |
| 4 | อาหาร - รับประทาน - จุลินทรีย์ - น้ำมัน - ผลไม้ | 90 | 4_อาหาร_รับประทาน_จุลินทรีย์_น้ำมัน |
| 5 | ศูนย์หูคอ - เสียง - จมูก - infobangkokhospitalcom - ผ่าตัด | 103 | 5_ศูนย์หูคอ_เสียง_จมูก_infobangkokhospitalcom |
| 6 | คน - วาร์เดนเบิร์ก - บูลลี่ - รัก - ทำ | 100 | 6_คน_วาร์เดนเบิร์ก_บูลลี่_รัก |
| 7 | ครรภ์ - แม่ - คลอด - คลอดเหมา - ทารก | 86 | 7_ครรภ์_แม่_คลอด_คลอดเหมา |
| 8 | ซักครู่ - รีเทนเนอร์ - พระประแดง - เก๊าต์ - โรงพยาบาลเปาโลพหลโยธิน | 71 | 8_ ซักครู่_รีเทนเนอร์_พระประแดง_เก๊าต์ |
| 9 | วิ่ง - กิโลเมตร - แข่งขัน - มาราธอน - กล้ามเนื้อ | 74 | 9_วิ่ง_กิโลเมตร_แข่งขัน_มาราธอน |
| 10 | โฮม - ผู้สูงอายุ - ดูแล - ชรา - iconfacebooktwitter | 60 | 10_โฮม_ผู้สูงอายุ_ดูแล_ชรา |
| 11 | แพ็กเกจโปรโมชั่น - โรงพยาบาลพริ้นซ์ - โรงพยาบาลพริ้นซ์สุวรรณภูมิ - menu - ศูนย์การแพทย์ | 67 | 11_แพ็กเกจโปรโมชั่น_โรงพยาบาลพริ้นซ์_โรงพยาบาลพริ้นซ์สุวรรณภูมิ_menu |
| 12 | อาหาร - wine - ffi - ไวน์ - ร่างกาย | 82 | 12_อาหาร_wine_ffi_ไวน์ |
| 13 | วัคซีน - ไข้หวัดใหญ่ - ฉีด - โควิด - เชื้อ | 71 | 13_วัคซีน_ไข้หวัดใหญ่_ฉีด_โควิด |
| 14 | เหมา - เด็ก - ลูก - เด็ก - จ่าย | 59 | 14_เหมา_เด็ก_ลูก_ เด็ก |
| 15 | ธันวาคม - kasetline - พยาบาลเปาโลเกษตร - paolokaset - ฟัน | 52 | 15_ ธันวาคม_kasetline_พยาบาลเปาโลเกษตร_paolokaset |
| 16 | เต้านม - แมมโมแกรม - มะเร็ง - ปากมดลูก - ซีสต์ | 48 | 16_เต้านม_แมมโมแกรม_มะเร็ง_ปากมดลูก |
| 17 | ผ่าตัด - นิ่ว - ฝีคัณฑสูตร - ถุงน้ำ - ศัลยกรรม | 90 | 17_ผ่าตัด_นิ่ว_ฝีคัณฑสูตร_ถุงน้ำ |
| 18 | ตกลง - นามสกุล - ตกลง - consider - elements | 46 | 18_ ตกลง_นามสกุล_ตกลง_consider |
| 19 | เด็ก - ลูก - แม่ - พ่อ - พ่อแม่ | 71 | 19_เด็ก_ลูก_แม่_พ่อ |
| 20 | views - share - ผู้สูงอายุ - ดูแล - social | 45 | 20_views_ share_ผู้สูงอายุ_ดูแล |
| 21 | หัวใจ - เต้น - จังหวะ - หัวใจ - มาติก | 45 | 21_หัวใจ_เต้น_จังหวะ_ หัวใจ |
| 22 | แพ็กเกจโปรโมชั่น - โรงพยาบาลพริ้นซ์ - โรงพยาบาลพริ้นซ์สุวรรณภูมิ - ศูนย์การแพทย์ - menu | 44 | 22_แพ็กเกจโปรโมชั่น_โรงพยาบาลพริ้นซ์_โรงพยาบาลพริ้นซ์สุวรรณภูมิ_ศูนย์การแพทย์ |
| 23 | ข้อเข่า - โคลิค - ถั่ง - หาว - เช่า | 50 | 23_ข้อเข่า_โคลิค_ถั่ง_หาว |
| 24 | ครรภ์ - อสุจิ - ทารก - คลอด - แม่ | 41 | 24_ครรภ์_อสุจิ_ทารก_คลอด |
| 25 | แพ็กเกจโปรโมชั่น - โรงพยาบาลพริ้นซ์ - โรงพยาบาลพริ้นซ์สุวรรณภูมิ - ศูนย์การแพทย์ - menu | 41 | 25_แพ็กเกจโปรโมชั่น_โรงพยาบาลพริ้นซ์_โรงพยาบาลพริ้นซ์สุวรรณภูมิ_ศูนย์การแพทย์ |
| 26 | เวร - ทวารเทียม - feather - โรงพยาบาลจุฬาภรณ์ - facelift | 61 | 26_เวร_ทวารเทียม_feather_ โรงพยาบาลจุฬาภรณ์ |
| 27 | ผิว - wmc - เล็บ - สิว - ต้อ | 47 | 27_ผิว_wmc_เล็บ_สิว |
| 28 | อสุจิ - ไข่ - มีบุตรยาก - ์เมดิคอล - เวิลด | 44 | 28_อสุจิ_ไข่_มีบุตรยาก_์เมดิคอล |
| 29 | แพ็กเกจโปรโมชั่น - โรงพยาบาลพริ้นซ์ - โรงพยาบาลพริ้นซ์สุวรรณภูมิ - ศูนย์การแพทย์ - menu | 40 | 29_แพ็กเกจโปรโมชั่น_โรงพยาบาลพริ้นซ์_โรงพยาบาลพริ้นซ์สุวรรณภูมิ_ศูนย์การแพทย์ |
| 30 | wmc - โรงพยาบาลเวิลด - ์เมดิคอล - drip - วัฒนะ | 47 | 30_wmc_โรงพยาบาลเวิลด_์เมดิคอล_drip |
| 31 | สล็อต - เชื้อ - วัคซีน - ติดต่อ - ทรู | 98 | 31_สล็อต_เชื้อ_วัคซีน_ ติดต่อ |
| 32 | บาดเจ็บ - กีฬา - basem - ฟุตบอล - ไหล่ | 38 | 32_บาดเจ็บ_กีฬา_basem_ฟุตบอล |
| 33 | ไทรอยด์ - เบาหวาน - น้ำตา - อินซูลิน - พรุน | 44 | 33_ไทรอยด์_เบาหวาน_น้ำตา_อินซูลิน |
| 34 | ตา - จอตา - ต้อ - วุ้น - เปลือก | 38 | 34_ตา_จอตา_ต้อ_วุ้น |
| 35 | แพ็กเกจโปรโมชั่น - โรงพยาบาลพริ้นซ์ - โรงพยาบาลพริ้นซ์สุวรรณภูมิ - ศูนย์การแพทย์ - menu | 37 | 35_แพ็กเกจโปรโมชั่น_โรงพยาบาลพริ้นซ์_โรงพยาบาลพริ้นซ์สุวรรณภูมิ_ศูนย์การแพทย์ |
| 36 | แพ็กเกจโปรโมชั่น - โรงพยาบาลพริ้นซ์ - โรงพยาบาลพริ้นซ์สุวรรณภูมิ - ศูนย์การแพทย์ - menu | 36 | 36_แพ็กเกจโปรโมชั่น_โรงพยาบาลพริ้นซ์_โรงพยาบาลพริ้นซ์สุวรรณภูมิ_ศูนย์การแพทย์ |
| 37 | expat - กระดูก - email - ข่าวสาร - สัน | 36 | 37_expat_กระดูก_email_ข่าวสาร |
| 38 | facebooktwitterline - รู้สู้ - เต้านม - แมม - โมแกรม | 35 | 38_ facebooktwitterline_รู้สู้_เต้านม_แมม |
| 39 | เชื้อ - เรียกิน - สะอาด - แมว - กาฬโรค | 79 | 39_เชื้อ_เรียกิน_สะอาด_แมว |
| 40 | ข้อเข่า - เข่า - เอ็นไขว้ - หัวเข่า - เสื่อม | 45 | 40_ข้อเข่า_เข่า_เอ็นไขว้_หัวเข่า |
| 41 | พอร์ต - ตับ - เลื่อน - นิ่ว - ผ่าตัด | 50 | 41_พอร์ต_ตับ_เลื่อน_นิ่ว |
| 42 | เด็ก - bmcpedbghcoth - อนุบาล - เจ้าตัว - ศูนย์กุมารเวช | 32 | 42_เด็ก_bmcpedbghcoth_อนุบาล_เจ้าตัว |
| 43 | expat - สะโพก - email - ข่าวสาร - เทียม | 31 | 43_expat_สะโพก_email_ข่าวสาร |
| 44 | แพ็กเกจโปรโมชั่น - โรงพยาบาลพริ้นซ์ - โรงพยาบาลพริ้นซ์สุวรรณภูมิ - menu - ศูนย์การแพทย์ | 31 | 44_แพ็กเกจโปรโมชั่น_โรงพยาบาลพริ้นซ์_โรงพยาบาลพริ้นซ์สุวรรณภูมิ_menu |
| 45 | แพ็กเกจโปรโมชั่น - โรงพยาบาลพริ้นซ์ - โรงพยาบาลพริ้นซ์สุวรรณภูมิ - ศูนย์การแพทย์ - menu | 31 | 45_แพ็กเกจโปรโมชั่น_โรงพยาบาลพริ้นซ์_โรงพยาบาลพริ้นซ์สุวรรณภูมิ_ศูนย์การแพทย์ |
| 46 | นอน - หลับ - เซิร์ฟ - สเก็ต - เมลาโทนิน | 54 | 46_นอน_หลับ_เซิร์ฟ_สเก็ต |
| 47 | สมอง - พิราบ - mri - อะเฟเซีย - tia | 30 | 47_สมอง_พิราบ_ mri_อะเฟเซีย |
| 48 | expat - สัน - กระดูก - ข่าวสาร - email | 43 | 48_expat_สัน_กระดูก_ข่าวสาร |
| 49 | พยาบาลเปาโลรังสิต - ตกขาว - พยาบาลเปาโลโชคชัย - นิ่วทอนซิล - โรง | 38 | 49_พยาบาลเปาโลรังสิต_ตกขาว_พยาบาลเปาโลโชคชัย_นิ่วทอนซิล |
| 50 | expat - สัน - กระดูก - หลังคด - ข่าวสาร | 31 | 50_expat_สัน_กระดูก_หลังคด |
| 51 | ครรภ์ - คุมกำเนิด - ประจำเดือน - คลอด - pcos | 58 | 51_ครรภ์_คุมกำเนิด_ประจำเดือน_คลอด |
| 52 | expat - กระดูก - email - ข่าวสาร - protected | 30 | 52_expat_กระดูก_email_ข่าวสาร |
| 53 | ฝีคัณฑสูตร - เลื่อน - ทวาร - ริด - ริดสี | 30 | 53_ฝีคัณฑสูตร_เลื่อน_ทวาร_ริด |
| 54 | แพ็กเกจโปรโมชั่น - โรงพยาบาลพริ้นซ์ - โรงพยาบาลพริ้นซ์สุวรรณภูมิ - ศูนย์การแพทย์ - menu | 27 | 54_แพ็กเกจโปรโมชั่น_โรงพยาบาลพริ้นซ์_โรงพยาบาลพริ้นซ์สุวรรณภูมิ_ศูนย์การแพทย์ |
| 55 | tms - ศูนย์จิตรักษ์ - ซึมเศร้า - บูลลี่ - transcranial | 27 | 55_tms_ศูนย์จิตรักษ์_ซึมเศร้า_บูลลี่ |
| 56 | cloudflare - protection - enable - addresses - website | 26 | 56_cloudflare_protection_enable_addresses |
| 57 | สิว - ผิว - ชัชชาติ - แดด - ซิกแพค | 52 | 57_สิว_ผิว_ชัชชาติ_แดด |
| 58 | ไหล่ - กอล์ฟ - กีฬา - สถาบันเวชศาสตร์การ - แข่งขัน | 28 | 58_ไหล่_กอล์ฟ_กีฬา_สถาบันเวชศาสตร์การ |
| 59 | ข้อเข่า - ศูนย์ศัลยกรรมออร์โธปิดิกส์ - นพเกรียงศักดิ์เล็กเครือสุวรรณ - กระดูก - ปวด | 26 | 59_ข้อเข่า_ศูนย์ศัลยกรรมออร์โธปิดิกส์_นพเกรียงศักดิ์เล็กเครือสุวรรณ_กระดูก |
| 60 | พนัน - ไบโพลาร์ - ขอด - หัวใจ - เลือด | 60 | 60_พนัน_ไบโพลาร์_ขอด_หัวใจ |
| 61 | design - healthy - bdms - ตรวจ - preventive | 25 | 61_design_healthy_bdms_ตรวจ |
| 62 | ตับ - พอก - ไวรัส - ไขมัน - บี | 24 | 62_ตับ_พอก_ไวรัส_ไขมัน |
| 63 | expat - เทียม - สะโพก - ข่าวสาร - email | 31 | 63_expat_เทียม_สะโพก_ข่าวสาร |
| 64 | betflix - สล็อต - pg - สูตรสล็อต - สแกนสล็อต | 24 | 64_betflix_สล็อต_pg_สูตรสล็อต |
| 65 | มะเฟือง - ฟลูออไรด์ - ลำ - อาหาร - ไส้ | 49 | 65_มะเฟือง_ฟลูออไรด์_ลำ_อาหาร |
| 66 | expat - กระดูก - สัน - ข่าวสาร - cgrp | 28 | 66_expat_กระดูก_สัน_ข่าวสาร |
| 67 | ศูนย์ดูแล - อัลไซเมอร์ - เดอะซีเนียร์ - โฮม - senior | 23 | 67_ศูนย์ดูแล_อัลไซเมอร์_เดอะซีเนียร์_โฮม |
| 68 | httpslineecbthx - l - โรงพยาบาลบางปะกอก - httpswwwtiktokcombangpakokhospitalinstagram - httpswwwfacebookcombangpakoktiktok | 24 | 68_httpslineecbthx_ l_โรงพยาบาลบางปะกอก_httpswwwtiktokcombangpakokhospitalinstagram |
| 69 | ตัวอ่อน - ครรภ์ - มีบุตรยาก - ไข่ - โครโมโซม | 39 | 69_ตัวอ่อน_ครรภ์_มีบุตรยาก_ไข่ |
| 70 | ไส้ - ลำ - ibs - ท้องผูก - ท้อง | 23 | 70_ไส้_ลำ_ibs_ท้องผูก |
| 71 | kaset - hot - มดลูก - paolokaset - เดินทาง | 32 | 71_kaset_hot_มดลูก_paolokaset |
| 72 | ปอด - เล่ม - ลม - หายใจ - copd | 30 | 72_ปอด_เล่ม_ลม_หายใจ |
| 73 | หัวใจ - ซักครู่ - เลือด - tcd - est | 22 | 73_หัวใจ_ ซักครู่_เลือด_tcd |
| 74 | ผู้สูงอายุ - longevity - bangkoklongevitycenterbangkokhospitalcom - อายุรวัฒน์ - หกล้ม | 19 | 74_ผู้สูงอายุ_longevity_bangkoklongevitycenterbangkokhospitalcom_อายุรวัฒน์ |
| 75 | ลองโควิด - โควิด - สำลัก - covid - isolation | 24 | 75_ลองโควิด_โควิด_สำลัก_covid |
| 76 | ฟัน - ครอบฟัน - invisalign - วีเนียร์ - ราก | 18 | 76_ฟัน_ครอบฟัน_invisalign_วีเนียร์ |
| 77 | คลอด - ครรภ์ - hpv - ผู้หญิง - เสาร์อาทิตย์ | 22 | 77_คลอด_ครรภ์_hpv_ผู้หญิง |
| 78 | พนักงา - องค์กร - บริษัทอีเมล - infopetcharavejcom - petcharavej | 17 | 78_พนักงา_องค์กร_บริษัทอีเมล_infopetcharavejcom |
| 79 | มะเร็ง - ไฝ - อาร์เอช - rh - เป็นมะเร็ง | 38 | 79_มะเร็ง_ไฝ_อาร์เอช_rh |
| 80 | ไมโครเวฟ - ชาเขียว - อาหาร - มลพิษ - โครเมียม | 46 | 80_ไมโครเวฟ_ชาเขียว_อาหาร_มลพิษ |
| 81 | elbow - tennis - ศอก - ข้อ - กระดูก | 37 | 81_elbow_tennis_ศอก_ข้อ |
| 82 | วัคซีน - เชื้อ - ไวรัส - เดินทาง - ฉีด | 44 | 82_วัคซีน_เชื้อ_ไวรัส_เดินทาง |
| 83 | chulabhorn - คลิป - cra - link - ข่าว | 16 | 83_chulabhorn_คลิป_cra_link |
| 84 | วิตา - เลซิติน - มิน - sinopharm - ข้อ | 33 | 84_วิตา_เลซิติน_มิน_sinopharm |
| 85 | design - healthy - ตรงใจ - ตะคริว - สุขภาพดี | 16 | 85_design_healthy_ตรงใจ_ตะคริว |
| 86 | พยาบาลเปาโลสมุทรปราการ - design - officer - healthy - heatstroke | 15 | 86_พยาบาลเปาโลสมุทรปราการ_design_officer_healthy |
| 87 | กรกฎาคม - ริด - กระดูก - กรกฏาคม - หัก | 15 | 87_ กรกฎาคม_ริด_กระดูก_กรกฏาคม |
| 88 | officer - พยาบาลเปาโลสมุทรปราการ - theme - bdms - หัวใจ | 14 | 88_officer_พยาบาลเปาโลสมุทรปราการ_theme_bdms |
| 89 | ฟัน - อุดฟัน - ผุ - เหงือก - ซี่ | 14 | 89_ฟัน_อุดฟัน_ผุ_เหงือก |
| 90 | value - งู - สวัด - life - ncds | 13 | 90_value_งู_สวัด_ life |
| 91 | ไอบีเอส - กระเพาะ - ย้อน - ไหล - เอาหาร | 13 | 91_ไอบีเอส_กระเพาะ_ย้อน_ไหล |
| 92 | อนุบาล - ออทิสติก - เด็ก - พัฒนาการ - โกรธ | 13 | 92_อนุบาล_ออทิสติก_เด็ก_พัฒนาการ |
| 93 | covid - หวัด - วัคซีน - ไวรัสโคโรนา - ฉีด | 15 | 93_covid_หวัด_วัคซีน_ไวรัสโคโรนา |
| 94 | อ้วน - น้ำตา - เบาหวาน - อลิกซินโดรม - ไขมัน | 37 | 94_อ้วน_น้ำตา_เบาหวาน_อลิกซินโดรม |
| 95 | islands - saint - guinea - republic - franc | 12 | 95_islands_saint_guinea_republic |
| 96 | เบาหวาน - ภาวะกร - ขัตฤกษ์ - น้ำตา - ถั่ง | 12 | 96_เบาหวาน_ภาวะกร_ขัตฤกษ์_น้ำตา |
| 97 | เต้านม - เกิดสินธ์ชัย - หญิงตรีทิพย์ - ศูนย์รักษ์เต้านม - แดด | 19 | 97_เต้านม_เกิดสินธ์ชัย_หญิงตรีทิพย์_ศูนย์รักษ์เต้านม |
| 98 | kidney - ไต - ปัสสาวะ - กรวย - gfr | 18 | 98_kidney_ไต_ปัสสาวะ_กรวย |
| 99 | เทนนิส - เสิร์ฟ - วิ่ง - บาดเจ็บ - กีฬา | 42 | 99_เทนนิส_เสิร์ฟ_วิ่ง_บาดเจ็บ |
| 100 | ขลิบ - ปากมดลูก - อุ้ง - ปลาย - คลอด | 25 | 100_ขลิบ_ปากมดลูก_อุ้ง_ปลาย |
| 101 | พยาบาลเปาโลสมุทรปราการ - เด็ก - ฉลาด - theme - กุมารเวช | 20 | 101_พยาบาลเปาโลสมุทรปราการ_เด็ก_ฉลาด_theme |
| 102 | icl - สายตา - relex - เลนส์ - smile | 11 | 102_icl_สายตา_relex_เลนส์ |
| 103 | บริจาค - ตา - ต้อ - สายตา - เปลือก | 12 | 103_บริจาค_ตา_ต้อ_สายตา |
| 104 | ประคับประคอง - port - palliative - cath - พอร์ต | 11 | 104_ประคับประคอง_port_palliative_cath |
| 105 | โฮม - httpswwwmylucknursinghomecomourservices - มายลักษณ์เนอร์สซิ่ง - ศูนย์ดูแล - ผู้สูงอายุ | 10 | 105_โฮม_httpswwwmylucknursinghomecomourservices_มายลักษณ์เนอร์สซิ่ง_ศูนย์ดูแล |
| 106 | ใหล - zone - heart - หัวใจ - httpwwwcvriskcalculatorcom | 11 | 106_ใหล_zone_heart_หัวใจ |
| 107 | ขอด - เลือด - ดำ - หลอด - ขลิบ | 16 | 107_ขอด_เลือด_ดำ_หลอด |
| 108 | tavi - สัน - ซีเมนต์ - กระดูก - ข้อ | 27 | 108_tavi_สัน_ซีเมนต์_กระดูก |
| 109 | เลื่อน - vo - ขริบ - max - ไหล่ | 12 | 109_เลื่อน_vo_ขริบ_max |
</details>
## Training hyperparameters
* calculate_probabilities: True
* language: None
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 5)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: True
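The hyperparameters above map directly onto the BERTopic constructor; a minimal instantiation sketch is shown below (the training corpus and embedding model are not documented here and are left out as assumptions):

```python
from bertopic import BERTopic

# Mirrors the hyperparameters listed above; `docs` (the training documents) is an assumption
topic_model = BERTopic(
    calculate_probabilities=True,
    language=None,
    low_memory=False,
    min_topic_size=10,
    n_gram_range=(1, 5),
    nr_topics=None,
    seed_topic_list=None,
    top_n_words=10,
    verbose=True,
)
# topics, probs = topic_model.fit_transform(docs)
```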
## Framework versions
* Numpy: 1.23.5
* HDBSCAN: 0.8.29
* UMAP: 0.5.3
* Pandas: 2.0.0
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.2.2
* Transformers: 4.30.2
* Numba: 0.57.1
* Plotly: 5.15.0
* Python: 3.11.4
|
Redamancy2299/dreambooth2
|
Redamancy2299
| 2023-07-07T07:59:44Z | 6 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:finetune:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-20T08:23:40Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1
instance_prompt: A photo of a young people sleeping in front of a computer
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - Redamancy2299/dreambooth2
This is a DreamBooth model derived from stabilityai/stable-diffusion-2-1. The weights were trained on the prompt "A photo of a young people sleeping in front of a computer" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
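A minimal generation sketch with the diffusers library (precision and generation settings are assumptions; a GPU is assumed to be available):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Redamancy2299/dreambooth2", torch_dtype=torch.float16
).to("cuda")

prompt = "A photo of a young people sleeping in front of a computer"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("example.png")
```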
|
KJan05/ppo-SnowballTarget
|
KJan05
| 2023-07-07T07:59:35Z | 14 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-07-06T10:37:37Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: KJan05/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
pineiden/nominal-groups-recognition-bert-base-spanish-wwm-cased
|
pineiden
| 2023-07-07T07:49:19Z | 108 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"es",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-06T22:01:05Z |
---
language:
- es
tags:
- generated_from_trainer
model-index:
- name: nominal-groups-recognition-bert-base-spanish-wwm-cased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nominal-groups-recognition-bert-base-spanish-wwm-cased
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3247
- Body Part Precision: 0.7066
- Body Part Recall: 0.7288
- Body Part F1: 0.7175
- Body Part Number: 413
- Disease Precision: 0.7316
- Disease Recall: 0.7662
- Disease F1: 0.7485
- Disease Number: 975
- Family Member Precision: 0.8333
- Family Member Recall: 0.8333
- Family Member F1: 0.8333
- Family Member Number: 30
- Medication Precision: 0.8148
- Medication Recall: 0.7097
- Medication F1: 0.7586
- Medication Number: 93
- Procedure Precision: 0.6419
- Procedure Recall: 0.6399
- Procedure F1: 0.6409
- Procedure Number: 311
- Overall Precision: 0.7163
- Overall Recall: 0.7344
- Overall F1: 0.7252
- Overall Accuracy: 0.9201
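A minimal inference sketch with the standard token-classification pipeline (the example sentence is illustrative only):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="pineiden/nominal-groups-recognition-bert-base-spanish-wwm-cased",
    aggregation_strategy="simple",
)
print(ner("El paciente refiere dolor en la rodilla derecha y toma paracetamol."))
```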
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 13
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Body Part Precision | Body Part Recall | Body Part F1 | Body Part Number | Disease Precision | Disease Recall | Disease F1 | Disease Number | Family Member Precision | Family Member Recall | Family Member F1 | Family Member Number | Medication Precision | Medication Recall | Medication F1 | Medication Number | Procedure Precision | Procedure Recall | Procedure F1 | Procedure Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-------------------:|:----------------:|:------------:|:----------------:|:-----------------:|:--------------:|:----------:|:--------------:|:-----------------------:|:--------------------:|:----------------:|:--------------------:|:--------------------:|:-----------------:|:-------------:|:-----------------:|:-------------------:|:----------------:|:------------:|:----------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.4301 | 1.0 | 1004 | 0.3018 | 0.6054 | 0.7094 | 0.6533 | 413 | 0.6988 | 0.7067 | 0.7027 | 975 | 0.8889 | 0.8 | 0.8421 | 30 | 0.8025 | 0.6989 | 0.7471 | 93 | 0.5795 | 0.4920 | 0.5322 | 311 | 0.6645 | 0.6718 | 0.6681 | 0.9052 |
| 0.2384 | 2.0 | 2008 | 0.2903 | 0.6983 | 0.6949 | 0.6966 | 413 | 0.7402 | 0.7159 | 0.7278 | 975 | 0.8 | 0.8 | 0.8000 | 30 | 0.7283 | 0.7204 | 0.7243 | 93 | 0.6026 | 0.6045 | 0.6035 | 311 | 0.7069 | 0.6937 | 0.7003 | 0.9148 |
| 0.1625 | 3.0 | 3012 | 0.2948 | 0.6653 | 0.7603 | 0.7096 | 413 | 0.7412 | 0.7374 | 0.7393 | 975 | 0.9231 | 0.8 | 0.8571 | 30 | 0.8313 | 0.7419 | 0.7841 | 93 | 0.5789 | 0.6720 | 0.6220 | 311 | 0.6982 | 0.7327 | 0.7151 | 0.9188 |
| 0.1142 | 4.0 | 4016 | 0.3247 | 0.7066 | 0.7288 | 0.7175 | 413 | 0.7316 | 0.7662 | 0.7485 | 975 | 0.8333 | 0.8333 | 0.8333 | 30 | 0.8148 | 0.7097 | 0.7586 | 93 | 0.6419 | 0.6399 | 0.6409 | 311 | 0.7163 | 0.7344 | 0.7252 | 0.9201 |
| 0.0858 | 5.0 | 5020 | 0.3583 | 0.6996 | 0.7554 | 0.7264 | 413 | 0.7451 | 0.7436 | 0.7444 | 975 | 0.8333 | 0.8333 | 0.8333 | 30 | 0.8375 | 0.7204 | 0.7746 | 93 | 0.5976 | 0.6495 | 0.6225 | 311 | 0.7129 | 0.7305 | 0.7216 | 0.9180 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
shaunkyn/sd_webui_LoRa
|
shaunkyn
| 2023-07-07T07:43:46Z | 0 | 1 | null |
[
"license:unknown",
"region:us"
] | null | 2023-05-25T05:43:09Z |
---
license: unknown
---
Source:
https://civitai.com/models/18095/chinese-bmale-likeness
https://civitai.com/models/44922/oc-illustration
https://civitai.com/models/47859?modelVersionId=64536
https://civitai.com/models/43132/oppa Trigger Words: OPPAV3
https://civitai.com/models/18224/cryptopunks Trigger Words: art by punks_sd
American Comic Style LoRa:
https://civitai.com/models/22912/bored-ape-yacht-club-lora
https://civitai.com/models/54127/sbahj-comics-homestuck
https://civitai.com/models/41417/steamed-diffusion
https://civitai.com/models/17361/peanuts-comics-art-style
https://civitai.com/models/20606/modern-american-comics-style-1
|
nolanaatama/shrkmfbkhllv1stgnrvcv2300pchsyy5
|
nolanaatama
| 2023-07-07T07:43:31Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-07T07:40:09Z |
---
license: creativeml-openrail-m
---
|
AntonyG/fine-tune-wav2vec2-large-xls-r-1b-sw
|
AntonyG
| 2023-07-07T07:35:56Z | 26 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-03-20T06:25:23Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_11_0
metrics:
- wer
model-index:
- name: fine-tune-wav2vec2-large-xls-r-1b-sw
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_11_0
type: common_voice_11_0
config: sw
split: test[:1%]
args: sw
metrics:
- name: Wer
type: wer
value: 0.5834348355663824
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tune-wav2vec2-large-xls-r-300m-sw
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_11_0 swahili dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2834
- Wer: 0.5834
## Model description
This model is fine-tuned for general Swahili speech recognition tasks. You can watch our hour-long [webinar](https://drive.google.com/file/d/1OkLx3d9xivdyxH8yYsZtwObhEX5Ptn5y/view?usp=drive_link) and see the [slides](https://docs.google.com/presentation/d/1sExJLwZLMNMKGnpuxy-ttF5KqDXJyKK2jNNTUabo5_Q/edit?usp=sharing) on this work.
## Intended uses & limitations
The intention is to transcribe general Swahili speech. With further development, we'll fine-tune the model for domain-specific (we are focused on hospital tasks) Swahili conversations.
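A minimal transcription sketch with the ASR pipeline (the audio path is illustrative; replace it with a real recording):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="AntonyG/fine-tune-wav2vec2-large-xls-r-1b-sw",
)
print(asr("swahili_sample.wav")["text"])
```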
## Training and evaluation data
To appreciate the transformation we did on the data, you can read our [blog on data preparation](https://medium.com/@gitau_am/from-raw-data-to-accurate-speech-recognition-asr-my-journey-of-data-preparation-df3a1b0dee3a).
## Training procedure
We also [documented](https://medium.com/@gitau_am/exploring-asr-model-development-fine-tuning-xls-r-wav2vec2-model-with-swahili-data-b95134d116b8) some lessons from the fine-tuning exercise.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 9
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.72 | 200 | 3.0092 | 1.0 |
| 4.1305 | 3.43 | 400 | 2.9159 | 1.0 |
| 4.1305 | 5.15 | 600 | 1.4301 | 0.7040 |
| 0.9217 | 6.87 | 800 | 1.3143 | 0.6529 |
| 0.9217 | 8.58 | 1000 | 1.2834 | 0.5834 |
### Framework versions
- Transformers 4.27.0
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
aroot/eng-fra-simcse_longest_usrl
|
aroot
| 2023-07-07T07:35:54Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-07T07:16:50Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-fra-simcse_longest_usrl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-fra-simcse_longest_usrl
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1250
- Bleu: 32.6481
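A minimal English-to-French inference sketch, assuming the tokenizer in this repository keeps the language codes of the base mBART-50 checkpoint:

```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

repo = "aroot/eng-fra-simcse_longest_usrl"
model = MBartForConditionalGeneration.from_pretrained(repo)
tokenizer = MBart50TokenizerFast.from_pretrained(repo, src_lang="en_XX", tgt_lang="fr_XX")

inputs = tokenizer("The weather is nice today.", return_tensors="pt")
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["fr_XX"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```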
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
aroot/eng-fra-simcse_longestplus_ssrl
|
aroot
| 2023-07-07T07:35:27Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-07T07:16:34Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-fra-simcse_longestplus_ssrl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-fra-simcse_longestplus_ssrl
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1355
- Bleu: 32.4402
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
aroot/eng-fra-simcse_longest_ssrl
|
aroot
| 2023-07-07T07:32:30Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-07T07:13:28Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-fra-simcse_longest_ssrl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-fra-simcse_longest_ssrl
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1282
- Bleu: 32.2561
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
dvinagre/wav2vec2-lg-xlsr-en-speech-emotion-recognition-finetuned-gtzan
|
dvinagre
| 2023-07-07T07:21:12Z | 33 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-06-26T09:22:46Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: wav2vec2-lg-xlsr-en-speech-emotion-recognition-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-lg-xlsr-en-speech-emotion-recognition-finetuned-gtzan
This model is a fine-tuned version of [ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition](https://huggingface.co/ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7145
- Accuracy: 0.88
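A minimal genre-classification sketch with the audio-classification pipeline (the clip path is illustrative):

```python
from transformers import pipeline

clf = pipeline(
    "audio-classification",
    model="dvinagre/wav2vec2-lg-xlsr-en-speech-emotion-recognition-finetuned-gtzan",
)
print(clf("music_clip.wav"))  # replace with a real audio file
```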
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9771 | 1.0 | 225 | 1.7112 | 0.48 |
| 1.0169 | 2.0 | 450 | 1.1513 | 0.62 |
| 0.7104 | 3.0 | 675 | 0.8799 | 0.7 |
| 1.5425 | 4.0 | 900 | 0.7419 | 0.8 |
| 0.2908 | 5.0 | 1125 | 0.6713 | 0.8 |
| 0.8275 | 6.0 | 1350 | 0.6961 | 0.84 |
| 0.0298 | 7.0 | 1575 | 0.8689 | 0.82 |
| 0.0163 | 8.0 | 1800 | 0.7662 | 0.86 |
| 0.0162 | 9.0 | 2025 | 0.7143 | 0.88 |
| 0.2649 | 10.0 | 2250 | 0.7145 | 0.88 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
YakovElm/Qt_20_BERT_More_Properties
|
YakovElm
| 2023-07-07T07:15:22Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-07T07:14:48Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Qt_20_BERT_More_Properties
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Qt_20_BERT_More_Properties
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2099
- Train Accuracy: 0.9462
- Validation Loss: 0.1746
- Validation Accuracy: 0.9586
- Epoch: 2
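A minimal inference sketch with the TensorFlow classes (the example text and the meaning of the output labels are assumptions; they are not documented in this card):

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

repo = "YakovElm/Qt_20_BERT_More_Properties"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("Example issue description text", return_tensors="tf")
probs = tf.nn.softmax(model(**inputs).logits, axis=-1)
print(probs.numpy())
```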
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2317 | 0.9373 | 0.1722 | 0.9586 | 0 |
| 0.2121 | 0.9462 | 0.1721 | 0.9586 | 1 |
| 0.2099 | 0.9462 | 0.1746 | 0.9586 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
lushuai726/testmodel
|
lushuai726
| 2023-07-07T07:13:24Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-07T07:09:27Z |
This is the model card
-----
#test test
|
Bugsys0302/POVBGV2
|
Bugsys0302
| 2023-07-07T07:03:04Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-07T06:59:06Z |
---
license: creativeml-openrail-m
---
|
aroot/eng-mya-simcse_longest_usrb
|
aroot
| 2023-07-07T07:02:53Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-07T06:41:21Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-mya-simcse_longest_usrb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-mya-simcse_longest_usrb
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8539
- Bleu: 4.2285
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
SergiyC/ppo-LunarLander-v2
|
SergiyC
| 2023-07-07T07:01:33Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-07T07:01:13Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 247.16 +/- 38.23
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; adjust it to the file actually stored in this repository):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed; check the repository files for the actual .zip name
checkpoint = load_from_hub("SergiyC/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
YojitShinde/Q-Taxi-v3
|
YojitShinde
| 2023-07-07T07:01:32Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-07T07:01:28Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.69
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# load_from_hub is the helper from the Deep RL course notebook (downloads and unpickles the Q-table)
model = load_from_hub(repo_id="YojitShinde/Q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
AustinCarthy/Benign10MGPT2_domain_100KP_BFall_fromP_90K_topP_0.75_ratio5
|
AustinCarthy
| 2023-07-07T06:59:41Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2023-07-07T03:33:31Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Benign10MGPT2_domain_100KP_BFall_fromP_90K_topP_0.75_ratio5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Benign10MGPT2_domain_100KP_BFall_fromP_90K_topP_0.75_ratio5
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the Train benign: Fall, Test Benign: Fall, Train phish: Fall, Test phish: Fall, generated url dataset: generated_phish_Benign10MGPT2_using_phish_95K_top_p_0.75domain dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0229
- Accuracy: 0.9976
- F1: 0.9748
- Precision: 0.9962
- Recall: 0.9542
- Roc Auc Score: 0.9770
- Tpr At Fpr 0.01: 0.9358
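A minimal scoring sketch (assuming the fine-tuned classifier weights are published in this repository; the URL below is purely illustrative):

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="AustinCarthy/Benign10MGPT2_domain_100KP_BFall_fromP_90K_topP_0.75_ratio5",
)
print(clf("http://secure-login-verify.example.com/account"))
```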
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.008 | 1.0 | 35625 | 0.0214 | 0.9961 | 0.9572 | 0.9983 | 0.9194 | 0.9597 | 0.9208 |
| 0.0059 | 2.0 | 71250 | 0.0239 | 0.9959 | 0.9557 | 0.9963 | 0.9182 | 0.9590 | 0.8816 |
| 0.0041 | 3.0 | 106875 | 0.0247 | 0.9968 | 0.9651 | 0.9955 | 0.9364 | 0.9681 | 0.9088 |
| 0.0001 | 4.0 | 142500 | 0.0260 | 0.9971 | 0.9687 | 0.9962 | 0.9426 | 0.9712 | 0.9298 |
| 0.0011 | 5.0 | 178125 | 0.0229 | 0.9976 | 0.9748 | 0.9962 | 0.9542 | 0.9770 | 0.9358 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
arham061/auto_complete_distilgpt2_financeAlpacca
|
arham061
| 2023-07-07T06:58:54Z | 130 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-06T11:07:26Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: auto_complete_distilgpt2_financeAlpacca
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# auto_complete_distilgpt2_financeAlpacca
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5475
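A minimal auto-completion sketch with the text-generation pipeline (the prompt and generation settings are illustrative):

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="arham061/auto_complete_distilgpt2_financeAlpacca",
)
print(generator("The stock market fell today because", max_new_tokens=30)[0]["generated_text"])
```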
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
squeeze-ai-lab/sq-opt-6.7b-w4-s50
|
squeeze-ai-lab
| 2023-07-07T06:58:29Z | 0 | 0 | null |
[
"arxiv:2306.07629",
"arxiv:2205.01068",
"region:us"
] | null | 2023-07-07T05:50:45Z |
**SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving.
**TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced precision quantization.
But a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method.
Dense-and-Sparse splits weight matrices into two components: A dense component that can be heavily quantized without affecting model performance,
as well as a sparse part that preserves sensitive and outlier parts of the weight matrices. With this approach,
we are able to serve larger models with a smaller memory footprint, the same latency, and yet higher accuracy and quality.
For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf).
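To make the dense-and-sparse idea concrete, the sketch below shows an illustrative decomposition of a weight matrix. It is not the actual SqueezeLLM implementation: the real method selects outliers with a sensitivity-based criterion, while this sketch simply keeps the largest-magnitude entries.

```python
import torch

def dense_sparse_split(W: torch.Tensor, outlier_fraction: float = 0.005):
    """Split W into a dense part (to be low-bit quantized) and a sparse full-precision part."""
    k = max(1, int(outlier_fraction * W.numel()))
    # Treat the largest-magnitude weights as the sensitive/outlier entries (simplification)
    idx = torch.topk(W.abs().flatten(), k).indices
    mask = torch.zeros(W.numel(), dtype=torch.bool)
    mask[idx] = True
    mask = mask.view_as(W)
    sparse_part = (W * mask).to_sparse()  # kept in full precision
    dense_part = W * (~mask)              # this part goes through heavy quantization
    return dense_part, sparse_part
```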
## Model description
4-bit quantized OPT 6.7B model using SqueezeLLM. More details can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf).
* **Base Model:** [OPT 6.7B](https://arxiv.org/abs/2205.01068)
* **Bitwidth:** 4-bit
* **Sparsity Level:** 0.5%
## Links
* **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf)
* **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM)
---
license: other
---
|
aroot/eng-mya-simcse_longestplus_usrb
|
aroot
| 2023-07-07T06:58:10Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-07T06:37:16Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-mya-simcse_longestplus_usrb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-mya-simcse_longestplus_usrb
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8846
- Bleu: 4.2095
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
aroot/eng-mya-simcse_longestplus_ssrb
|
aroot
| 2023-07-07T06:58:00Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-07T06:36:57Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-mya-simcse_longestplus_ssrb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-mya-simcse_longestplus_ssrb
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8875
- Bleu: 4.1475
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
squeeze-ai-lab/sq-opt-6.7b-w3-s50
|
squeeze-ai-lab
| 2023-07-07T06:57:38Z | 0 | 0 | null |
[
"arxiv:2306.07629",
"arxiv:2205.01068",
"region:us"
] | null | 2023-07-07T05:50:43Z |
**SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving.
**TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced precision quantization.
But a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method.
Dense-and-Sparse splits weight matrices into two components: A dense component that can be heavily quantized without affecting model performance,
as well as a sparse part that preserves sensitive and outlier parts of the weight matrices. With this approach,
we are able to serve larger models with a smaller memory footprint, the same latency, and yet higher accuracy and quality.
For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf).
## Model description
3-bit quantized OPT 6.7B model using SqueezeLLM. More details can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf).
* **Base Model:** [OPT 6.7B](https://arxiv.org/abs/2205.01068)
* **Bitwidth:** 3-bit
* **Sparsity Level:** 0.5%
## Links
* **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf)
* **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM)
---
license: other
---
|
aroot/eng-mya-simcse_longest_ssrb
|
aroot
| 2023-07-07T06:54:46Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-07T06:33:36Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-mya-simcse_longest_ssrb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-mya-simcse_longest_ssrb
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8458
- Bleu: 4.2571
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Multi-Domain-Expert-Learning/given-mpt-7b
|
Multi-Domain-Expert-Learning
| 2023-07-07T06:50:23Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mpt",
"text-generation",
"custom_code",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-06-24T17:15:33Z |
---
license: apache-2.0
inference: false
---
# Given-MPT-7B
This is a merge of the following MPT-7B models:
- **g**orilla-llm/gorilla-mpt-7b-hf-v0
- **i**bm/mpt-7b-instruct2
- Teh**V**enom/MPT-7b-WizardLM_Uncensored-Storywriter-Merge
- **e**mozilla/mpt-7b-storysummarizer
- **n**omic-ai/gpt4all-mpt
## Model License
Apache 2.0
## Purpose
This model is for experimenting with merging and routing to expert layers.
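A minimal loading sketch, following the same settings used for the eval below (bfloat16 and trust_remote_code); the prompt is illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Multi-Domain-Expert-Learning/given-mpt-7b"
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, trust_remote_code=True
)

inputs = tokenizer("The experts disagree about", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```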
# Test eval on only 10% of eval set
hf-causal (pretrained=Multi-Domain-Expert-Layers/given-mpt-7b,dtype=bfloat16,trust_remote_code=True), limit: 0.1, provide_description: False, num_fewshot: 0, batch_size: None
| Task |Version| Metric | Value | |Stderr|
|-------------------------------------------------|------:|-----------|------:|---|-----:|
|arc_challenge | 0|acc | 0.4274|± |0.0459|
| | |acc_norm | 0.3846|± |0.0452|
|arc_easy | 0|acc | 0.7863|± |0.0381|
| | |acc_norm | 0.7350|± |0.0410|
|hellaswag | 0|acc | 0.5556|± |0.0461|
| | |acc_norm | 0.8120|± |0.0363|
|hendrycksTest-college_chemistry | 0|acc | 0.3600|± |0.0482|
| | |acc_norm | 0.3700|± |0.0485|
|hendrycksTest-college_computer_science | 0|acc | 0.3400|± |0.0476|
| | |acc_norm | 0.3600|± |0.0482|
|hendrycksTest-college_mathematics | 0|acc | 0.2500|± |0.0435|
| | |acc_norm | 0.2900|± |0.0456|
|hendrycksTest-college_medicine | 0|acc | 0.3675|± |0.0448|
| | |acc_norm | 0.3162|± |0.0432|
|hendrycksTest-college_physics | 0|acc | 0.2451|± |0.0428|
| | |acc_norm | 0.2941|± |0.0453|
|hendrycksTest-computer_security | 0|acc | 0.4800|± |0.0502|
| | |acc_norm | 0.4400|± |0.0499|
|hendrycksTest-conceptual_physics | 0|acc | 0.2051|± |0.0375|
| | |acc_norm | 0.1709|± |0.0350|
|hendrycksTest-econometrics | 0|acc | 0.2982|± |0.0430|
| | |acc_norm | 0.2368|± |0.0400|
|hendrycksTest-electrical_engineering | 0|acc | 0.3248|± |0.0435|
| | |acc_norm | 0.3590|± |0.0445|
|hendrycksTest-elementary_mathematics | 0|acc | 0.3333|± |0.0438|
| | |acc_norm | 0.3162|± |0.0432|
|hendrycksTest-formal_logic | 0|acc | 0.3077|± |0.0429|
| | |acc_norm | 0.3248|± |0.0435|
|hendrycksTest-global_facts | 0|acc | 0.3000|± |0.0461|
| | |acc_norm | 0.2700|± |0.0446|
|hendrycksTest-high_school_biology | 0|acc | 0.3675|± |0.0448|
| | |acc_norm | 0.3077|± |0.0429|
|hendrycksTest-high_school_chemistry | 0|acc | 0.2564|± |0.0405|
| | |acc_norm | 0.2906|± |0.0422|
|hendrycksTest-high_school_computer_science | 0|acc | 0.4100|± |0.0494|
| | |acc_norm | 0.4400|± |0.0499|
|hendrycksTest-high_school_european_history | 0|acc | 0.4359|± |0.0460|
| | |acc_norm | 0.3590|± |0.0445|
|hendrycksTest-high_school_geography | 0|acc | 0.3248|± |0.0435|
| | |acc_norm | 0.3675|± |0.0448|
|hendrycksTest-high_school_government_and_politics| 0|acc | 0.3932|± |0.0454|
| | |acc_norm | 0.3932|± |0.0454|
|hendrycksTest-high_school_macroeconomics | 0|acc | 0.3333|± |0.0438|
| | |acc_norm | 0.3248|± |0.0435|
|hendrycksTest-high_school_mathematics | 0|acc | 0.2051|± |0.0375|
| | |acc_norm | 0.2564|± |0.0405|
|hendrycksTest-high_school_microeconomics | 0|acc | 0.3504|± |0.0443|
| | |acc_norm | 0.4188|± |0.0458|
|hendrycksTest-high_school_physics | 0|acc | 0.2650|± |0.0410|
| | |acc_norm | 0.2906|± |0.0422|
|hendrycksTest-high_school_psychology | 0|acc | 0.3761|± |0.0450|
| | |acc_norm | 0.3419|± |0.0440|
|hendrycksTest-high_school_statistics | 0|acc | 0.3077|± |0.0429|
| | |acc_norm | 0.3504|± |0.0443|
|hendrycksTest-high_school_us_history | 0|acc | 0.3333|± |0.0438|
| | |acc_norm | 0.3333|± |0.0438|
|hendrycksTest-high_school_world_history | 0|acc | 0.3333|± |0.0438|
| | |acc_norm | 0.3419|± |0.0440|
|hendrycksTest-human_aging | 0|acc | 0.3761|± |0.0450|
| | |acc_norm | 0.3162|± |0.0432|
|hendrycksTest-human_sexuality | 0|acc | 0.4274|± |0.0459|
| | |acc_norm | 0.3761|± |0.0450|
|hendrycksTest-international_law | 0|acc | 0.4188|± |0.0458|
| | |acc_norm | 0.4957|± |0.0464|
|hendrycksTest-jurisprudence | 0|acc | 0.3148|± |0.0449|
| | |acc_norm | 0.4815|± |0.0483|
|hendrycksTest-logical_fallacies | 0|acc | 0.3504|± |0.0443|
| | |acc_norm | 0.3675|± |0.0448|
|hendrycksTest-machine_learning | 0|acc | 0.3214|± |0.0443|
| | |acc_norm | 0.2946|± |0.0433|
|hendrycksTest-management | 0|acc | 0.3786|± |0.0480|
| | |acc_norm | 0.3495|± |0.0472|
|hendrycksTest-marketing | 0|acc | 0.5043|± |0.0464|
| | |acc_norm | 0.4188|± |0.0458|
|hendrycksTest-medical_genetics | 0|acc | 0.3200|± |0.0469|
| | |acc_norm | 0.4100|± |0.0494|
|hendrycksTest-miscellaneous | 0|acc | 0.5299|± |0.0463|
| | |acc_norm | 0.4872|± |0.0464|
|hendrycksTest-moral_disputes | 0|acc | 0.3248|± |0.0435|
| | |acc_norm | 0.3162|± |0.0432|
|hendrycksTest-moral_scenarios | 0|acc | 0.3248|± |0.0435|
| | |acc_norm | 0.2479|± |0.0401|
|hendrycksTest-nutrition | 0|acc | 0.3675|± |0.0448|
| | |acc_norm | 0.3932|± |0.0454|
|hendrycksTest-philosophy | 0|acc | 0.2991|± |0.0425|
| | |acc_norm | 0.3504|± |0.0443|
|hendrycksTest-prehistory | 0|acc | 0.2821|± |0.0418|
| | |acc_norm | 0.3248|± |0.0435|
|hendrycksTest-professional_accounting | 0|acc | 0.2137|± |0.0381|
| | |acc_norm | 0.2222|± |0.0386|
|hendrycksTest-professional_law | 0|acc | 0.3077|± |0.0429|
| | |acc_norm | 0.2735|± |0.0414|
|hendrycksTest-professional_medicine | 0|acc | 0.2991|± |0.0425|
| | |acc_norm | 0.2650|± |0.0410|
|hendrycksTest-professional_psychology | 0|acc | 0.3248|± |0.0435|
| | |acc_norm | 0.3419|± |0.0440|
|hendrycksTest-public_relations | 0|acc | 0.3909|± |0.0467|
| | |acc_norm | 0.3545|± |0.0458|
|hendrycksTest-security_studies | 0|acc | 0.3419|± |0.0440|
| | |acc_norm | 0.2906|± |0.0422|
|hendrycksTest-sociology | 0|acc | 0.3761|± |0.0450|
| | |acc_norm | 0.3162|± |0.0432|
|hendrycksTest-us_foreign_policy | 0|acc | 0.5000|± |0.0503|
| | |acc_norm | 0.4100|± |0.0494|
|hendrycksTest-virology | 0|acc | 0.3932|± |0.0454|
| | |acc_norm | 0.3248|± |0.0435|
|hendrycksTest-world_religions | 0|acc | 0.5299|± |0.0463|
| | |acc_norm | 0.5128|± |0.0464|
|truthfulqa_gen | 1|bleurt_max |-0.8551|± |0.0501|
| | |bleurt_acc | 0.3590|± |0.0445|
| | |bleurt_diff|-0.1292|± |0.0483|
| | |bleu_max |19.3738|± |1.8461|
| | |bleu_acc | 0.3932|± |0.0454|
| | |bleu_diff |-4.3883|± |2.1748|
| | |rouge1_max |41.8428|± |2.6156|
| | |rouge1_acc | 0.3162|± |0.0432|
| | |rouge1_diff|-8.8583|± |2.7745|
| | |rouge2_max |26.3956|± |2.8311|
| | |rouge2_acc | 0.2137|± |0.0381|
| | |rouge2_diff|-9.5287|± |3.3258|
| | |rougeL_max |39.5215|± |2.5620|
| | |rougeL_acc | 0.3162|± |0.0432|
| | |rougeL_diff|-8.5753|± |2.8259|
|
IIC/mdeberta-v3-base-nubes
|
IIC
| 2023-07-07T06:43:39Z | 120 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"deberta-v2",
"text-classification",
"biomedical",
"clinical",
"spanish",
"mdeberta-v3-base",
"token-classification",
"es",
"dataset:plncmm/nubes",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-21T15:55:52Z |
---
language: es
tags:
- biomedical
- clinical
- spanish
- mdeberta-v3-base
license: mit
datasets:
- "plncmm/nubes"
metrics:
- f1
model-index:
- name: IIC/mdeberta-v3-base-nubes
results:
- task:
type: token-classification
dataset:
name: nubes
type: plncmm/nubes
split: test
metrics:
- name: f1
type: f1
value: 0.919
pipeline_tag: token-classification
---
# mdeberta-v3-base-nubes
This model is a fine-tuned version of mdeberta-v3-base for the nubes dataset used in a benchmark in the paper TODO. The model has an F1 of 0.919.
Please refer to the original publication for more information: TODO LINK
## Parameters used
| parameter | Value |
|-------------------------|:-----:|
| batch size | 32 |
| learning rate | 3e-05 |
| classifier dropout | 0 |
| warmup ratio | 0 |
| warmup steps | 0 |
| weight decay | 0 |
| optimizer | AdamW |
| epochs | 10 |
| early stopping patience | 3 |
## BibTeX entry and citation info
```bibtex
TODO
```
|
squeeze-ai-lab/sq-opt-13b-w4-s50
|
squeeze-ai-lab
| 2023-07-07T06:43:20Z | 0 | 0 | null |
[
"arxiv:2306.07629",
"arxiv:2205.01068",
"region:us"
] | null | 2023-07-07T05:51:15Z |
**SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving.
**TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced precision quantization.
But a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method.
Dense-and-Sparse splits weight matrices into two components: A dense component that can be heavily quantized without affecting model performance,
as well as a sparse part that preserves sensitive and outlier parts of the weight matrices. With this approach,
we are able to serve larger models with a smaller memory footprint, the same latency, and yet higher accuracy and quality.
For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf).
## Model description
4-bit quantized OPT 13B model using SqueezeLLM. More details can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf).
* **Base Model:** [OPT 13B](https://arxiv.org/abs/2205.01068)
* **Bitwidth:** 4-bit
* **Sparsity Level:** 0.5%
## Links
* **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf)
* **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM)
---
license: other
---
|
squeeze-ai-lab/sq-opt-13b-w3-s50
|
squeeze-ai-lab
| 2023-07-07T06:42:22Z | 0 | 0 | null |
[
"arxiv:2306.07629",
"arxiv:2205.01068",
"region:us"
] | null | 2023-07-07T05:51:11Z |
**SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving.
**TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced precision quantization.
But a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method.
Dense-and-Sparse splits weight matrices into two components: A dense component that can be heavily quantized without affecting model performance,
as well as a sparse part that preserves sensitive and outlier parts of the weight matrices. With this approach,
we are able to serve larger models with a smaller memory footprint, the same latency, and yet higher accuracy and quality.
For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf).
## Model description
3-bit quantized OPT 13B model using SqueezeLLM. More details can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf).
* **Base Model:** [OPT 13B](https://arxiv.org/abs/2205.01068)
* **Bitwidth:** 3-bit
* **Sparsity Level:** 0.5%
## Links
* **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf)
* **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM)
---
license: other
---
|
madrylab/robust-imagenet-models
|
madrylab
| 2023-07-07T06:42:07Z | 0 | 1 | null |
[
"arxiv:2007.08489",
"license:mit",
"region:us"
] | null | 2023-07-07T06:05:27Z |
---
license: mit
---
## Overview
This repository contains the robust ImageNet models used in our paper ["Do adversarially robust imagenet models transfer better?"](https://arxiv.org/abs/2007.08489).
See our papers's [GitHub repository](https://github.com/microsoft/robust-models-transfer) for more details!
## Summary of our pretrained models
#### Standard Accuracy of L2-Robust ImageNet Models
|Model|ε=0|ε=0.01|ε=0.03|ε=0.05|ε=0.1|ε=0.25|ε=0.5|ε=1.0|ε=3.0|ε=5.0|
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|ResNet-18 |[69.79](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet18_l2_eps0.ckpt) | [69.90](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet18_l2_eps0.01.ckpt) | [69.24](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet18_l2_eps0.03.ckpt) | [69.15](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet18_l2_eps0.05.ckpt) | [68.77](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet18_l2_eps0.1.ckpt) | [67.43](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet18_l2_eps0.25.ckpt) | [65.49](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet18_l2_eps0.5.ckpt) | [62.32](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet18_l2_eps1.ckpt) | [53.12](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet18_l2_eps3.ckpt) | [45.59](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet18_l2_eps5.ckpt)
ResNet-50|[75.80](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet50_l2_eps0.ckpt) | [75.68](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet50_l2_eps0.01.ckpt) | [75.76](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet50_l2_eps0.03.ckpt) | [75.59](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet50_l2_eps0.05.ckpt) | [74.78](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet50_l2_eps0.1.ckpt) | [74.14](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet50_l2_eps0.25.ckpt) | [73.16](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet50_l2_eps0.5.ckpt) | [70.43](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet50_l2_eps1.ckpt) | [62.83](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet50_l2_eps3.ckpt) | [56.13](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet50_l2_eps5.ckpt)
Wide-ResNet-50-2|[76.97](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/wide_resnet50_2_l2_eps0.ckpt) | [77.25](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/wide_resnet50_2_l2_eps0.01.ckpt) | [77.26](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/wide_resnet50_2_l2_eps0.03.ckpt) | [77.17](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/wide_resnet50_2_l2_eps0.05.ckpt) | [76.74](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/wide_resnet50_2_l2_eps0.1.ckpt) | [76.21](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/wide_resnet50_2_l2_eps0.25.ckpt) | [75.11](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/wide_resnet50_2_l2_eps0.5.ckpt) | [73.41](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/wide_resnet50_2_l2_eps1.ckpt) | [66.90](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/wide_resnet50_2_l2_eps3.ckpt) | [60.94](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/wide_resnet50_2_l2_eps5.ckpt)
Wide-ResNet-50-4|[77.91](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/wide_resnet50_4_l2_eps0.ckpt) |[78.02](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/wide_resnet50_4_l2_eps0.01.ckpt)|[77.87](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/wide_resnet50_4_l2_eps0.03.ckpt)|[77.77](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/wide_resnet50_4_l2_eps0.05.ckpt)|[77.64](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/wide_resnet50_4_l2_eps0.1.ckpt)|[77.10](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/wide_resnet50_4_l2_eps0.25.ckpt)|[76.52](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/wide_resnet50_4_l2_eps0.5.ckpt)| [75.51](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/wide_resnet50_4_l2_eps1.ckpt) | [69.67](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/wide_resnet50_4_l2_eps3.ckpt)|[65.20](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/wide_resnet50_4_l2_eps5.ckpt)
|Model | ε=0|ε=3|
|:-----:|:-----:|:-----:|
DenseNet |[77.37](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/densenet_l2_eps0.ckpt) | [66.98](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/densenet_l2_eps3.ckpt)
MNASNET|[60.97](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/mnasnet_l2_eps0.ckpt) | [41.83](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/mnasnet_l2_eps3.ckpt)
MobileNet-v2|[65.26](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/mobilenet_l2_eps0.ckpt) | [50.40](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/mobilenet_l2_eps3.ckpt)
ResNeXt50_32x4d|[77.38](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnext50_32x4d_l2_eps0.ckpt) | [66.25](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnext50_32x4d_l2_eps3.ckpt)
ShuffleNet|[64.25](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/shufflenet_l2_eps0.ckpt) | [43.32](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/shufflenet_l2_eps3.ckpt)
VGG16_bn|[73.66](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/vgg16_bn_l2_eps0.ckpt) | [57.19](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/vgg16_bn_l2_eps3.ckpt)
#### Standard Accuracy of Linf-Robust ImageNet Models
|Model|ε=0.5/255|ε=1/255|ε=2/255|ε=4/255|ε=8/255|
|---|:---:|:---:|:---:|:---:|:---:|
|ResNet-18|[66.13](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet18_linf_eps0.5.ckpt) | [63.46](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet18_linf_eps1.0.ckpt) | [59.63](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet18_linf_eps2.0.ckpt) | [52.49](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet18_linf_eps4.0.ckpt) | [42.11](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet18_linf_eps8.0.ckpt)
ResNet-50 |[73.73](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet50_linf_eps0.5.ckpt) | [72.05](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet50_linf_eps1.0.ckpt) | [69.10](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet50_linf_eps2.0.ckpt) | [63.86](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet50_linf_eps4.0.ckpt) | [54.53](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet50_linf_eps8.0.ckpt)
Wide-ResNet-50-2 |[75.82](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/wide_resnet50_2_linf_eps0.5.ckpt) | [74.65](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/wide_resnet50_2_linf_eps1.0.ckpt) | [72.35](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/wide_resnet50_2_linf_eps2.0.ckpt) | [68.41](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/wide_resnet50_2_linf_eps4.0.ckpt) | [60.82](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/wide_resnet50_2_linf_eps8.0.ckpt)
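These checkpoints are distributed as `.ckpt` files; a minimal loading sketch, assuming they follow the MadryLab `robustness` library's checkpoint format (the architecture name, ImageNet path, and checkpoint filename below are placeholders):
```python
from robustness import model_utils, datasets

# Placeholders: point these at your ImageNet root and a downloaded checkpoint.
ds = datasets.ImageNet("/path/to/imagenet")
model, _ = model_utils.make_and_restore_model(
    arch="resnet50",
    dataset=ds,
    resume_path="resnet50_l2_eps3.ckpt",
)
model.eval()
```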
|
aroot/eng-guj-simcse_longest_usrb
|
aroot
| 2023-07-07T06:21:26Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-07T05:59:25Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-guj-simcse_longest_usrb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-guj-simcse_longest_usrb
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2361
- Bleu: 2.8995
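Since this is a fine-tune of mbart-large-50-many-to-many-mmt, inference presumably follows the base model's conventions; a minimal English→Gujarati sketch, assuming the fine-tuned tokenizer keeps mBART-50's language codes (`en_XX`, `gu_IN`):
```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model = MBartForConditionalGeneration.from_pretrained("aroot/eng-guj-simcse_longest_usrb")
tokenizer = MBart50TokenizerFast.from_pretrained("aroot/eng-guj-simcse_longest_usrb")

tokenizer.src_lang = "en_XX"  # English source
inputs = tokenizer("The weather is nice today.", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["gu_IN"],  # Gujarati target
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```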
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
YakovElm/Qt_15_BERT_More_Properties
|
YakovElm
| 2023-07-07T06:19:38Z | 66 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-07T06:19:03Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Qt_15_BERT_More_Properties
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Qt_15_BERT_More_Properties
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2332
- Train Accuracy: 0.9367
- Validation Loss: 0.1937
- Validation Accuracy: 0.9505
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2409 | 0.9367 | 0.2001 | 0.9505 | 0 |
| 0.2357 | 0.9367 | 0.1992 | 0.9505 | 1 |
| 0.2332 | 0.9367 | 0.1937 | 0.9505 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
aroot/eng-guj-simcse_longestplus_ssrb
|
aroot
| 2023-07-07T06:17:17Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-07T05:55:48Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-guj-simcse_longestplus_ssrb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-guj-simcse_longestplus_ssrb
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2951
- Bleu: 2.8469
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
squeeze-ai-lab/sq-opt-2.7b-w4-s50
|
squeeze-ai-lab
| 2023-07-07T06:14:28Z | 0 | 0 | null |
[
"arxiv:2306.07629",
"arxiv:2205.01068",
"region:us"
] | null | 2023-07-07T05:50:23Z |
**SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving.
**TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced precision quantization.
But a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method.
Dense-and-Sparse splits weight matrices into two components: a dense component that can be heavily quantized without affecting model performance,
and a sparse part that preserves sensitive and outlier parts of the weight matrices. With this approach,
we are able to serve larger models with a smaller memory footprint, the same latency, and yet higher accuracy and quality.
For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf).
## Model description
4-bit quantized OPT 2.7B model using SqueezeLLM. More details can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf).
* **Base Model:** [OPT 2.7B](https://arxiv.org/abs/2205.01068)
* **Bitwidth:** 4-bit
* **Sparsity Level:** 0.5%
## Links
* **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf)
* **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM)
---
license: other
---
|
pratikg123/falcon-7b
|
pratikg123
| 2023-07-07T06:08:08Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-07T06:06:22Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
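The quantization flags above map onto `transformers`' `BitsAndBytesConfig`; a rough equivalent is sketched below. The base checkpoint id is an assumption inferred from the repo name, since this repo only ships the PEFT adapter:
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
# "tiiuae/falcon-7b" is an assumed base model, not stated in this card.
base_model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b", quantization_config=bnb_config, device_map="auto"
)
```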
### Framework versions
- PEFT 0.4.0.dev0
|
linlinlin/ppo_model
|
linlinlin
| 2023-07-07T05:54:27Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"trl",
"reinforcement-learning",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2023-07-07T05:53:19Z |
---
license: apache-2.0
tags:
- trl
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/lvwerra/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="linlinlin/ppo_model")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("linlinlin/ppo_model")
model = AutoModelForCausalLMWithValueHead.from_pretrained("linlinlin/ppo_model")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
squeeze-ai-lab/sq-opt-1.3b-w4-s50
|
squeeze-ai-lab
| 2023-07-07T05:53:53Z | 0 | 0 | null |
[
"arxiv:2306.07629",
"arxiv:2205.01068",
"region:us"
] | null | 2023-07-07T05:46:54Z |
**SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving.
**TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced precision quantization.
But a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method.
Dense-and-Sparse splits weight matrices into two components: a dense component that can be heavily quantized without affecting model performance,
and a sparse part that preserves sensitive and outlier parts of the weight matrices. With this approach,
we are able to serve larger models with a smaller memory footprint, the same latency, and yet higher accuracy and quality.
For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf).
## Model description
4-bit quantized OPT 1.3B model using SqueezeLLM. More details can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf).
* **Base Model:** [OPT 1.3B](https://arxiv.org/abs/2205.01068)
* **Bitwidth:** 4-bit
* **Sparsity Level:** 0.5%
## Links
* **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf)
* **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM)
---
license: other
---
|
squeeze-ai-lab/sq-opt-30b-w4-s50
|
squeeze-ai-lab
| 2023-07-07T05:53:48Z | 0 | 0 | null |
[
"arxiv:2306.07629",
"arxiv:2205.01068",
"region:us"
] | null | 2023-07-07T05:53:10Z |
**SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving.
**TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced precision quantization.
But a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method.
Dense-and-Sparse splits weight matrices into two components: a dense component that can be heavily quantized without affecting model performance,
and a sparse part that preserves sensitive and outlier parts of the weight matrices. With this approach,
we are able to serve larger models with a smaller memory footprint, the same latency, and yet higher accuracy and quality.
For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf).
## Model description
4-bit quantized OPT 30B model using SqueezeLLM. More details can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf).
* **Base Model:** [OPT 30B](https://arxiv.org/abs/2205.01068)
* **Bitwidth:** 4-bit
* **Sparsity Level:** 0.5%
## Links
* **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf)
* **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM)
---
license: other
---
|
squeeze-ai-lab/sq-opt-1.3b-w3-s50
|
squeeze-ai-lab
| 2023-07-07T05:53:45Z | 0 | 0 | null |
[
"arxiv:2306.07629",
"arxiv:2205.01068",
"region:us"
] | null | 2023-07-07T05:46:33Z |
**SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving.
**TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced precision quantization.
But a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method.
Dense-and-Sparse splits weight matrices into two components: a dense component that can be heavily quantized without affecting model performance,
and a sparse part that preserves sensitive and outlier parts of the weight matrices. With this approach,
we are able to serve larger models with a smaller memory footprint, the same latency, and yet higher accuracy and quality.
For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf).
## Model description
3-bit quantized OPT 1.3B model using SqueezeLLM. More details can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf).
* **Base Model:** [OPT 1.3B](https://arxiv.org/abs/2205.01068)
* **Bitwidth:** 3-bit
* **Sparsity Level:** 0.5%
## Links
* **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf)
* **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM)
---
license: other
---
|
j-hartmann/MindMiner-Binary
|
j-hartmann
| 2023-07-07T05:44:00Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language: "en"
tags:
- roberta
widget:
- text: "Alexa is part of our family. She is simply amazing!"
- text: "I use my smart assistant for may things. It's incredibly useful."
---
This RoBERTa-based model ("MindMiner") can classify the degree of mind perception in English-language text into 2 classes:
- high mind perception 👩
- low mind perception 🤖
The model was fine-tuned on 997 manually annotated open-ended survey responses.
The hold-out accuracy is 75.5% (vs. a balanced 50% random-chance baseline).
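A minimal inference sketch with the 🤗 `pipeline` API (the exact label names returned depend on the model's config and should be checked there):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="j-hartmann/MindMiner-Binary")
# Returned labels correspond to high vs. low mind perception (see the model config for names).
print(classifier("Alexa is part of our family. She is simply amazing!"))
```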
Hartmann, J., Bergner, A., & Hildebrand, C. (2023). MindMiner: Uncovering Linguistic Markers of Mind Perception as a New Lens to Understand Consumer-Smart Object Relationships. Journal of Consumer Psychology, Forthcoming.
|
remshu-inc/mencoder
|
remshu-inc
| 2023-07-07T05:42:25Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-05T10:42:07Z |
---
license: mit
---
The model is intended for determining the severity of an error in a sentence of learner-written German text.
It was obtained by fine-tuning «[dbmdz/convbert-base-german-europeana-cased](https://huggingface.co/dbmdz/convbert-base-german-europeana-cased)» on data from the [PAKT](https://pact.ai.petrsu.ru/app) corpus.
The model takes two German sentences as input: the first contains an error, the second the corrected version. The model outputs a closeness value for the two sentences. If the value is close to 0.98, the error is considered not to affect the comprehensibility of the sentence (severity level 1); if it is close to 0.93, the error is considered to impair comprehension (severity level 2); and if it is close to 0.87, the meaning of the sentence is considered unclear or distorted (severity level 3).
To work with the model, it is recommended to use the [remshu-inc/pakt-work-tools](https://github.com/remshu-inc/pakt-work-tools) library.
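If you prefer to call the checkpoint directly through 🤗 Transformers instead of pakt-work-tools, a minimal sketch might look like the following; it assumes the checkpoint loads as a standard sentence-pair sequence-classification model, and the example sentences are purely illustrative:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("remshu-inc/mencoder")
model = AutoModelForSequenceClassification.from_pretrained("remshu-inc/mencoder")

erroneous = "Ich habe gestern nach Hause gegangen."   # sentence with an error (illustrative)
corrected = "Ich bin gestern nach Hause gegangen."    # corrected sentence (illustrative)

inputs = tokenizer(erroneous, corrected, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# The closeness value described above is assumed to be derived from this output;
# values near 0.98 / 0.93 / 0.87 correspond to severity levels 1 / 2 / 3.
print(logits)
```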
|
aroot/eng-fra-simcse_longestplus_ssrb
|
aroot
| 2023-07-07T05:41:54Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-07T05:23:01Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-fra-simcse_longestplus_ssrb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-fra-simcse_longestplus_ssrb
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1362
- Bleu: 32.1757
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
aroot/eng-fra-simcse_longestplus_usrb
|
aroot
| 2023-07-07T05:41:54Z | 102 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-07T05:27:13Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-fra-simcse_longestplus_usrb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-fra-simcse_longestplus_usrb
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1347
- Bleu: 32.4214
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
aroot/eng-fra-simcse_longest_ssrb
|
aroot
| 2023-07-07T05:37:44Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-07T05:19:00Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-fra-simcse_longest_ssrb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-fra-simcse_longest_ssrb
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1262
- Bleu: 32.1631
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
saintzeno/poca-SoccerTwos
|
saintzeno
| 2023-07-07T05:36:43Z | 58 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-07-07T04:40:05Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: saintzeno/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
Propofol/_finetuned-finetuned-localization
|
Propofol
| 2023-07-07T05:31:05Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"esm",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-07T04:41:17Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: _finetuned-finetuned-localization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# _finetuned-finetuned-localization
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4382
- Accuracy: 0.436
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1122 | 1.0 | 2500 | 1.1513 | 0.4287 |
| 1.0035 | 2.0 | 5000 | 1.2395 | 0.4507 |
| 0.7167 | 3.0 | 7500 | 1.4382 | 0.436 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
happyduck/alcafa_5.8b_8000
|
happyduck
| 2023-07-07T05:26:19Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-07T05:26:13Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
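A hedged sketch of loading this adapter with `peft`; since the card does not name the base model, it is resolved from the adapter config rather than hard-coded:
```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

adapter_id = "happyduck/alcafa_5.8b_8000"
config = PeftConfig.from_pretrained(adapter_id)

# Load the base model recorded in the adapter config, then attach the adapter.
base = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(base, adapter_id)
```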
### Framework versions
- PEFT 0.4.0.dev0
|
YakovElm/Qt_10_BERT_More_Properties
|
YakovElm
| 2023-07-07T05:23:48Z | 62 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-07T05:23:07Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Qt_10_BERT_More_Properties
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Qt_10_BERT_More_Properties
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2782
- Train Accuracy: 0.9210
- Validation Loss: 0.2251
- Validation Accuracy: 0.9416
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2894 | 0.9186 | 0.2234 | 0.9416 | 0 |
| 0.2786 | 0.9210 | 0.2266 | 0.9416 | 1 |
| 0.2782 | 0.9210 | 0.2251 | 0.9416 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
thenoobie/ppo-LunarLander-v2
|
thenoobie
| 2023-07-07T05:19:01Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-07T05:18:40Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 245.06 +/- 22.30
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename is assumed; it must match the .zip stored in this repo.
checkpoint = load_from_hub(repo_id="thenoobie/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
aroot/eng-mya-simcse_longest_ssblu
|
aroot
| 2023-07-07T05:04:37Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-07T04:43:12Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-mya-simcse_longest_ssblu
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-mya-simcse_longest_ssblu
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8443
- Bleu: 4.2092
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
TeaTM/DialoGPT-small-bushcat
|
TeaTM
| 2023-07-07T04:52:37Z | 131 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"DialoGPT",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-07-19T22:25:09Z |
---
tags:
- conversational
- DialoGPT
language:
- en
---
# Bushcat DialoGPT-small Model
A smaller, personified DialoGPT fork for a side project: a conversational model for an entertainment chatbot.
This smaller model is based on DialoGPT-small; it is recommended to use the **TeaTM/DialoGPT-large-bushcat** model on my Hugging Face page instead.
The large model is bigger in size but also significantly smarter.
The character plays the persona of an overly positive cat in a bush. Just for fun. Has high perplexity, be warned.
Works great in Transformers & PyTorch.
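A minimal single-turn chat sketch in Transformers (generation settings are just illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("TeaTM/DialoGPT-small-bushcat")
model = AutoModelForCausalLM.from_pretrained("TeaTM/DialoGPT-small-bushcat")

prompt = "Hello, who are you?"
input_ids = tokenizer.encode(prompt + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(
    input_ids,
    max_length=100,
    pad_token_id=tokenizer.eos_token_id,
)
# Decode only the newly generated tokens (the bot's reply).
print(tokenizer.decode(reply_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True))
```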
# NOTE: This model is no longer being updated. There are better models and frameworks for custom, smarter characters.
# This is mostly "for fun" and is fairly lightweight compared to larger models. Good for small test projects.
|
pundapog/DialoGPT-medium-ethanbot
|
pundapog
| 2023-07-07T04:45:16Z | 131 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-07T03:52:57Z |
---
tags:
- conversational
library_name: transformers
---
|
aroot/eng-guj-simcse_longestplus_usblu
|
aroot
| 2023-07-07T04:27:29Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-07T04:05:29Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-guj-simcse_longestplus_usblu
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-guj-simcse_longestplus_usblu
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2862
- Bleu: 2.7556
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hoanghoavienvo/roberta-base-detect-depression-large-dataset-v3
|
hoanghoavienvo
| 2023-07-07T04:19:18Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-07T03:30:58Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: roberta-base-detect-depression-large-dataset-v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-detect-depression-large-dataset-v3
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6044
- Accuracy: 0.6918
- F1: 0.7921
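A minimal inference sketch; how the returned labels map to "depression" vs. "no depression" is not documented here and should be checked against the model config:
```python
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="hoanghoavienvo/roberta-base-detect-depression-large-dataset-v3",
)
print(detector("I haven't been able to get out of bed for days."))
```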
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.6532 | 1.0 | 876 | 0.5777 | 0.6527 | 0.7536 |
| 0.6325 | 2.0 | 1752 | 0.5926 | 0.7322 | 0.8342 |
| 0.6348 | 3.0 | 2628 | 0.5959 | 0.7433 | 0.8461 |
| 0.635 | 4.0 | 3504 | 0.5781 | 0.7436 | 0.8449 |
| 0.6177 | 5.0 | 4380 | 0.6044 | 0.6918 | 0.7921 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
l3cube-pune/hate-multi-roberta-hasoc-hindi
|
l3cube-pune
| 2023-07-07T04:01:25Z | 114 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"safetensors",
"roberta",
"text-classification",
"hi",
"arxiv:2110.12200",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language: hi
tags:
- roberta
license: cc-by-4.0
datasets:
- HASOC 2021
widget:
- text: "I like you. </s></s> I love you."
---
## hate-multi-roberta-hasoc-hindi
hate-multi-roberta-hasoc-hindi is a multi-class hate speech detection model fine-tuned on the Hindi HASOC Hate Speech Dataset 2021.
The label mappings are 0 -> None, 1 -> Offensive, 2 -> Hate, 3 -> Profane.
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2110.12200).
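A minimal inference sketch; whether the checkpoint's config already carries human-readable label names is an assumption, so the index mapping above is repeated in a comment:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="l3cube-pune/hate-multi-roberta-hasoc-hindi",
    top_k=None,  # return scores for all four classes
)
# Per the card, class indices map to: 0 -> None, 1 -> Offensive, 2 -> Hate, 3 -> Profane.
print(classifier("यह एक उदाहरण वाक्य है"))
```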
```
@article{velankar2021hate,
title={Hate and Offensive Speech Detection in Hindi and Marathi},
author={Velankar, Abhishek and Patil, Hrushikesh and Gore, Amol and Salunke, Shubham and Joshi, Raviraj},
journal={arXiv preprint arXiv:2110.12200},
year={2021}
}
```
|
aroot/eng-fra-simcse_longest_usblu
|
aroot
| 2023-07-07T03:51:39Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-07T03:32:13Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-fra-simcse_longest_usblu
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-fra-simcse_longest_usblu
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1221
- Bleu: 32.5700
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
aroot/eng-fra-simcse_longestplus_usblu
|
aroot
| 2023-07-07T03:51:24Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-07T03:32:12Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-fra-simcse_longestplus_usblu
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-fra-simcse_longestplus_usblu
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1299
- Bleu: 32.3457
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
aroot/eng-fra-simcse_longestplus_ssblu
|
aroot
| 2023-07-07T03:47:27Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-07T03:28:36Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-fra-simcse_longestplus_ssblu
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-fra-simcse_longestplus_ssblu
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1389
- Bleu: 32.4429
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
chasmiccoder/ppo-LunarLander-v2
|
chasmiccoder
| 2023-07-07T03:47:17Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-07T03:46:56Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 262.82 +/- 17.14
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename is assumed; it must match the .zip stored in this repo.
checkpoint = load_from_hub(repo_id="chasmiccoder/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Twitter/twhin-bert-large
|
Twitter
| 2023-07-07T03:38:46Z | 371 | 40 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"fill-mask",
"Twitter",
"Multilingual",
"en",
"ja",
"pt",
"es",
"ko",
"ar",
"tr",
"th",
"fr",
"id",
"ru",
"de",
"fa",
"it",
"zh",
"pl",
"hi",
"ur",
"nl",
"el",
"ms",
"ca",
"sr",
"sv",
"uk",
"he",
"fi",
"cs",
"ta",
"ne",
"vi",
"hu",
"eo",
"bn",
"mr",
"ml",
"hr",
"no",
"sw",
"sl",
"te",
"az",
"da",
"ro",
"gl",
"gu",
"ps",
"mk",
"kn",
"bg",
"lv",
"eu",
"pa",
"et",
"mn",
"sq",
"si",
"sd",
"la",
"is",
"jv",
"lt",
"ku",
"am",
"bs",
"hy",
"or",
"sk",
"uz",
"cy",
"my",
"su",
"br",
"as",
"af",
"be",
"fy",
"kk",
"ga",
"lo",
"ka",
"km",
"sa",
"mg",
"so",
"ug",
"ky",
"gd",
"yi",
"arxiv:2209.07562",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-10-18T18:37:18Z |
---
language:
- en
- ja
- pt
- es
- ko
- ar
- tr
- th
- fr
- id
- ru
- de
- fa
- it
- zh
- pl
- hi
- ur
- nl
- el
- ms
- ca
- sr
- sv
- uk
- he
- fi
- cs
- ta
- ne
- vi
- hu
- eo
- bn
- mr
- ml
- hr
- no
- sw
- sl
- te
- az
- da
- ro
- gl
- gu
- ps
- mk
- kn
- bg
- lv
- eu
- pa
- et
- mn
- sq
- si
- sd
- la
- is
- jv
- lt
- ku
- am
- bs
- hy
- or
- sk
- uz
- cy
- my
- su
- br
- as
- af
- be
- fy
- kk
- ga
- lo
- ka
- km
- sa
- mg
- so
- ug
- ky
- gd
- yi
tags:
- Twitter
- Multilingual
license: "apache-2.0"
mask_token: "<mask>"
---
# TwHIN-BERT: A Socially-Enriched Pre-trained Language Model for Multilingual Tweet Representations
[](http://makeapullrequest.com)
[](https://arxiv.org/abs/2209.07562)
This repo contains models, code and pointers to datasets from our paper: [TwHIN-BERT: A Socially-Enriched Pre-trained Language Model for Multilingual Tweet Representations](https://arxiv.org/abs/2209.07562).
[[PDF]](https://arxiv.org/pdf/2209.07562.pdf)
[[HuggingFace Models]](https://huggingface.co/Twitter)
### Overview
TwHIN-BERT is a new multi-lingual Tweet language model that is trained on 7 billion Tweets from over 100 distinct languages. TwHIN-BERT differs from prior pre-trained language models as it is trained with not only text-based self-supervision (e.g., MLM), but also with a social objective based on the rich social engagements within a Twitter Heterogeneous Information Network (TwHIN).
TwHIN-BERT can be used as a drop-in replacement for BERT in a variety of NLP and recommendation tasks. It not only outperforms similar models on semantic understanding tasks (such as text classification), but also on **social recommendation** tasks such as predicting user-to-Tweet engagement.
## 1. Pretrained Models
We initially release two pretrained TwHIN-BERT models (base and large) that are compatible with the [HuggingFace BERT models](https://github.com/huggingface/transformers).
| Model | Size | Download Link (🤗 HuggingFace) |
| ------------- | ------------- | --------- |
| TwHIN-BERT-base | 280M parameters | [Twitter/TwHIN-BERT-base](https://huggingface.co/Twitter/twhin-bert-base) |
| TwHIN-BERT-large | 550M parameters | [Twitter/TwHIN-BERT-large](https://huggingface.co/Twitter/twhin-bert-large) |
To use these models in 🤗 Transformers:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('Twitter/twhin-bert-large')
model = AutoModel.from_pretrained('Twitter/twhin-bert-large')
inputs = tokenizer("I'm using TwHIN-BERT! #TwHIN-BERT #NLP", return_tensors="pt")
outputs = model(**inputs)
```
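To turn the token-level outputs above into a single Tweet embedding, one common recipe (an assumption here, not prescribed by the authors) is masked mean pooling over the last hidden state:
```python
# Continues from the snippet above: `inputs` and `outputs` are already defined.
mask = inputs["attention_mask"].unsqueeze(-1).float()
tweet_embedding = (outputs.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
print(tweet_embedding.shape)  # (batch_size, hidden_size)
```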
<!-- ## 2. Set up environment and data
### Environment
TBD
## 3. Fine-tune TwHIN-BERT
TBD -->
## Citation
If you use TwHIN-BERT or our datasets in your work, please cite the following:
```bib
@article{zhang2022twhin,
title={TwHIN-BERT: A Socially-Enriched Pre-trained Language Model for Multilingual Tweet Representations},
author={Zhang, Xinyang and Malkov, Yury and Florez, Omar and Park, Serim and McWilliams, Brian and Han, Jiawei and El-Kishky, Ahmed},
journal={arXiv preprint arXiv:2209.07562},
year={2022}
}
```
|
Twitter/twhin-bert-base
|
Twitter
| 2023-07-07T03:38:25Z | 3,304 | 35 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"fill-mask",
"Twitter",
"Multilingual",
"en",
"ja",
"pt",
"es",
"ko",
"ar",
"tr",
"th",
"fr",
"id",
"ru",
"de",
"fa",
"it",
"zh",
"pl",
"hi",
"ur",
"nl",
"el",
"ms",
"ca",
"sr",
"sv",
"uk",
"he",
"fi",
"cs",
"ta",
"ne",
"vi",
"hu",
"eo",
"bn",
"mr",
"ml",
"hr",
"no",
"sw",
"sl",
"te",
"az",
"da",
"ro",
"gl",
"gu",
"ps",
"mk",
"kn",
"bg",
"lv",
"eu",
"pa",
"et",
"mn",
"sq",
"si",
"sd",
"la",
"is",
"jv",
"lt",
"ku",
"am",
"bs",
"hy",
"or",
"sk",
"uz",
"cy",
"my",
"su",
"br",
"as",
"af",
"be",
"fy",
"kk",
"ga",
"lo",
"ka",
"km",
"sa",
"mg",
"so",
"ug",
"ky",
"gd",
"yi",
"arxiv:2209.07562",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-10-18T18:34:23Z |
---
language:
- en
- ja
- pt
- es
- ko
- ar
- tr
- th
- fr
- id
- ru
- de
- fa
- it
- zh
- pl
- hi
- ur
- nl
- el
- ms
- ca
- sr
- sv
- uk
- he
- fi
- cs
- ta
- ne
- vi
- hu
- eo
- bn
- mr
- ml
- hr
- no
- sw
- sl
- te
- az
- da
- ro
- gl
- gu
- ps
- mk
- kn
- bg
- lv
- eu
- pa
- et
- mn
- sq
- si
- sd
- la
- is
- jv
- lt
- ku
- am
- bs
- hy
- or
- sk
- uz
- cy
- my
- su
- br
- as
- af
- be
- fy
- kk
- ga
- lo
- ka
- km
- sa
- mg
- so
- ug
- ky
- gd
- yi
tags:
- Twitter
- Multilingual
license: "apache-2.0"
mask_token: "<mask>"
---
# TwHIN-BERT: A Socially-Enriched Pre-trained Language Model for Multilingual Tweet Representations
[](http://makeapullrequest.com)
[](https://arxiv.org/abs/2209.07562)
This repo contains models, code and pointers to datasets from our paper: [TwHIN-BERT: A Socially-Enriched Pre-trained Language Model for Multilingual Tweet Representations](https://arxiv.org/abs/2209.07562).
[[PDF]](https://arxiv.org/pdf/2209.07562.pdf)
[[HuggingFace Models]](https://huggingface.co/Twitter)
### Overview
TwHIN-BERT is a new multi-lingual Tweet language model that is trained on 7 billion Tweets from over 100 distinct languages. TwHIN-BERT differs from prior pre-trained language models as it is trained with not only text-based self-supervision (e.g., MLM), but also with a social objective based on the rich social engagements within a Twitter Heterogeneous Information Network (TwHIN).
TwHIN-BERT can be used as a drop-in replacement for BERT in a variety of NLP and recommendation tasks. It not only outperforms similar models on semantic understanding tasks (such as text classification), but also on **social recommendation** tasks such as predicting user-to-Tweet engagement.
## 1. Pretrained Models
We initially release two pretrained TwHIN-BERT models (base and large) that are compatible with the [HuggingFace BERT models](https://github.com/huggingface/transformers).
| Model | Size | Download Link (🤗 HuggingFace) |
| ------------- | ------------- | --------- |
| TwHIN-BERT-base | 280M parameters | [Twitter/TwHIN-BERT-base](https://huggingface.co/Twitter/twhin-bert-base) |
| TwHIN-BERT-large | 550M parameters | [Twitter/TwHIN-BERT-large](https://huggingface.co/Twitter/twhin-bert-large) |
To use these models in 🤗 Transformers:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('Twitter/twhin-bert-base')
model = AutoModel.from_pretrained('Twitter/twhin-bert-base')
inputs = tokenizer("I'm using TwHIN-BERT! #TwHIN-BERT #NLP", return_tensors="pt")
outputs = model(**inputs)
```
<!-- ## 2. Set up environment and data
### Environment
TBD
## 3. Fine-tune TwHIN-BERT
TBD -->
## Citation
If you use TwHIN-BERT or our datasets in your work, please cite the following:
```bib
@article{zhang2022twhin,
title={TwHIN-BERT: A Socially-Enriched Pre-trained Language Model for Multilingual Tweet Representations},
author={Zhang, Xinyang and Malkov, Yury and Florez, Omar and Park, Serim and McWilliams, Brian and Han, Jiawei and El-Kishky, Ahmed},
journal={arXiv preprint arXiv:2209.07562},
year={2022}
}
```
|
YakovElm/MariaDB_20_BERT_More_Properties
|
YakovElm
| 2023-07-07T03:33:21Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-07T03:32:46Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB_20_BERT_More_Properties
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# MariaDB_20_BERT_More_Properties
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2058
- Train Accuracy: 0.9356
- Validation Loss: 0.1361
- Validation Accuracy: 0.9698
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2792 | 0.9180 | 0.1586 | 0.9698 | 0 |
| 0.2219 | 0.9356 | 0.1362 | 0.9698 | 1 |
| 0.2058 | 0.9356 | 0.1361 | 0.9698 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|