modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-07 18:30:29) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 544 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-07 18:30:28) | card (string, 11 to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
mkhan149/output_model7
|
mkhan149
| 2023-06-25T21:14:15Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-25T21:01:05Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: mkhan149/output_model7
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# mkhan149/output_model7
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 5.3525
- Validation Loss: 4.5575
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -512, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 5.3525 | 4.5575 | 0 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.11.0
- Datasets 2.13.1
- Tokenizers 0.13.3
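A minimal usage sketch for this checkpoint (assuming the repo's TensorFlow weights load through the standard `fill-mask` pipeline):
```python
from transformers import pipeline

# The repo ships TF (Keras) weights, so request the TensorFlow backend explicitly.
fill_mask = pipeline(
    "fill-mask",
    model="mkhan149/output_model7",
    framework="tf",
)

# DistilBERT uses [MASK] as its mask token.
print(fill_mask("The capital of France is [MASK]."))
```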
|
sd-concepts-library/mersh-v3
|
sd-concepts-library
| 2023-06-25T20:36:27Z | 0 | 0 | null |
[
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:mit",
"region:us"
] | null | 2023-06-25T20:36:25Z |
---
license: mit
base_model: runwayml/stable-diffusion-v1-5
---
### Mersh V3 on Stable Diffusion
This is the `<mikemersh>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
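Outside of those notebooks, a minimal `diffusers` sketch (assuming the embedding is stored in this repo in the standard textual-inversion layout that `load_textual_inversion` expects):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model the concept was trained against (per the card's base_model field).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Download the <mikemersh> embedding from this repo and register its token.
pipe.load_textual_inversion("sd-concepts-library/mersh-v3")

image = pipe("a photo of <mikemersh>").images[0]
image.save("mersh-v3-sample.png")
```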
Here is the new concept you will be able to use as an `object`:











































































|
yashgharat/dqn-SpaceInvadersNoFrameskip-v4
|
yashgharat
| 2023-06-25T20:20:45Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T20:20:13Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 472.50 +/- 216.45
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga yashgharat -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga yashgharat -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga yashgharat
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 500000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
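Outside of the RL Zoo scripts, a minimal loading sketch with `huggingface_sb3` (the zip filename below follows the usual RL Zoo naming and is an assumption; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Download the checkpoint from the Hub and load it.
checkpoint = load_from_hub(
    repo_id="yashgharat/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",  # assumed filename
)
model = DQN.load(checkpoint)

# Note: to run the agent you also need the Atari env wrapped exactly as in training
# (AtariWrapper + 4-frame stacking, per the hyperparameters above).
```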
|
crw-dev/Deepinsightinswapper
|
crw-dev
| 2023-06-25T20:19:56Z | 0 | 3 | null |
[
"onnx",
"region:us"
] | null | 2023-06-25T19:05:47Z |
CLONED FROM - https://huggingface.co/deepinsight/inswapper
GITHUB - https://github.com/deepinsight
ROOP GOOGLE COLAB - https://colab.research.google.com/github/Trts-T70/roopColab/blob/main/roopcolab.ipynb
|
trevdoc/ppo-LunarLander-v2
|
trevdoc
| 2023-06-25T20:18:51Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T20:18:30Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 268.73 +/- 21.78
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
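One way to fill in the TODO above, as a sketch (the zip filename is an assumption; check the repo's file list):
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(
    repo_id="trevdoc/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)

# LunarLander-v2 under gymnasium < 1.0; newer gymnasium releases rename it to v3.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```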
|
heinjan/TI-mobilenetv3-imagenet-v2
|
heinjan
| 2023-06-25T20:15:14Z | 7 | 0 |
tf-keras
|
[
"tf-keras",
"image-classification",
"region:us"
] |
image-classification
| 2023-05-11T07:16:18Z |
---
pipeline_tag: image-classification
---
|
lucasairvc/imaginervc
|
lucasairvc
| 2023-06-25T20:00:02Z | 0 | 0 | null |
[
"license:lgpl-3.0",
"region:us"
] | null | 2023-06-25T19:47:59Z |
---
license: lgpl-3.0
---

[download the voice here](https://huggingface.co/lucasairvc/imaginervc/resolve/main/imagine-oranges.zip)
|
JTStephens/ppo-Huggy
|
JTStephens
| 2023-06-25T19:47:54Z | 7 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-25T19:47:42Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: JTStephens/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
flashvenom/Airoboros-13B-SuperHOT-8K-4bit-GPTQ
|
flashvenom
| 2023-06-25T19:36:16Z | 10 | 6 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-22T22:01:24Z |
Model upload of Airoboros-13B-SuperHOT in 4-bit GPTQ form, converted using GPTQ-for-LLaMa. Source model: https://huggingface.co/Peeepy/Airoboros-13b-SuperHOT-8k.
## This uses the [Airoboros-13B(v1.2)](https://huggingface.co/jondurbin/airoboros-13b-gpt4-1.2) model and applies the [SuperHOT 8K LoRA](https://huggingface.co/kaiokendev/superhot-13b-8k-no-rlhf-test) on top, allowing for improved coherence at larger context lengths, as well as making Airoboros's output more verbose.
You will need a monkey patch at inference time to use the 8k context; see the patch file in this repo. If you are using a different inference engine (like llama.cpp / exllama), you will need to add the monkey patch there.
### Note: If you are using exllama, the monkey patch is built into the engine; use `-cpe` to set the scaling factor, i.e. if you are running it at 4k context, pass `-cpe 2 -l 4096`.
Patch file present in repo or can be accessed here: https://huggingface.co/kaiokendev/superhot-13b-8k-no-rlhf-test/raw/main/llama_rope_scaled_monkey_patch.py
|
gfalcao/smkfr25jun-nocrop2
|
gfalcao
| 2023-06-25T19:18:48Z | 29 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-25T19:07:29Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### smkfr25Jun-nocrop2 Dreambooth model trained by gfalcao with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
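Besides the A1111 Colab, a minimal `diffusers` sketch (the trigger token is not documented in the card, so the prompt below is an assumption; adjust it to whatever instance prompt was used during training):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "gfalcao/smkfr25jun-nocrop2", torch_dtype=torch.float16
).to("cuda")

# "smkfr25jun" as the trigger token is a guess based on the model name.
image = pipe("a photo of smkfr25jun").images[0]
image.save("sample.png")
```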
Sample pictures of this concept:
|
MindNetML/Reinforce-pixelcopter-v1
|
MindNetML
| 2023-06-25T18:54:20Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T18:53:23Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-pixelcopter-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 37.40 +/- 24.59
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
AIDA-UPM/bertweet-base-multi-mami
|
AIDA-UPM
| 2023-06-25T18:42:38Z | 127 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"misogyny",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
pipeline_tag: text-classification
tags:
- text-classification
- misogyny
language: en
license: apache-2.0
widget:
- text: "Women wear yoga pants because men don't stare at their personality"
example_title: "Misogyny detection"
---
# bertweet-base-multi-mami
This is a Bertweet model: it maps sentences and paragraphs to a 768-dimensional dense vector space and classifies them across five labels in a multi-label setup.
# Multilabels
label2id={
"misogynous": 0,
"shaming": 1,
"stereotype": 2,
"objectification": 3,
"violence": 4,
},
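A minimal multi-label inference sketch (assuming the checkpoint exposes the label mapping above and that a per-label sigmoid is the intended decision rule):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "AIDA-UPM/bertweet-base-multi-mami"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Women wear yoga pants because men don't stare at their personality"
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label: apply a sigmoid per label instead of a softmax across labels.
probs = torch.sigmoid(logits)[0]
for label, idx in model.config.label2id.items():
    print(f"{label}: {probs[idx]:.3f}")
```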
|
digiplay/YabaLMixTrue25D_V2.0
|
digiplay
| 2023-06-25T18:14:03Z | 473 | 4 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-17T19:11:17Z |
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/60093/yabalmix-true25d
Original Author's DEMO image:
|
MindNetML/Reinforce-CartPole-v3_bttrLR
|
MindNetML
| 2023-06-25T18:01:53Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T18:01:44Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v3_bttrLR
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
aleeq/tunikkoc
|
aleeq
| 2023-06-25T18:01:22Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-25T14:37:18Z |
---
license: creativeml-openrail-m
---
|
bogdancazan/pegasus-text-simplification_1e4_adafactor_wikilarge_20epici
|
bogdancazan
| 2023-06-25T17:46:26Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-25T14:38:22Z |
---
tags:
- generated_from_trainer
model-index:
- name: pegasus-text-simplification_1e4_adafactor_wikilarge_20epici
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-text-simplification_1e4_adafactor_wikilarge_20epici
This model is a fine-tuned version of [google/pegasus-x-base](https://huggingface.co/google/pegasus-x-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3934
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.9542 | 1.0 | 803 | 0.3416 |
| 0.3111 | 2.0 | 1606 | 0.3372 |
| 0.2919 | 3.0 | 2409 | 0.3356 |
| 0.2659 | 4.0 | 3212 | 0.3389 |
| 0.2476 | 5.0 | 4015 | 0.3421 |
| 0.2351 | 6.0 | 4818 | 0.3474 |
| 0.2215 | 7.0 | 5621 | 0.3496 |
| 0.2141 | 8.0 | 6424 | 0.3548 |
| 0.2015 | 9.0 | 7227 | 0.3607 |
| 0.1921 | 10.0 | 8030 | 0.3628 |
| 0.1863 | 11.0 | 8833 | 0.3706 |
| 0.1794 | 12.0 | 9636 | 0.3734 |
| 0.1753 | 13.0 | 10439 | 0.3781 |
| 0.1697 | 14.0 | 11242 | 0.3814 |
| 0.1659 | 15.0 | 12045 | 0.3839 |
| 0.1626 | 16.0 | 12848 | 0.3878 |
| 0.1591 | 17.0 | 13651 | 0.3890 |
| 0.1575 | 18.0 | 14454 | 0.3921 |
| 0.1556 | 19.0 | 15257 | 0.3921 |
| 0.1545 | 20.0 | 16060 | 0.3934 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
JCTN/RealDosMix
|
JCTN
| 2023-06-25T17:45:06Z | 0 | 1 | null |
[
"license:other",
"region:us"
] | null | 2023-06-25T17:20:07Z |
---
license: other
---
!! The pruned fp16 version has been replaced with a no-EMA version. The change in quality is less than 1 percent, and the size went from 7 GB to 2 GB.
See the example picture for the prompt. There are recurring quality prompts.
Use vae-ft-mse-840000-ema-pruned or kl-f8-anime2.
img2img SD upscale method: scale 20-25, denoising 0.2-0.3. After selecting SD Upscale at the bottom: tile overlap 64, scale factor 2.
Caution! The sampler must be DPM++ SDE Karras.
Clip skip 2.
https://huggingface.co/stabilityai/sd-vae-ft-mse-original/blob/main/vae-ft-mse-840000-ema-pruned.ckpt https://huggingface.co/AIARTCHAN/aichan_blend/tree/main/vae Apply a VAE. You will get better color results.
We recommend hires-fixing and upscaling only the pictures whose faces are damaged from being far away.
As it is a semi-realistic model, we do not recommend inappropriate exposure.
There are other dos-series models as well.
https://civitai.com/models/6250/dosmix
https://civitai.com/models/6437/anidosmix
https://civitai.com/models/8437/ddosmix
---
https://civitai.com/models/6925/realdosmix
|
MariaK/whisper-tiny-minds-v1
|
MariaK
| 2023-06-25T17:33:33Z | 78 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-25T15:53:26Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-tiny-minds-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-minds-v1
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7887
- Wer Ortho: 0.4046
- Wer: 0.3804
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 1.7167 | 3.57 | 100 | 1.3603 | 0.5324 | 0.4132 |
| 0.3753 | 7.14 | 200 | 0.5665 | 0.4695 | 0.3894 |
| 0.1274 | 10.71 | 300 | 0.5589 | 0.4626 | 0.3912 |
| 0.0207 | 14.29 | 400 | 0.6216 | 0.4327 | 0.3834 |
| 0.0045 | 17.86 | 500 | 0.6684 | 0.4121 | 0.3697 |
| 0.0017 | 21.43 | 600 | 0.7018 | 0.4171 | 0.3792 |
| 0.0009 | 25.0 | 700 | 0.7218 | 0.4239 | 0.3876 |
| 0.0007 | 28.57 | 800 | 0.7272 | 0.4102 | 0.3781 |
| 0.0005 | 32.14 | 900 | 0.7427 | 0.4077 | 0.3787 |
| 0.0004 | 35.71 | 1000 | 0.7512 | 0.4077 | 0.3787 |
| 0.0004 | 39.29 | 1100 | 0.7573 | 0.4034 | 0.3757 |
| 0.0003 | 42.86 | 1200 | 0.7650 | 0.4027 | 0.3751 |
| 0.0003 | 46.43 | 1300 | 0.7714 | 0.4059 | 0.3769 |
| 0.0002 | 50.0 | 1400 | 0.7759 | 0.4052 | 0.3775 |
| 0.0002 | 53.57 | 1500 | 0.7796 | 0.4077 | 0.3798 |
| 0.0002 | 57.14 | 1600 | 0.7831 | 0.4046 | 0.3798 |
| 0.0002 | 60.71 | 1700 | 0.7858 | 0.4040 | 0.3792 |
| 0.0002 | 64.29 | 1800 | 0.7873 | 0.4040 | 0.3792 |
| 0.0002 | 67.86 | 1900 | 0.7883 | 0.4034 | 0.3792 |
| 0.0002 | 71.43 | 2000 | 0.7887 | 0.4046 | 0.3804 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
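A minimal transcription sketch for this checkpoint (assuming a local 16 kHz audio file):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="MariaK/whisper-tiny-minds-v1",
)

# Accepts an audio file path or a raw waveform array.
result = asr("sample.wav")
print(result["text"])
```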
|
spitfire4794/ben-ultra
|
spitfire4794
| 2023-06-25T17:32:11Z | 112 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-04-17T14:18:42Z |
---
pipeline_tag: conversational
---
|
andywalner/q-FrozenLake-v1-4x4-noSlippery
|
andywalner
| 2023-06-25T17:04:59Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T17:04:57Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="andywalner/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
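The snippet above assumes a `load_from_hub` helper from the Deep RL Course; a minimal version of it (a sketch using `huggingface_hub`) could look like this:
```python
import pickle

import gymnasium as gym
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled Q-learning model dict (including "env_id") from the Hub."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)


model = load_from_hub(
    repo_id="andywalner/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl"
)
# This repo was trained on the non-slippery 4x4 map, so pass is_slippery=False.
env = gym.make(model["env_id"], is_slippery=False)
```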
|
roshan77/Taxi-v3_qlearning
|
roshan77
| 2023-06-25T17:04:05Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T17:04:03Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3_qlearning
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="roshan77/Taxi-v3_qlearning", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
foch3/Watersmudge
|
foch3
| 2023-06-25T17:01:14Z | 0 | 3 | null |
[
"stable-diffusion",
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-03-23T10:08:26Z |
---
license: creativeml-openrail-m
tags:
- stable-diffusion
---
**Please read creativeml-openrail-m license before using it.**
It enhances watercolor style and overall saturation.
If you are worried about the pickle warning, **download the safetensors one**. The only difference is the LoRA cover image.
*It works better with the following prompts: **(watercolor \(medium\):1.2), ink wash painting, (sketch:1.2)***
<img src="https://huggingface.co/foch3/Watersmudge/resolve/main/1.png">
<img src="https://huggingface.co/foch3/Watersmudge/resolve/main/2.png">
|
roshan77/q-FrozenLake-v1-4x4-noSlippery
|
roshan77
| 2023-06-25T16:55:20Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T16:55:17Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="roshan77/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Smaraa/gpt2-text-simplification_1e4_adafactor_newsela
|
Smaraa
| 2023-06-25T16:14:23Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-25T12:15:13Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-text-simplification_1e4_adafactor_newsela
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-text-simplification_1e4_adafactor_newsela
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3465
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.7662 | 1.0 | 1605 | 0.8757 |
| 0.6538 | 2.0 | 3210 | 0.9019 |
| 0.5663 | 3.0 | 4815 | 0.9554 |
| 0.4961 | 4.0 | 6420 | 0.9990 |
| 0.4299 | 5.0 | 8025 | 1.0271 |
| 0.3853 | 6.0 | 9630 | 1.0547 |
| 0.3482 | 7.0 | 11235 | 1.1090 |
| 0.3152 | 8.0 | 12840 | 1.1387 |
| 0.2903 | 9.0 | 14445 | 1.1853 |
| 0.2655 | 10.0 | 16050 | 1.2088 |
| 0.2477 | 11.0 | 17655 | 1.2168 |
| 0.232 | 12.0 | 19260 | 1.2426 |
| 0.2192 | 13.0 | 20865 | 1.2522 |
| 0.2078 | 14.0 | 22470 | 1.2855 |
| 0.198 | 15.0 | 24075 | 1.3048 |
| 0.19 | 16.0 | 25680 | 1.3117 |
| 0.1834 | 17.0 | 27285 | 1.3262 |
| 0.1777 | 18.0 | 28890 | 1.3360 |
| 0.1733 | 19.0 | 30495 | 1.3440 |
| 0.1702 | 20.0 | 32100 | 1.3465 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
VilohitT/t5-small-finetuned-xsum
|
VilohitT
| 2023-06-25T16:14:08Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-25T13:04:17Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
model-index:
- name: t5-small-finetuned-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
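A minimal summarization sketch for this checkpoint (the `summarize:` prefix is the usual T5 convention and is assumed to match how the model was fine-tuned):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "VilohitT/t5-small-finetuned-xsum"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

article = "The full text of a news article goes here..."
inputs = tokenizer("summarize: " + article, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=60, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```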
|
MrM0dZ/UMP45_Mineuchi_Tomomi
|
MrM0dZ
| 2023-06-25T16:05:33Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2023-06-25T15:54:38Z |
---
license: other
---
UMP45 RVC v2 Model
Trained using in-game voices
Currently with 100 Epochs
|
Green-Sky/ggml_openai_clip-vit-base-patch32
|
Green-Sky
| 2023-06-25T16:03:57Z | 0 | 0 | null |
[
"clip",
"vision",
"ggml",
"clip.cpp",
"region:us"
] | null | 2023-06-25T15:44:22Z |
---
tags:
- clip
- vision
- ggml
- clip.cpp
---
# Experimental
The file format is not stable yet, so expect breaking changes. I will update the files from time to time.
- source model: https://huggingface.co/openai/clip-vit-base-patch32
- source license: non-commercial custom (see [modelcard](./model-card.md))
## Converted files for use with clip.cpp
see https://github.com/monatis/clip.cpp
|
carblacac/ner-investing
|
carblacac
| 2023-06-25T16:03:08Z | 106 | 1 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"finance",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-25T15:56:09Z |
---
license: apache-2.0
language:
- en
tags:
- finance
---
|
cagmfr/q-Taxi-v3
|
cagmfr
| 2023-06-25T15:35:26Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T15:25:40Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="cagmfr/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
lucasbertola/q-Taxi-v3
|
lucasbertola
| 2023-06-25T15:31:33Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"Lucas_is_the_best",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T15:27:06Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
- Lucas_is_the_best
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="lucasbertola/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=True etc)
env = gym.make(model["env_id"])
```
|
sumyahhh/ppo-LunarLander-v2
|
sumyahhh
| 2023-06-25T15:31:19Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T15:30:49Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -136.15 +/- 52.76
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
PhongLe1311/my_awesome_billsum_model
|
PhongLe1311
| 2023-06-25T15:30:09Z | 99 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:billsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-25T15:20:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- billsum
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: billsum
type: billsum
config: default
split: ca_test
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.1408
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5181
- Rouge1: 0.1408
- Rouge2: 0.0514
- Rougel: 0.1173
- Rougelsum: 0.1173
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.8150 | 0.1264 | 0.0373 | 0.1061 | 0.1061 | 19.0 |
| No log | 2.0 | 124 | 2.5989 | 0.1379 | 0.0501 | 0.1164 | 0.1165 | 19.0 |
| No log | 3.0 | 186 | 2.5349 | 0.1396 | 0.0525 | 0.1179 | 0.1181 | 19.0 |
| No log | 4.0 | 248 | 2.5181 | 0.1408 | 0.0514 | 0.1173 | 0.1173 | 19.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ahessamb/bertopic-test
|
ahessamb
| 2023-06-25T15:29:15Z | 3 | 0 |
bertopic
|
[
"bertopic",
"text-classification",
"region:us"
] |
text-classification
| 2023-06-25T15:29:09Z |
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---
# bertopic-test
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("ahessamb/bertopic-test")
topic_model.get_topic_info()
```
## Topic overview
* Number of topics: 50
* Number of training documents: 1570
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| 0 | liquidations - forcefully - betting - liquidation - contracts | 8 | 0_liquidations_forcefully_betting_liquidation |
| 1 | litecoin - wsm - presale - 77 - near | 94 | 1_litecoin_wsm_presale_77 |
| 2 | sec - court - terraform - dismiss - lawyers | 49 | 2_sec_court_terraform_dismiss |
| 3 | huobi - hkvac - bsl - web3 - code | 12 | 3_huobi_hkvac_bsl_web3 |
| 4 | lucie - shiba - susbarium - puppynet - portals | 3 | 4_lucie_shiba_susbarium_puppynet |
| 5 | 000006819 - shiba - accuracy - finbold - estimates | 27 | 5_000006819_shiba_accuracy_finbold |
| 6 | tokens - sec - binance - securities - coinbase | 45 | 6_tokens_sec_binance_securities |
| 7 | mckinsey - ai - nanjing - productivity - diffusion | 43 | 7_mckinsey_ai_nanjing_productivity |
| 8 | resistance - swing - fib - zone - ltc | 32 | 8_resistance_swing_fib_zone |
| 9 | brinkman - tategpt - bitcoin - artists - wealth | 26 | 9_brinkman_tategpt_bitcoin_artists |
| 10 | stablecoin - stablecoins - decline - redemptions - tusd | 2 | 10_stablecoin_stablecoins_decline_redemptions |
| 11 | mutant - mayc - bayc - club - mcmullen | 64 | 11_mutant_mayc_bayc_club |
| 12 | xrp - ema - ripple - bullish - cryptocurrencies | 43 | 12_xrp_ema_ripple_bullish |
| 13 | tether - cbdcs - loans - federal - nafcu | 27 | 13_tether_cbdcs_loans_federal |
| 14 | rate - tradingview - bnb - breakout - coinmarketcap | 85 | 14_rate_tradingview_bnb_breakout |
| 15 | 26 - bulls - rsi - ceiling - 300 | 2 | 15_26_bulls_rsi_ceiling |
| 16 | lowest - jump - week - wallet - staggering | 3 | 16_lowest_jump_week_wallet |
| 17 | xrp - ripple - mekras - sbi - institutions | 56 | 17_xrp_ripple_mekras_sbi |
| 18 | debt - mortgages - trillion - government - suspends | 3 | 18_debt_mortgages_trillion_government |
| 19 | longitude - chronometer - bitcoin - ships - graffiti | 2 | 19_longitude_chronometer_bitcoin_ships |
| 20 | volumes - piggy - aud - xrp - usdt | 15 | 20_volumes_piggy_aud_xrp |
| 21 | root - ledger - stakers - sidechains - compatibility | 4 | 21_root_ledger_stakers_sidechains |
| 22 | astra - letter - concerns - investors - bitwise | 4 | 22_astra_letter_concerns_investors |
| 23 | gold - governments - manipulated - stocks - mined | 10 | 23_gold_governments_manipulated_stocks |
| 24 | tether - sygnum - documents - bank - coindesk | 9 | 24_tether_sygnum_documents_bank |
| 25 | rewards - governance - lido - proposal - june | 45 | 25_rewards_governance_lido_proposal |
| 26 | listings - coin - fairerc20 - bittrex - withdrawals | 68 | 26_listings_coin_fairerc20_bittrex |
| 27 | peaq - ordibots - cosmos - fetch - machine | 81 | 27_peaq_ordibots_cosmos_fetch |
| 28 | uniswap - v4 - orders - hooks - differing | 23 | 28_uniswap_v4_orders_hooks |
| 29 | price - neo - matic - rise - altcoin | 92 | 29_price_neo_matic_rise |
| 30 | emptydoc - staff - policy - binance - workspaces | 2 | 30_emptydoc_staff_policy_binance |
| 31 | lunc - synthetix - terra - perps - staking | 33 | 31_lunc_synthetix_terra_perps |
| 32 | tweet - dogecoin - chart - meme - negative | 3 | 32_tweet_dogecoin_chart_meme |
| 33 | binance - securities - exchange - cz - regulators | 63 | 33_binance_securities_exchange_cz |
| 34 | bitmart - sale - xrp - discount - event | 4 | 34_bitmart_sale_xrp_discount |
| 35 | yuan - event - olympics - canadians - organizers | 49 | 35_yuan_event_olympics_canadians |
| 36 | gusd - fidelity - bitcoin - proposal - blackrock | 52 | 36_gusd_fidelity_bitcoin_proposal |
| 37 | bills - mcglone - markets - stablecoins - liquidity | 56 | 37_bills_mcglone_markets_stablecoins |
| 38 | asset - gain - drop - trading - hours | 2 | 38_asset_gain_drop_trading |
| 39 | epstein - hamsterwheel - vulnerability - bounty - certick | 28 | 39_epstein_hamsterwheel_vulnerability_bounty |
| 40 | pyth - transparency - data - terra - oracle | 19 | 40_pyth_transparency_data_terra |
| 41 | shiba - inu - weighted - collapse - recovery | 2 | 41_shiba_inu_weighted_collapse |
| 42 | neo - opensea - carey - security - impersonators | 24 | 42_neo_opensea_carey_security |
| 43 | balancer - zkevm - liquidity - defi - 8020 | 3 | 43_balancer_zkevm_liquidity_defi |
| 44 | reed - battle - platform - argument - trading | 22 | 44_reed_battle_platform_argument |
| 45 | ada - cardano - whale - sell - investors | 4 | 45_ada_cardano_whale_sell |
| 46 | uk - coinbase - hong - crypto - regulatory | 65 | 46_uk_coinbase_hong_crypto |
| 47 | ethereum - tvl - defi - arbitrum - airdrop | 54 | 47_ethereum_tvl_defi_arbitrum |
| 48 | swyftx - shibarium - token - shibaswap - shiba | 54 | 48_swyftx_shibarium_token_shibaswap |
| 49 | bitcoin - mining - gain - miners - difficulty | 54 | 49_bitcoin_mining_gain_miners |
</details>
## Training hyperparameters
* calculate_probabilities: False
* language: None
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: False
## Framework versions
* Numpy: 1.22.4
* HDBSCAN: 0.8.29
* UMAP: 0.5.3
* Pandas: 1.5.3
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.2.2
* Transformers: 4.30.2
* Numba: 0.56.4
* Plotly: 5.13.1
* Python: 3.10.12
|
cagmfr/q-FrozenLake-v1-4x4-noSlippery
|
cagmfr
| 2023-06-25T15:20:16Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T15:20:14Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="cagmfr/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
NasimB/gpt2-2-dp-mod-aochild-cut
|
NasimB
| 2023-06-25T15:09:04Z | 22 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-25T07:34:36Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-2-dp-mod-aochild-cut
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-2-dp-mod-aochild-cut
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4109
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7147 | 0.27 | 500 | 5.6451 |
| 5.3609 | 0.54 | 1000 | 5.2108 |
| 5.0162 | 0.81 | 1500 | 4.9585 |
| 4.7627 | 1.08 | 2000 | 4.8126 |
| 4.5775 | 1.35 | 2500 | 4.7013 |
| 4.4856 | 1.62 | 3000 | 4.6034 |
| 4.4038 | 1.89 | 3500 | 4.5175 |
| 4.2252 | 2.16 | 4000 | 4.4775 |
| 4.1408 | 2.42 | 4500 | 4.4236 |
| 4.1136 | 2.69 | 5000 | 4.3721 |
| 4.0852 | 2.96 | 5500 | 4.3281 |
| 3.87 | 3.23 | 6000 | 4.3418 |
| 3.8651 | 3.5 | 6500 | 4.3062 |
| 3.8601 | 3.77 | 7000 | 4.2781 |
| 3.8091 | 4.04 | 7500 | 4.2785 |
| 3.5972 | 4.31 | 8000 | 4.2888 |
| 3.6301 | 4.58 | 8500 | 4.2678 |
| 3.6398 | 4.85 | 9000 | 4.2396 |
| 3.4906 | 5.12 | 9500 | 4.2803 |
| 3.3704 | 5.39 | 10000 | 4.2849 |
| 3.4008 | 5.66 | 10500 | 4.2718 |
| 3.4029 | 5.93 | 11000 | 4.2491 |
| 3.1804 | 6.2 | 11500 | 4.3116 |
| 3.1361 | 6.47 | 12000 | 4.3119 |
| 3.1532 | 6.73 | 12500 | 4.3067 |
| 3.1591 | 7.0 | 13000 | 4.3072 |
| 2.8974 | 7.27 | 13500 | 4.3563 |
| 2.9167 | 7.54 | 14000 | 4.3589 |
| 2.9248 | 7.81 | 14500 | 4.3580 |
| 2.8683 | 8.08 | 15000 | 4.3791 |
| 2.741 | 8.35 | 15500 | 4.3939 |
| 2.7503 | 8.62 | 16000 | 4.3968 |
| 2.7573 | 8.89 | 16500 | 4.3983 |
| 2.6961 | 9.16 | 17000 | 4.4075 |
| 2.6562 | 9.43 | 17500 | 4.4101 |
| 2.6653 | 9.7 | 18000 | 4.4107 |
| 2.667 | 9.97 | 18500 | 4.4109 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Smaraa/t5-text-simplification_1e4_adafactor_biendata
|
Smaraa
| 2023-06-25T15:07:10Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-25T12:37:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-text-simplification_1e4_adafactor_biendata
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-text-simplification_1e4_adafactor_biendata
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7562
- Rouge1: 10.4603
- Rouge2: 2.642
- Rougel: 9.6362
- Rougelsum: 9.6589
- Gen Len: 13.2838
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 464 | 0.5489 | 29.7693 | 11.1997 | 25.6091 | 25.5979 | 14.7281 |
| 0.9314 | 2.0 | 928 | 0.5392 | 29.9099 | 10.9645 | 25.334 | 25.3259 | 14.7188 |
| 0.5594 | 3.0 | 1392 | 0.5342 | 30.3194 | 11.4204 | 25.8248 | 25.8255 | 14.7666 |
| 0.5333 | 4.0 | 1856 | 0.5376 | 30.8368 | 11.6152 | 26.3172 | 26.3583 | 14.1578 |
| 0.5192 | 5.0 | 2320 | 0.8890 | 7.5517 | 1.4313 | 7.0971 | 7.1064 | 9.9191 |
| 0.8897 | 6.0 | 2784 | 0.8252 | 6.9283 | 1.3484 | 6.5916 | 6.5877 | 10.9894 |
| 0.9385 | 7.0 | 3248 | 0.7971 | 8.2401 | 1.9957 | 7.7693 | 7.7675 | 10.7732 |
| 0.9089 | 8.0 | 3712 | 0.7725 | 9.7559 | 2.2249 | 9.0272 | 9.0098 | 10.7175 |
| 0.8824 | 9.0 | 4176 | 0.7552 | 12.006 | 2.8041 | 11.0115 | 10.992 | 10.7838 |
| 0.8658 | 10.0 | 4640 | 0.7490 | 13.311 | 3.4159 | 12.1933 | 12.1551 | 10.6499 |
| 0.864 | 11.0 | 5104 | 0.7448 | 13.9983 | 3.6176 | 12.7712 | 12.7347 | 10.752 |
| 0.868 | 12.0 | 5568 | 0.7495 | 12.318 | 3.2975 | 11.3451 | 11.3218 | 12.0252 |
| 0.8844 | 13.0 | 6032 | 0.7552 | 10.6154 | 2.7347 | 9.8228 | 9.8116 | 13.191 |
| 0.8844 | 14.0 | 6496 | 0.7562 | 10.4603 | 2.642 | 9.6362 | 9.6589 | 13.2838 |
| 0.8971 | 15.0 | 6960 | 0.7562 | 10.4603 | 2.642 | 9.6362 | 9.6589 | 13.2838 |
| 0.8981 | 16.0 | 7424 | 0.7562 | 10.4603 | 2.642 | 9.6362 | 9.6589 | 13.2838 |
| 0.8956 | 17.0 | 7888 | 0.7562 | 10.4603 | 2.642 | 9.6362 | 9.6589 | 13.2838 |
| 0.8984 | 18.0 | 8352 | 0.7562 | 10.4603 | 2.642 | 9.6362 | 9.6589 | 13.2838 |
| 0.8959 | 19.0 | 8816 | 0.7562 | 10.4603 | 2.642 | 9.6362 | 9.6589 | 13.2838 |
| 0.8977 | 20.0 | 9280 | 0.7562 | 10.4603 | 2.642 | 9.6362 | 9.6589 | 13.2838 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ammag/ppo-LunarLander-v2
|
ammag
| 2023-06-25T15:01:54Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T14:57:51Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 228.98 +/- 31.63
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Smaraa/gpt2-text-simplification_1e4_adafactor_biendata
|
Smaraa
| 2023-06-25T14:56:13Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-25T12:42:47Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-text-simplification_1e4_adafactor_biendata
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-text-simplification_1e4_adafactor_biendata
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9089
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 464 | 0.7729 |
| 1.0489 | 2.0 | 928 | 0.7546 |
| 0.754 | 3.0 | 1392 | 0.7497 |
| 0.7034 | 4.0 | 1856 | 0.7530 |
| 0.6619 | 5.0 | 2320 | 0.7560 |
| 0.6265 | 6.0 | 2784 | 0.7639 |
| 0.5921 | 7.0 | 3248 | 0.7747 |
| 0.5621 | 8.0 | 3712 | 0.7848 |
| 0.5359 | 9.0 | 4176 | 0.7969 |
| 0.5115 | 10.0 | 4640 | 0.8113 |
| 0.4879 | 11.0 | 5104 | 0.8256 |
| 0.4683 | 12.0 | 5568 | 0.8373 |
| 0.4491 | 13.0 | 6032 | 0.8519 |
| 0.4491 | 14.0 | 6496 | 0.8642 |
| 0.4324 | 15.0 | 6960 | 0.8741 |
| 0.4176 | 16.0 | 7424 | 0.8841 |
| 0.4054 | 17.0 | 7888 | 0.8924 |
| 0.3946 | 18.0 | 8352 | 0.8994 |
| 0.3868 | 19.0 | 8816 | 0.9043 |
| 0.3813 | 20.0 | 9280 | 0.9089 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
LoneWolfVPS/ArteYou
|
LoneWolfVPS
| 2023-06-25T14:31:55Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-25T14:27:06Z |
---
license: creativeml-openrail-m
---
|
mouaadblhn/ppo-huggy
|
mouaadblhn
| 2023-06-25T14:03:22Z | 6 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-25T14:03:16Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: mouaadblhn/ppo-huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
flobbit/flutterby
|
flobbit
| 2023-06-25T13:45:00Z | 5 | 0 |
fastai
|
[
"fastai",
"en",
"image classification",
"image-classification",
"doi:10.57967/hf/1004",
"license:apache-2.0",
"model-index",
"region:us"
] |
image-classification
| 2023-06-25T13:01:00Z |
---
license: apache-2.0
tags:
- en
- image classification
- fastai
model-index:
- name: flutterby by flobbit
results:
- task:
name: image classification
type: image-classification
metrics:
- name: accuracy
type: acc
num_train_epochs: 10
learning_rate: 0.00363
value: 77.3
metrics:
- accuracy
pipeline_tag: image-classification
---
# FlutterBy ST Swallowtail Butterfly Insect Classification
## Model description
The model is used to classify images into one of the 51 North American swallowtail or cattleheart butterfly species. `resnet50` was used for training.
## Intended uses & limitations
The model was trained on 8577 insect images spread over 51 species. The model is likely biased toward some species being more commonly found in certain habitats.
## Training and evaluation data
The images used in training were obtained from GBIF:
GBIF.org (22 June 2023) GBIF Occurrence Download https://doi.org/10.15468/dl.bqg8bw
Only the first 400 images of each species (if available) were downloaded. The image set was partially cleaned for quality to remove caterpillars, poor images or butterflies that were too far away for proper ID. After "cleaning", 200 additional images were downloaded for Battus philenor and Battus polydamas (as those species had a very high percentage of caterpillar shots).
The dataset is primarily "in the wild" shots rather than all staged poses, and includes images for which even an expert would not be able to see identifying characteristics (hence the lower overall accuracy).
The image set had 33 species with over 200 images (after cleaning) and a minimum of 30 pics per class for the less common species (not enough for accurate training, but included for completeness).
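A minimal inference sketch (assuming the exported fastai learner is stored in this repo in the layout that `huggingface_hub` expects):
```python
from huggingface_hub import from_pretrained_fastai

# Downloads and reconstructs the exported fastai Learner from the Hub.
learner = from_pretrained_fastai("flobbit/flutterby")

# Predict the species for a single image file.
species, _, probs = learner.predict("butterfly.jpg")
print(species, probs.max().item())
```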
|
ahishamm/vit-huge-HAM-10000-sharpened-patch-14
|
ahishamm
| 2023-06-25T13:34:12Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-25T12:41:46Z |
---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: vit-huge-HAM-10000-sharpened-patch-14
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-huge-HAM-10000-sharpened-patch-14
This model is a fine-tuned version of [google/vit-huge-patch14-224-in21k](https://huggingface.co/google/vit-huge-patch14-224-in21k) on the ahishamm/HAM_db_sharpened dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4411
- Accuracy: 0.8554
- Recall: 0.8554
- F1: 0.8554
- Precision: 0.8554
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | F1 | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.6177 | 0.2 | 100 | 0.7082 | 0.7591 | 0.7591 | 0.7591 | 0.7591 |
| 0.6848 | 0.4 | 200 | 0.6570 | 0.7631 | 0.7631 | 0.7631 | 0.7631 |
| 0.622 | 0.6 | 300 | 0.5880 | 0.7920 | 0.7920 | 0.7920 | 0.7920 |
| 0.5887 | 0.8 | 400 | 0.5599 | 0.7965 | 0.7965 | 0.7965 | 0.7965 |
| 0.4812 | 1.0 | 500 | 0.5364 | 0.8010 | 0.8010 | 0.8010 | 0.8010 |
| 0.4013 | 1.2 | 600 | 0.4874 | 0.8249 | 0.8249 | 0.8249 | 0.8249 |
| 0.3987 | 1.4 | 700 | 0.4533 | 0.8354 | 0.8354 | 0.8354 | 0.8354 |
| 0.4118 | 1.6 | 800 | 0.4540 | 0.8424 | 0.8424 | 0.8424 | 0.8424 |
| 0.3272 | 1.8 | 900 | 0.4536 | 0.8254 | 0.8254 | 0.8254 | 0.8254 |
| 0.3318 | 2.0 | 1000 | 0.4411 | 0.8554 | 0.8554 | 0.8554 | 0.8554 |
| 0.0859 | 2.2 | 1100 | 0.4641 | 0.8519 | 0.8519 | 0.8519 | 0.8519 |
| 0.1026 | 2.4 | 1200 | 0.4692 | 0.8554 | 0.8554 | 0.8554 | 0.8554 |
| 0.0934 | 2.59 | 1300 | 0.4555 | 0.8474 | 0.8474 | 0.8474 | 0.8474 |
| 0.1084 | 2.79 | 1400 | 0.5017 | 0.8454 | 0.8454 | 0.8454 | 0.8454 |
| 0.0603 | 2.99 | 1500 | 0.4803 | 0.8599 | 0.8599 | 0.8599 | 0.8599 |
| 0.013 | 3.19 | 1600 | 0.4905 | 0.8633 | 0.8633 | 0.8633 | 0.8633 |
| 0.0585 | 3.39 | 1700 | 0.5305 | 0.8678 | 0.8678 | 0.8678 | 0.8678 |
| 0.0322 | 3.59 | 1800 | 0.5342 | 0.8648 | 0.8648 | 0.8648 | 0.8648 |
| 0.0086 | 3.79 | 1900 | 0.5134 | 0.8668 | 0.8668 | 0.8668 | 0.8668 |
| 0.0275 | 3.99 | 2000 | 0.5136 | 0.8693 | 0.8693 | 0.8693 | 0.8693 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
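A short usage sketch (not part of the auto-generated card; it assumes the repo also ships the image processor config saved during training, and `skin_lesion.jpg` is an illustrative dermatoscopic image):
```python
from transformers import pipeline

# Image-classification pipeline over the fine-tuned ViT-Huge checkpoint
classifier = pipeline("image-classification", model="ahishamm/vit-huge-HAM-10000-sharpened-patch-14")

# Path or URL to a dermatoscopic image from the HAM10000 domain
for pred in classifier("skin_lesion.jpg"):
    print(f"{pred['label']}: {pred['score']:.3f}")
```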
|
findnitai/FaceGen
|
findnitai
| 2023-06-25T13:25:03Z | 138 | 3 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-24T03:47:05Z |
---
license: apache-2.0
pipeline_tag: text-to-image
---
A few examples of unique faces generated by the model, which was trained on the FFHQ dataset.

|
S3S3/q-Taxi-v3
|
S3S3
| 2023-06-25T13:05:40Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T13:05:36Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.44 +/- 2.75
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the Hugging Face Deep RL course notebook (not shown here)
model = load_from_hub(repo_id="S3S3/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
paramrah/shoesv2
|
paramrah
| 2023-06-25T13:00:03Z | 2 | 0 |
tf-keras
|
[
"tf-keras",
"mobilenet",
"image-classification",
"region:us"
] |
image-classification
| 2023-06-25T12:59:39Z |
---
pipeline_tag: image-classification
---
|
bogdancazan/bart-base-newsela-biendata-with-domain-adaptation
|
bogdancazan
| 2023-06-25T12:57:32Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-19T14:35:21Z |
training_args = TrainingArguments(
output_dir='bart-base-newsela-biendata-with-domain-adaptation',
num_train_epochs=20,
warmup_steps=250,
per_device_train_batch_size=BATCH_SIZE,
weight_decay=0.01,
learning_rate=2e-4,
fp16=True,
optim="adafactor",
)
| Step | Training Loss |
|-----:|--------------:|
| 500 | 5.677000 |
| 1000 | 2.361900 |
| 1500 | 1.826000 |
| 2000 | 1.672900 |
| 2500 | 1.597900 |
| 3000 | 1.555700 |
| 3500 | 1.520600 |
| 4000 | 1.496300 |
| 4500 | 1.476800 |
TrainOutput(global_step=4640, training_loss=2.1116079396214977, metrics={'train_runtime': 1059.6025, 'train_samples_per_second': 279.992, 'train_steps_per_second': 4.379, 'total_flos': 0.0, 'train_loss': 2.1116079396214977, 'epoch': 20.0})
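A minimal inference sketch (not part of the original card; the example sentence is illustrative and the generation settings are assumptions):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "bogdancazan/bart-base-newsela-biendata-with-domain-adaptation"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "The committee convened to deliberate on the proposed legislation."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # simplified sentence
```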
|
S3S3/q-FrozenLake-v1-4x4-noSlippery
|
S3S3
| 2023-06-25T12:53:11Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T12:53:07Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="S3S3/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
OpenDILabCommunity/PongNoFrameskip-v4-PPOOffPolicy
|
OpenDILabCommunity
| 2023-06-25T12:47:43Z | 0 | 0 |
pytorch
|
[
"pytorch",
"deep-reinforcement-learning",
"reinforcement-learning",
"DI-engine",
"PongNoFrameskip-v4",
"en",
"license:apache-2.0",
"region:us"
] |
reinforcement-learning
| 2023-06-25T12:47:02Z |
---
language: en
license: apache-2.0
library_name: pytorch
tags:
- deep-reinforcement-learning
- reinforcement-learning
- DI-engine
- PongNoFrameskip-v4
benchmark_name: OpenAI/Gym/Atari
task_name: PongNoFrameskip-v4
pipeline_tag: reinforcement-learning
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: OpenAI/Gym/Atari-PongNoFrameskip-v4
type: OpenAI/Gym/Atari-PongNoFrameskip-v4
metrics:
- type: mean_reward
value: 21.0 +/- 0.0
name: mean_reward
---
# Play **PongNoFrameskip-v4** with **PPO** Policy
## Model Description
<!-- Provide a longer summary of what this model is. -->
This is a simple **PPO** implementation for OpenAI/Gym/Atari **PongNoFrameskip-v4** using the [DI-engine library](https://github.com/opendilab/di-engine) and the [DI-zoo](https://github.com/opendilab/DI-engine/tree/main/dizoo).
**DI-engine** is a Python library for solving general decision-intelligence problems, built on reinforcement learning framework implementations in PyTorch or JAX. The library aims to standardize the reinforcement learning workflow across different algorithms, benchmarks, and environments, and to support both academic research and prototype applications. In addition, custom training pipelines and applications can be built by reusing the different abstraction levels of the DI-engine framework.
## Model Usage
### Install the Dependencies
<details close>
<summary>(Click for Details)</summary>
```shell
# install huggingface_ding
git clone https://github.com/opendilab/huggingface_ding.git
pip3 install -e ./huggingface_ding/
# install environment dependencies if needed
pip3 install DI-engine[common_env]
```
</details>
### Git Clone from Huggingface and Run the Model
<details close>
<summary>(Click for Details)</summary>
```shell
# running with trained model
python3 -u run.py
```
**run.py**
```python
from ding.bonus import PPOOffPolicyAgent
from ding.config import Config
from easydict import EasyDict
import torch
# Pull model from files which are git cloned from huggingface
policy_state_dict = torch.load("pytorch_model.bin", map_location=torch.device("cpu"))
cfg = EasyDict(Config.file_to_dict("policy_config.py"))
# Instantiate the agent
agent = PPOOffPolicyAgent(
env="PongNoFrameskip", exp_name="PongNoFrameskip-v4-PPOOffPolicy", cfg=cfg.exp_config, policy_state_dict=policy_state_dict
)
# Continue training
agent.train(step=5000)
# Render the new agent performance
agent.deploy(enable_save_replay=True)
```
</details>
### Run Model by Using Huggingface_ding
<details close>
<summary>(Click for Details)</summary>
```shell
# running with trained model
python3 -u run.py
```
**run.py**
```python
from ding.bonus import PPOOffPolicyAgent
from huggingface_ding import pull_model_from_hub
# Pull model from Huggingface hub
policy_state_dict, cfg = pull_model_from_hub(repo_id="OpenDILabCommunity/PongNoFrameskip-v4-PPOOffPolicy")
# Instantiate the agent
agent = PPOOffPolicyAgent(
env="PongNoFrameskip", exp_name="PongNoFrameskip-v4-PPOOffPolicy", cfg=cfg.exp_config, policy_state_dict=policy_state_dict
)
# Continue training
agent.train(step=5000)
# Render the new agent performance
agent.deploy(enable_save_replay=True)
```
</details>
## Model Training
### Train the Model and Push to Huggingface_hub
<details close>
<summary>(Click for Details)</summary>
```shell
#Training Your Own Agent
python3 -u train.py
```
**train.py**
```python
from ding.bonus import PPOOffPolicyAgent
from huggingface_ding import push_model_to_hub
# Instantiate the agent
agent = PPOOffPolicyAgent(env="PongNoFrameskip", exp_name="PongNoFrameskip-v4-PPOOffPolicy")
# Train the agent
return_ = agent.train(step=int(10000000))
# Push model to huggingface hub
push_model_to_hub(
agent=agent.best,
env_name="OpenAI/Gym/Atari",
task_name="PongNoFrameskip-v4",
algo_name="PPO",
wandb_url=return_.wandb_url,
github_repo_url="https://github.com/opendilab/DI-engine",
github_doc_model_url="https://di-engine-docs.readthedocs.io/en/latest/12_policies/ppo.html",
github_doc_env_url="https://di-engine-docs.readthedocs.io/en/latest/13_envs/atari.html",
installation_guide="pip3 install DI-engine[common_env]",
usage_file_by_git_clone="./ppo_offpolicy/pong_ppo_offpolicy_deploy.py",
usage_file_by_huggingface_ding="./ppo_offpolicy/pong_ppo_offpolicy_download.py",
train_file="./ppo_offpolicy/pong_ppo_offpolicy.py",
repo_id="OpenDILabCommunity/PongNoFrameskip-v4-PPOOffPolicy"
)
```
</details>
**Configuration**
<details close>
<summary>(Click for Details)</summary>
```python
exp_config = {
'env': {
'manager': {
'episode_num': float("inf"),
'max_retry': 1,
'retry_type': 'reset',
'auto_reset': True,
'step_timeout': None,
'reset_timeout': None,
'retry_waiting_time': 0.1,
'cfg_type': 'BaseEnvManagerDict'
},
'stop_value': 20,
'n_evaluator_episode': 8,
'collector_env_num': 8,
'evaluator_env_num': 8,
'env_id': 'PongNoFrameskip-v4',
'frame_stack': 4
},
'policy': {
'model': {
'obs_shape': [4, 84, 84],
'action_shape': 6,
'action_space': 'discrete',
'encoder_hidden_size_list': [64, 64, 128],
'actor_head_hidden_size': 128,
'critic_head_hidden_size': 128
},
'learn': {
'learner': {
'train_iterations': 1000000000,
'dataloader': {
'num_workers': 0
},
'log_policy': True,
'hook': {
'load_ckpt_before_run': '',
'log_show_after_iter': 100,
'save_ckpt_after_iter': 10000,
'save_ckpt_after_run': True
},
'cfg_type': 'BaseLearnerDict'
},
'update_per_collect': 10,
'batch_size': 320,
'learning_rate': 0.0003,
'value_weight': 0.5,
'entropy_weight': 0.001,
'clip_ratio': 0.2,
'adv_norm': True,
'ignore_done': False,
'grad_clip_type': 'clip_norm',
'grad_clip_value': 0.5
},
'collect': {
'collector': {},
'unroll_len': 1,
'discount_factor': 0.99,
'gae_lambda': 0.95,
'n_sample': 3200
},
'eval': {
'evaluator': {
'eval_freq': 1000,
'render': {
'render_freq': -1,
'mode': 'train_iter'
},
'cfg_type': 'InteractionSerialEvaluatorDict',
'stop_value': 20,
'n_episode': 8
}
},
'other': {
'replay_buffer': {
'replay_buffer_size': 10000
}
},
'on_policy': False,
'cuda': True,
'multi_gpu': False,
'bp_update_sync': True,
'traj_len_inf': False,
'type': 'ppo',
'priority': False,
'priority_IS_weight': False,
'nstep_return': False,
'nstep': 3,
'transition_with_policy_data': True,
'cfg_type': 'PPOOffPolicyDict',
'recompute_adv': True,
'action_space': 'discrete'
},
'exp_name': 'PongNoFrameskip-v4-PPOOffPolicy',
'wandb_logger': {
'gradient_logger': True,
'video_logger': True,
'plot_logger': True,
'action_logger': True,
'return_logger': False
},
'seed': 0
}
```
</details>
**Training Procedure**
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
- **Weights & Biases (wandb):** [monitor link](https://wandb.ai/zjowowen/PongNoFrameskip-v4-PPOOffPolicy)
## Model Information
<!-- Provide the basic links for the model. -->
- **Github Repository:** [repo link](https://github.com/opendilab/DI-engine)
- **Doc**: [DI-engine-docs Algorithm link](https://di-engine-docs.readthedocs.io/en/latest/12_policies/ppo.html)
- **Configuration:** [config link](https://huggingface.co/OpenDILabCommunity/PongNoFrameskip-v4-PPOOffPolicy/blob/main/policy_config.py)
- **Demo:** [video](https://huggingface.co/OpenDILabCommunity/PongNoFrameskip-v4-PPOOffPolicy/blob/main/replay.mp4)
<!-- Provide the size information for the model. -->
- **Parameters total size:** 11501.55 KB
- **Last Update Date:** 2023-06-25
## Environments
<!-- Address questions around what environment the model is intended to be trained and deployed at, including the necessary information needed to be provided for future users. -->
- **Benchmark:** OpenAI/Gym/Atari
- **Task:** PongNoFrameskip-v4
- **Gym version:** 0.25.1
- **DI-engine version:** v0.4.8
- **PyTorch version:** 1.7.1
- **Doc**: [DI-engine-docs Environments link](https://di-engine-docs.readthedocs.io/en/latest/13_envs/atari.html)
|
AtomGradient/Adjust_ChatGLM_6B
|
AtomGradient
| 2023-06-25T12:45:31Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"chatglm",
"feature-extraction",
"custom_code",
"license:other",
"region:us"
] |
feature-extraction
| 2023-06-25T12:04:00Z |
---
license: other
---
```python
from transformers import AutoConfig, AutoModel, AutoTokenizer
import os
import torch
# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
config = AutoConfig.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True, pre_seq_len=128)
model = AutoModel.from_pretrained("THUDM/chatglm-6b", config=config, trust_remote_code=True)
prefix_state_dict = torch.load(os.path.join("./Adjust_ChatGLM_6B/", "pytorch_model.bin"))
new_prefix_state_dict = {}
for k, v in prefix_state_dict.items():
if k.startswith("transformer.prefix_encoder."):
new_prefix_state_dict[k[len("transformer.prefix_encoder."):]] = v
model.transformer.prefix_encoder.load_state_dict(new_prefix_state_dict)
model = model.quantize(4)
model = model.half().cuda()
model.transformer.prefix_encoder.float()
model = model.eval()
response, history = model.chat(tokenizer, "生成衬衣的广告词", history=[])  # prompt: "Generate advertising copy for a shirt"
print(response)
```
|
ahishamm/vit-base-HAM-10000-sharpened-large-patch-32
|
ahishamm
| 2023-06-25T12:32:21Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-25T11:51:12Z |
---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: vit-base-HAM-10000-sharpened-large-patch-32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-HAM-10000-sharpened-large-patch-32
This model is a fine-tuned version of [google/vit-large-patch32-224-in21k](https://huggingface.co/google/vit-large-patch32-224-in21k) on the ahishamm/HAM_db_sharpened dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4582
- Accuracy: 0.8404
- Recall: 0.8404
- F1: 0.8404
- Precision: 0.8404
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | F1 | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.6739 | 0.2 | 100 | 0.7775 | 0.7257 | 0.7257 | 0.7257 | 0.7257 |
| 0.6922 | 0.4 | 200 | 0.6455 | 0.7711 | 0.7711 | 0.7711 | 0.7711 |
| 0.8219 | 0.6 | 300 | 0.7582 | 0.7426 | 0.7426 | 0.7426 | 0.7426 |
| 0.6801 | 0.8 | 400 | 0.6363 | 0.7651 | 0.7651 | 0.7651 | 0.7651 |
| 0.5499 | 1.0 | 500 | 0.6231 | 0.7751 | 0.7751 | 0.7751 | 0.7751 |
| 0.5156 | 1.2 | 600 | 0.6399 | 0.7761 | 0.7761 | 0.7761 | 0.7761 |
| 0.4478 | 1.4 | 700 | 0.5324 | 0.8020 | 0.8020 | 0.8020 | 0.8020 |
| 0.4364 | 1.6 | 800 | 0.5597 | 0.7970 | 0.7970 | 0.7970 | 0.7970 |
| 0.4545 | 1.8 | 900 | 0.5212 | 0.8115 | 0.8115 | 0.8115 | 0.8115 |
| 0.4294 | 2.0 | 1000 | 0.4926 | 0.8264 | 0.8264 | 0.8264 | 0.8264 |
| 0.135 | 2.2 | 1100 | 0.5448 | 0.8204 | 0.8204 | 0.8204 | 0.8204 |
| 0.2628 | 2.4 | 1200 | 0.4916 | 0.8304 | 0.8304 | 0.8304 | 0.8304 |
| 0.2577 | 2.59 | 1300 | 0.4582 | 0.8404 | 0.8404 | 0.8404 | 0.8404 |
| 0.2093 | 2.79 | 1400 | 0.5079 | 0.8344 | 0.8344 | 0.8344 | 0.8344 |
| 0.1415 | 2.99 | 1500 | 0.4760 | 0.8439 | 0.8439 | 0.8439 | 0.8439 |
| 0.0686 | 3.19 | 1600 | 0.5379 | 0.8444 | 0.8444 | 0.8444 | 0.8444 |
| 0.1031 | 3.39 | 1700 | 0.5572 | 0.8384 | 0.8384 | 0.8384 | 0.8384 |
| 0.102 | 3.59 | 1800 | 0.5343 | 0.8464 | 0.8464 | 0.8464 | 0.8464 |
| 0.0531 | 3.79 | 1900 | 0.5482 | 0.8479 | 0.8479 | 0.8479 | 0.8479 |
| 0.0756 | 3.99 | 2000 | 0.5454 | 0.8454 | 0.8454 | 0.8454 | 0.8454 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Luke537/image_classification_food_model
|
Luke537
| 2023-06-25T12:30:18Z | 189 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:food101",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-24T19:15:08Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- food101
metrics:
- accuracy
model-index:
- name: image_classification_food_model
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: food101
type: food101
config: default
split: train[:5000]
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.893
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification_food_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the food101 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6474
- Accuracy: 0.893
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7587 | 0.99 | 62 | 2.5481 | 0.844 |
| 1.8903 | 2.0 | 125 | 1.8096 | 0.874 |
| 1.6502 | 2.98 | 186 | 1.6474 | 0.893 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cpu
- Datasets 2.13.0
- Tokenizers 0.13.3
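A short usage sketch (not part of the auto-generated card; it assumes the repo includes the image processor config, and `meal.jpg` is an illustrative food photo):
```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image
import torch

model_id = "Luke537/image_classification_food_model"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

image = Image.open("meal.jpg")  # any food photo
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])  # predicted food101 label
```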
|
emilianJR/majicMIX_realistic_v6
|
emilianJR
| 2023-06-25T12:26:15Z | 69 | 14 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-18T12:42:51Z |
---
language:
- en
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Diffuser model for this SD checkpoint:
https://civitai.com/models/43331/majicmix-realistic
**emilianJR/majicMIX_realistic_v6** is a Hugging Face Diffusers checkpoint that you can load with **diffusers.StableDiffusionPipeline()**.
Examples | Examples | Examples
---- | ---- | ----
 |  | 
 |  | 
-------
## 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
```python
from diffusers import StableDiffusionPipeline
import torch
model_id = "emilianJR/majicMIX_realistic_v6"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "YOUR PROMPT"
image = pipe(prompt).images[0]
image.save("image.png")
```
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
|
energytrain7/distilbert-base-uncased-finetuned-emotion
|
energytrain7
| 2023-06-25T12:26:13Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-25T07:27:13Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.925
- name: F1
type: f1
value: 0.9251973092238958
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2235
- Accuracy: 0.925
- F1: 0.9252
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8583 | 1.0 | 250 | 0.3353 | 0.899 | 0.8952 |
| 0.2609 | 2.0 | 500 | 0.2235 | 0.925 | 0.9252 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
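A short usage sketch (not in the auto-generated card; the example sentence is illustrative):
```python
from transformers import pipeline

# Text-classification pipeline; labels should follow the emotion dataset (sadness, joy, love, anger, fear, surprise)
classifier = pipeline("text-classification", model="energytrain7/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't wait to see you this weekend!"))
```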
|
bogdancazan/t5-base-newsela-biendata-with-domain-adaptation
|
bogdancazan
| 2023-06-25T12:24:30Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-19T13:46:06Z |
training_args = TrainingArguments(
output_dir='t5-base-wikilarge-newsela-with-domain-adaptation',
num_train_epochs=20,
warmup_steps=250,
per_device_train_batch_size=BATCH_SIZE,
weight_decay=0.01,
learning_rate=2e-4,
# fp16=True,
optim="adafactor",
)
| Step | Training Loss |
|-----:|--------------:|
| 500 | 4.184500 |
| 1000 | 2.470900 |
| 1500 | 2.128900 |
| 2000 | 1.951600 |
| 2500 | 1.834400 |
| 3000 | 1.755800 |
| 3500 | 1.701800 |
| 4000 | 1.656300 |
| 4500 | 1.628800 |
TrainOutput(global_step=4640, training_loss=2.1286644540984057, metrics={'train_runtime': 4090.6694, 'train_samples_per_second': 72.526, 'train_steps_per_second': 1.134, 'total_flos': 0.0, 'train_loss': 2.1286644540984057, 'epoch': 20.0})
|
Tri1/opus-mt-en-ro-finetuned-eng-to-para
|
Tri1
| 2023-06-25T12:21:10Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-25T09:20:15Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-en-ro-finetuned-eng-to-para
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ro-finetuned-eng-to-para
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ro](https://huggingface.co/Helsinki-NLP/opus-mt-en-ro) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0821
- Bleu: 22.2055
- Gen Len: 21.7942
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 0.0865 | 1.0 | 6250 | 0.0821 | 22.2055 | 21.7942 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
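A minimal usage sketch (not part of the auto-generated card; the fine-tuning target is assumed to be an English-to-paraphrase task served through the standard seq2seq interface, and the input sentence is illustrative):
```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="Tri1/opus-mt-en-ro-finetuned-eng-to-para")
print(generator("The weather is beautiful today.", max_length=64)[0]["generated_text"])
```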
|
joystick/Initokyo
|
joystick
| 2023-06-25T12:18:36Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-25T12:10:06Z |
---
license: creativeml-openrail-m
---
|
ahishamm/vit-base-HAM-10000-sharpened-large-patch-16
|
ahishamm
| 2023-06-25T11:49:31Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-25T10:38:43Z |
---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: vit-base-HAM-10000-sharpened-large-patch-16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-HAM-10000-sharpened-large-patch-16
This model is a fine-tuned version of [google/vit-large-patch16-224-in21k](https://huggingface.co/google/vit-large-patch16-224-in21k) on the ahishamm/HAM_db_sharpened dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5504
- Accuracy: 0.8075
- Recall: 0.8075
- F1: 0.8075
- Precision: 0.8075
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | F1 | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.9294 | 0.2 | 100 | 1.0377 | 0.6733 | 0.6733 | 0.6733 | 0.6733 |
| 1.0067 | 0.4 | 200 | 0.8976 | 0.6813 | 0.6813 | 0.6813 | 0.6813 |
| 1.0081 | 0.6 | 300 | 0.9345 | 0.6773 | 0.6773 | 0.6773 | 0.6773 |
| 0.9326 | 0.8 | 400 | 0.8494 | 0.6883 | 0.6883 | 0.6883 | 0.6883 |
| 0.8243 | 1.0 | 500 | 0.7481 | 0.7267 | 0.7267 | 0.7267 | 0.7267 |
| 0.7408 | 1.2 | 600 | 0.7277 | 0.7317 | 0.7317 | 0.7317 | 0.7317 |
| 0.6844 | 1.4 | 700 | 0.7114 | 0.7392 | 0.7392 | 0.7392 | 0.7392 |
| 0.7411 | 1.6 | 800 | 0.6772 | 0.7416 | 0.7416 | 0.7416 | 0.7416 |
| 0.7138 | 1.8 | 900 | 0.7136 | 0.7377 | 0.7377 | 0.7377 | 0.7377 |
| 0.5838 | 2.0 | 1000 | 0.6625 | 0.7521 | 0.7521 | 0.7521 | 0.7521 |
| 0.5315 | 2.2 | 1100 | 0.6104 | 0.7776 | 0.7776 | 0.7776 | 0.7776 |
| 0.6391 | 2.4 | 1200 | 0.6317 | 0.7591 | 0.7591 | 0.7591 | 0.7591 |
| 0.6903 | 2.59 | 1300 | 0.6098 | 0.7656 | 0.7656 | 0.7656 | 0.7656 |
| 0.5798 | 2.79 | 1400 | 0.6211 | 0.7751 | 0.7751 | 0.7751 | 0.7751 |
| 0.5448 | 2.99 | 1500 | 0.5824 | 0.7820 | 0.7820 | 0.7820 | 0.7820 |
| 0.4523 | 3.19 | 1600 | 0.5951 | 0.7776 | 0.7776 | 0.7776 | 0.7776 |
| 0.4485 | 3.39 | 1700 | 0.6114 | 0.7815 | 0.7815 | 0.7815 | 0.7815 |
| 0.487 | 3.59 | 1800 | 0.5730 | 0.7950 | 0.7950 | 0.7950 | 0.7950 |
| 0.4104 | 3.79 | 1900 | 0.5597 | 0.7965 | 0.7965 | 0.7965 | 0.7965 |
| 0.4468 | 3.99 | 2000 | 0.5504 | 0.8075 | 0.8075 | 0.8075 | 0.8075 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Smaraa/bart-text-simplification_1e4_adafactor
|
Smaraa
| 2023-06-25T11:45:02Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-24T11:26:12Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-text-simplification_1e4_adafactor
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-text-simplification_1e4_adafactor
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8377
- Rouge1: 60.5348
- Rouge2: 41.6762
- Rougel: 55.5994
- Rougelsum: 55.5841
- Gen Len: 18.7487
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.1741 | 1.0 | 1163 | 0.6416 | 62.4 | 44.1316 | 57.9029 | 57.8644 | 18.8482 |
| 0.1553 | 2.0 | 2326 | 0.6504 | 62.2879 | 43.9281 | 57.4714 | 57.461 | 18.8063 |
| 0.1369 | 3.0 | 3489 | 0.6656 | 61.2481 | 42.605 | 56.5118 | 56.4636 | 18.733 |
| 0.1286 | 4.0 | 4652 | 0.6906 | 61.3015 | 42.1608 | 56.2688 | 56.1707 | 18.7487 |
| 0.1141 | 5.0 | 5815 | 0.7082 | 62.1771 | 43.1481 | 57.0231 | 57.0673 | 18.911 |
| 0.1016 | 6.0 | 6978 | 0.7188 | 61.408 | 42.2759 | 56.1699 | 56.1779 | 18.8377 |
| 0.0961 | 7.0 | 8141 | 0.7334 | 60.802 | 41.9149 | 56.0171 | 56.0279 | 18.8168 |
| 0.0869 | 8.0 | 9304 | 0.7509 | 60.6564 | 41.3587 | 55.4436 | 55.468 | 18.7382 |
| 0.0783 | 9.0 | 10467 | 0.7713 | 60.3551 | 41.8074 | 55.6856 | 55.679 | 18.7173 |
| 0.0751 | 10.0 | 11630 | 0.7785 | 60.378 | 41.6134 | 55.5217 | 55.505 | 18.8325 |
| 0.0679 | 11.0 | 12793 | 0.7835 | 60.5835 | 41.6735 | 55.5469 | 55.5791 | 18.7435 |
| 0.0619 | 12.0 | 13956 | 0.8012 | 60.8152 | 41.2014 | 55.7186 | 55.7233 | 18.9424 |
| 0.0611 | 13.0 | 15119 | 0.8091 | 60.8188 | 41.8074 | 55.6684 | 55.8026 | 18.7958 |
| 0.0568 | 14.0 | 16282 | 0.8175 | 60.9209 | 41.5689 | 55.8838 | 55.8642 | 18.7277 |
| 0.0527 | 15.0 | 17445 | 0.8250 | 61.0215 | 41.9079 | 55.9018 | 55.8709 | 18.9162 |
| 0.0524 | 16.0 | 18608 | 0.8317 | 60.8214 | 41.6554 | 55.8053 | 55.7947 | 18.7277 |
| 0.0504 | 17.0 | 19771 | 0.8310 | 60.6533 | 41.6507 | 55.9289 | 55.9426 | 18.7958 |
| 0.0486 | 18.0 | 20934 | 0.8345 | 60.4722 | 41.5319 | 55.3384 | 55.3655 | 18.6859 |
| 0.0491 | 19.0 | 22097 | 0.8379 | 60.4012 | 41.2452 | 55.5059 | 55.5553 | 18.8115 |
| 0.0489 | 20.0 | 23260 | 0.8377 | 60.5348 | 41.6762 | 55.5994 | 55.5841 | 18.7487 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
PraveenJesu/openai-whisper-medium-peft-lora-colab
|
PraveenJesu
| 2023-06-25T11:43:36Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-25T11:43:33Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
Erfan2001/distilbert_NoTokenized
|
Erfan2001
| 2023-06-25T11:43:35Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-24T22:00:23Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xxx
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xxx
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6856
- Accuracy: 0.7758
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7996 | 1.0 | 4284 | 0.7921 | 0.7287 |
| 0.5539 | 2.0 | 8568 | 0.6856 | 0.7758 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
NasimB/gpt2-2-og-concat-modified-aochild
|
NasimB
| 2023-06-25T11:41:21Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-25T06:55:05Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-2-og-concat-modified-aochild
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-2-og-concat-modified-aochild
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9262
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 5.9891 | 0.24 | 500 | 5.0538 |
| 4.7513 | 0.48 | 1000 | 4.6760 |
| 4.4523 | 0.72 | 1500 | 4.4485 |
| 4.2602 | 0.96 | 2000 | 4.3053 |
| 4.0617 | 1.21 | 2500 | 4.2166 |
| 3.9742 | 1.45 | 3000 | 4.1365 |
| 3.9095 | 1.69 | 3500 | 4.0632 |
| 3.8462 | 1.93 | 4000 | 3.9949 |
| 3.6761 | 2.17 | 4500 | 3.9718 |
| 3.6346 | 2.41 | 5000 | 3.9336 |
| 3.613 | 2.65 | 5500 | 3.8883 |
| 3.5949 | 2.89 | 6000 | 3.8502 |
| 3.4561 | 3.13 | 6500 | 3.8626 |
| 3.387 | 3.38 | 7000 | 3.8393 |
| 3.3931 | 3.62 | 7500 | 3.8152 |
| 3.395 | 3.86 | 8000 | 3.7882 |
| 3.2751 | 4.1 | 8500 | 3.8162 |
| 3.1697 | 4.34 | 9000 | 3.8117 |
| 3.1949 | 4.58 | 9500 | 3.7952 |
| 3.1957 | 4.82 | 10000 | 3.7726 |
| 3.1301 | 5.06 | 10500 | 3.8013 |
| 2.9449 | 5.3 | 11000 | 3.8132 |
| 2.9803 | 5.54 | 11500 | 3.8048 |
| 2.9921 | 5.79 | 12000 | 3.7903 |
| 2.9654 | 6.03 | 12500 | 3.8054 |
| 2.7336 | 6.27 | 13000 | 3.8363 |
| 2.7653 | 6.51 | 13500 | 3.8379 |
| 2.7754 | 6.75 | 14000 | 3.8285 |
| 2.777 | 6.99 | 14500 | 3.8186 |
| 2.5506 | 7.23 | 15000 | 3.8731 |
| 2.5598 | 7.47 | 15500 | 3.8769 |
| 2.5731 | 7.71 | 16000 | 3.8768 |
| 2.5762 | 7.96 | 16500 | 3.8744 |
| 2.4267 | 8.2 | 17000 | 3.9055 |
| 2.4121 | 8.44 | 17500 | 3.9110 |
| 2.4249 | 8.68 | 18000 | 3.9133 |
| 2.4157 | 8.92 | 18500 | 3.9140 |
| 2.366 | 9.16 | 19000 | 3.9237 |
| 2.3398 | 9.4 | 19500 | 3.9252 |
| 2.3398 | 9.64 | 20000 | 3.9263 |
| 2.3365 | 9.88 | 20500 | 3.9262 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
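A minimal generation sketch (not part of the auto-generated card; the prompt and sampling settings are assumptions):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="NasimB/gpt2-2-og-concat-modified-aochild")
out = generator("Once upon a time", max_new_tokens=40, do_sample=True, top_p=0.95)
print(out[0]["generated_text"])
```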
|
edfryo/bangkelser
|
edfryo
| 2023-06-25T11:39:27Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-05-09T11:58:00Z |
---
license: bigscience-openrail-m
---
|
jondurbin/airoboros-13b-gpt4-1.4-fp16
|
jondurbin
| 2023-06-25T11:39:17Z | 1,423 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:jondurbin/airoboros-gpt4-1.4",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-22T10:46:42Z |
---
license: other
datasets:
- jondurbin/airoboros-gpt4-1.4
---
float16 version of https://huggingface.co/jondurbin/airoboros-13b-gpt4-1.4
|
Ryukijano/DialoGPT_med_model
|
Ryukijano
| 2023-06-25T11:38:19Z | 118 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-24T12:37:08Z |
Hello there! This bot was trained on DialoGPT for 45 epochs.
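A minimal chat sketch (not from the original card; it follows the standard DialoGPT single-turn pattern, and the user message is illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Ryukijano/DialoGPT_med_model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Encode one user turn, terminated by the EOS token, and generate the bot's reply
input_ids = tokenizer.encode("Hello, how are you?" + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```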
|
czz23/journal-setfit-model
|
czz23
| 2023-06-25T10:37:43Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-06-25T10:34:44Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# czz23/journal-setfit-model
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("czz23/journal-setfit-model")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
siddh4rth/fintuned-falcon-7b-truthful-qa
|
siddh4rth
| 2023-06-25T10:36:25Z | 4 | 0 |
peft
|
[
"peft",
"RefinedWebModel",
"custom_code",
"4-bit",
"region:us"
] | null | 2023-06-25T09:46:47Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
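A loading sketch (not part of the auto-generated card; it assumes the repo stores LoRA adapter weights whose config points at a Falcon-7B base model, and that `accelerate` is available for `device_map="auto"`):
```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

adapter_id = "siddh4rth/fintuned-falcon-7b-truthful-qa"
config = PeftConfig.from_pretrained(adapter_id)

# Load the base model referenced by the adapter config, then attach the adapter
base = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path, trust_remote_code=True, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
```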
|
jiyuanq/falcon-40b-instruct-gptq-128g-act
|
jiyuanq
| 2023-06-25T10:35:13Z | 14 | 0 |
transformers
|
[
"transformers",
"safetensors",
"RefinedWeb",
"text-generation",
"custom_code",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-25T08:31:32Z |
---
library_name: transformers
pipeline_tag: text-generation
---
falcon-40b-instruct quantized with GPTQ using the script in https://github.com/huggingface/text-generation-inference/pull/438
- group size: 128
- act order: true
- nsamples: 128
- dataset: wikitext2
|
ahishamm/vit-base-HAM-10000-sharpened-patch-32
|
ahishamm
| 2023-06-25T10:35:04Z | 192 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-25T10:06:47Z |
---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: vit-base-HAM-10000-sharpened-patch-32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-HAM-10000-sharpened-patch-32
This model is a fine-tuned version of [google/vit-base-patch32-224-in21k](https://huggingface.co/google/vit-base-patch32-224-in21k) on the ahishamm/HAM_db_sharpened dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4806
- Accuracy: 0.8369
- Recall: 0.8369
- F1: 0.8369
- Precision: 0.8369
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | F1 | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.8099 | 0.2 | 100 | 0.8060 | 0.7247 | 0.7247 | 0.7247 | 0.7247 |
| 0.7437 | 0.4 | 200 | 0.7020 | 0.7541 | 0.7541 | 0.7541 | 0.7541 |
| 0.7982 | 0.6 | 300 | 0.7352 | 0.7411 | 0.7411 | 0.7411 | 0.7411 |
| 0.7646 | 0.8 | 400 | 0.6603 | 0.7626 | 0.7626 | 0.7626 | 0.7626 |
| 0.6141 | 1.0 | 500 | 0.6373 | 0.7771 | 0.7771 | 0.7771 | 0.7771 |
| 0.5934 | 1.2 | 600 | 0.6141 | 0.7820 | 0.7820 | 0.7820 | 0.7820 |
| 0.5524 | 1.4 | 700 | 0.5621 | 0.8030 | 0.8030 | 0.8030 | 0.8030 |
| 0.5057 | 1.6 | 800 | 0.6074 | 0.7855 | 0.7855 | 0.7855 | 0.7855 |
| 0.5519 | 1.8 | 900 | 0.5486 | 0.7990 | 0.7990 | 0.7990 | 0.7990 |
| 0.4784 | 2.0 | 1000 | 0.5382 | 0.8060 | 0.8060 | 0.8060 | 0.8060 |
| 0.2592 | 2.2 | 1100 | 0.5237 | 0.8165 | 0.8165 | 0.8165 | 0.8165 |
| 0.3872 | 2.4 | 1200 | 0.5345 | 0.8120 | 0.8120 | 0.8120 | 0.8120 |
| 0.2506 | 2.59 | 1300 | 0.5061 | 0.8214 | 0.8214 | 0.8214 | 0.8214 |
| 0.2907 | 2.79 | 1400 | 0.4940 | 0.8354 | 0.8354 | 0.8354 | 0.8354 |
| 0.2436 | 2.99 | 1500 | 0.4806 | 0.8369 | 0.8369 | 0.8369 | 0.8369 |
| 0.1472 | 3.19 | 1600 | 0.5231 | 0.8219 | 0.8219 | 0.8219 | 0.8219 |
| 0.1441 | 3.39 | 1700 | 0.5452 | 0.8329 | 0.8329 | 0.8329 | 0.8329 |
| 0.1327 | 3.59 | 1800 | 0.5410 | 0.8354 | 0.8354 | 0.8354 | 0.8354 |
| 0.0615 | 3.79 | 1900 | 0.5473 | 0.8424 | 0.8424 | 0.8424 | 0.8424 |
| 0.0943 | 3.99 | 2000 | 0.5490 | 0.8409 | 0.8409 | 0.8409 | 0.8409 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
abhishek-kumar/dreambooth_test
|
abhishek-kumar
| 2023-06-25T10:34:42Z | 30 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-24T16:02:54Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - abhishek-kumar/output
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
|
Omogo/xlm-roberta-base-finetuned-panx-de
|
Omogo
| 2023-06-25T10:27:58Z | 124 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-25T07:39:34Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.de
split: validation
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8602627537962806
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1355
- F1: 0.8603
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2574 | 1.0 | 525 | 0.1627 | 0.8221 |
| 0.1295 | 2.0 | 1050 | 0.1435 | 0.8467 |
| 0.0815 | 3.0 | 1575 | 0.1355 | 0.8603 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
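A quick usage sketch (not in the auto-generated card; the German sentence is illustrative and the PAN-X NER label set is assumed):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Omogo/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
print(ner("Angela Merkel wohnt in Berlin."))
```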
|
TheBloke/orca_mini_3B-GGML
|
TheBloke
| 2023-06-25T10:25:04Z | 0 | 59 |
transformers
|
[
"transformers",
"en",
"dataset:psmathur/alpaca_orca",
"dataset:psmathur/dolly-v2_orca",
"dataset:psmathur/WizardLM_Orca",
"arxiv:2306.02707",
"license:mit",
"region:us"
] | null | 2023-06-24T22:33:56Z |
---
inference: false
license: mit
language:
- en
library_name: transformers
datasets:
- psmathur/alpaca_orca
- psmathur/dolly-v2_orca
- psmathur/WizardLM_Orca
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Pankaj Mathur's Orca Mini 3B GGML
These files are GGML format model files for [Pankaj Mathur's Orca Mini 3B](https://huggingface.co/psmathur/orca_mini_3b).
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
* [ctransformers](https://github.com/marella/ctransformers)
## Repositories available
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/orca_mini_3B-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/psmathur/orca_mini_3b)
## Prompt template:
```
### System:
You are an AI assistant that follows instruction extremely well. Help as much as you can.
### User:
prompt
### Response:
```
or
```
### System:
You are an AI assistant that follows instruction extremely well. Help as much as you can.
### User:
prompt
### Input:
input
### Response:
```
<!-- compatibility_ggml start -->
## Compatibility
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
I have quantized these 'original' quantisation methods using an older version of llama.cpp so that they remain compatible with llama.cpp as of May 19th, commit `2d5db48`.
These are guaranteed to be compatible with any UIs, tools and libraries released since late May.
### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
These cannot be provided with Open Llama 3B models at this time, due to an issue in llama.cpp.
This is being worked on in the llama.cpp repo. More issues here: https://github.com/ggerganov/llama.cpp/issues/1919
Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| orca-mini-3b.ggmlv3.q4_0.bin | q4_0 | 4 | 1.93 GB | 4.43 GB | Original llama.cpp quant method, 4-bit. |
| orca-mini-3b.ggmlv3.q4_1.bin | q4_1 | 4 | 2.14 GB | 4.64 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| orca-mini-3b.ggmlv3.q5_0.bin | q5_0 | 5 | 2.36 GB | 4.86 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| orca-mini-3b.ggmlv3.q5_1.bin | q5_1 | 5 | 2.57 GB | 5.07 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| orca-mini-3b.ggmlv3.q8_0.bin | q8_0 | 8 | 3.64 GB | 6.14 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m orca-mini-3b.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### System:\nYou are an story writing assistant who writes very long, detailed and interesting stories\n\n### User:\nWrite a story about llamas\n\n### Input:\n{input}\n\n### Response:\n"
```
If you're able to use full GPU offloading, you should use `-t 1` to get best performance.
If not able to fully offload to GPU, you should use more cores. Change `-t 10` to the number of physical CPU cores you have, or a lower number depending on what gives best performance.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: Pyrater, WelcomeToTheClub, Kalila, Mano Prime, Trenton Dambrowitz, Spiking Neurons AB, Pierre Kircher, Fen Risland, Kevin Schuppel, Luke, Rainer Wilmers, vamX, Gabriel Puliatti, Alex , Karl Bernard, Ajan Kanaga, Talal Aujan, Space Cruiser, ya boyyy, biorpg, Johann-Peter Hartmann, Asp the Wyvern, Ai Maven, Ghost , Preetika Verma, Nikolai Manek, trip7s trip, John Detwiler, Fred von Graf, Artur Olbinski, subjectnull, John Villwock, Junyu Yang, Rod A, Lone Striker, Chris McCloskey, Iucharbius , Matthew Berman, Illia Dulskyi, Khalefa Al-Ahmad, Imad Khwaja, chris gileta, Willem Michiel, Greatston Gnanesh, Derek Yates, K, Alps Aficionado, Oscar Rangel, David Flickinger, Luke Pendergrass, Deep Realms, Eugene Pentland, Cory Kujawski, terasurfer , Jonathan Leane, senxiiz, Joseph William Delisle, Sean Connelly, webtim, zynix , Nathan LeClaire.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Pankaj Mathur's Orca Mini 3B
# orca_mini_3b
An [OpenLLaMA-3B](https://github.com/openlm-research/open_llama) model trained on explain-tuned datasets, created using instructions and inputs from the WizardLM, Alpaca & Dolly-V2 datasets and applying the dataset-construction approaches of the Orca Research Paper.
# Dataset
We built explain-tuned datasets from the [WizardLM dataset ~70K](https://github.com/nlpxucan/WizardLM), [Alpaca dataset ~52K](https://crfm.stanford.edu/2023/03/13/alpaca.html) & [Dolly-V2 dataset ~15K](https://github.com/databrickslabs/dolly), created using approaches from the [Orca Research Paper](https://arxiv.org/abs/2306.02707).
We leverage all 15 system instructions provided in the Orca Research Paper to generate custom datasets, in contrast to the vanilla instruction tuning used by the original datasets.
This helps the student model (i.e. this model) learn the ***thought*** process of the teacher model, which is ChatGPT (gpt-3.5-turbo-0301).
Please see the example usage below for how the **System** prompt is added before each **instruction**.
# Training
The training configurations are provided in the table below.
Training was done on 8x A100 (80G) GPUs and took around 4 hours, at a cost of $48, using [Lambda Labs](https://lambdalabs.com).
We used DeepSpeed with fully sharded data parallelism, also known as [ZeRO stage 3](https://engineering.fb.com/2021/07/15/open-source/fsdp/), by writing our own fine-tuning scripts and leveraging some of the model training code provided by the amazing [OpenAlpaca repo](https://github.com/yxuansu/OpenAlpaca).
Here are some of the parameters used during training:
|||
|:-------------:|:-------------:|
|*batch_size*|64|
|*train_micro_batch_size_per_gpu*|4|
|*gradient_accumulation_steps*|2|
|*Learning rate*|2e-5|
|*Max length*|1024|
|*Epochs*|3|
|*Optimizer*|AdamW|
# Example Usage
Below is an example of how to use this model.
```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
# Hugging Face model_path
model_path = 'psmathur/orca_mini_3b'
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
model_path, torch_dtype=torch.float16, device_map='auto',
)
# Text generation helper
def generate_text(system, instruction, input=None):
    # Build the prompt in the System / User / Input / Response format used for training
    if input:
        prompt = f"### System:\n{system}\n\n### User:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n"
    else:
        prompt = f"### System:\n{system}\n\n### User:\n{instruction}\n\n### Response:\n"
    tokens = tokenizer.encode(prompt)
    tokens = torch.LongTensor(tokens).unsqueeze(0)
    tokens = tokens.to('cuda')

    instance = {'input_ids': tokens, 'top_p': 1.0, 'temperature': 0.7, 'generate_len': 1024, 'top_k': 50}

    length = len(tokens[0])
    with torch.no_grad():
        rest = model.generate(
            input_ids=tokens,
            max_length=length + instance['generate_len'],
            use_cache=True,
            do_sample=True,
            top_p=instance['top_p'],
            temperature=instance['temperature'],
            top_k=instance['top_k'],
        )
    # Strip the prompt tokens and decode only the newly generated part
    output = rest[0][length:]
    string = tokenizer.decode(output, skip_special_tokens=True)
    return f'[!] Response: {string}'
# Sample Test Instruction Used by Youtuber Sam Witteveen https://www.youtube.com/@samwitteveenai
system = 'You are an AI assistant that follows instruction extremely well. Help as much as you can.'
instruction = 'Write a letter to Sam Altman, CEO of OpenAI, requesting him to convert GPT4 a private model by OpenAI to an open source project'
print(generate_text(system, instruction))
```
```
[!] Response:
Dear Sam Altman,
I am writing to request that you convert the GPT4 private model developed by OpenAI to an open source project. As a user of OpenAI, I have been waiting for the day when I can use the advanced natural language processing capabilities of GPT4 in a more open and accessible way.
While OpenAI has made significant progress in developing AI applications, it has primarily focused on building private models that are not accessible to the general public. However, with the recent release of GPT-3, there is a growing demand for more open and accessible AI tools.
Converting GPT4 to an open source project would allow for greater transparency, collaboration, and innovation. It would also help to build trust in the technology and ensure that it is used ethically and responsibly.
I urge you to consider converting GPT4 to an open source project. This would be a significant contribution to the AI community and would help to create a more open and accessible future.
Thank you for your consideration.
Sincerely,
[Your Name]
```
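The helper above also accepts an optional `input` argument, which fills the `### Input:` section of the prompt. Below is a small sketch of that path; it assumes the example above has already been run, and the passage is made up purely for illustration.
```python
# Assumes `generate_text`, `model` and `tokenizer` from the example above are already loaded.
system = 'You are an AI assistant that follows instruction extremely well. Help as much as you can.'
instruction = 'Summarize the passage below in one sentence.'
input_text = 'Llamas are domesticated South American camelids that have been used as pack animals by Andean cultures for centuries.'

print(generate_text(system, instruction, input=input_text))
```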
**P.S. I am #opentowork and #collaboration, if you can help, please reach out to me at psmathur.public@gmail.com**
Next Goals:
1) Try more data, like actually using FLAN-v2, just like the Orca Research Paper (I am open to suggestions)
2) Provide more options for text generation UIs (maybe https://github.com/oobabooga/text-generation-webui)
3) Provide 4-bit GGML/GPTQ quantized models (maybe [TheBloke](https://huggingface.co/TheBloke) can help here)
Limitations & Biases:
This model can produce factually incorrect output, and should not be relied on to produce factually accurate information.
This model was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
Disclaimer:
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model.
Please consult an attorney before using this model for commercial purposes.
Citation:
If you found wizardlm_alpaca_dolly_orca_open_llama_3b useful in your research or applications, please kindly cite using the following BibTeX:
```
@misc{wizardlm_alpaca_dolly_orca_open_llama_3b,
author = {Pankaj Mathur},
title = {wizardlm_alpaca_dolly_orca_open_llama_3b: An explain tuned OpenLLaMA-3b model on custom wizardlm, alpaca, & dolly datasets},
year = {2023},
publisher = {GitHub, HuggingFace},
journal = {GitHub repository, HuggingFace repository},
howpublished = {\url{https://github.com/pankajarm/wizardlm_alpaca_dolly_orca_open_llama_3b}, \url{https://huggingface.co/psmathur/wizardlm_alpaca_dolly_orca_open_llama_3b}},
}
```
```
@software{openlm2023openllama,
author = {Xinyang Geng and Hao Liu},
title = {OpenLLaMA: An Open Reproduction of LLaMA},
month = May,
year = 2023,
url = {https://github.com/openlm-research/open_llama}
}
```
```
@misc{openalpaca,
author = {Yixuan Su and Tian Lan and Deng Cai},
title = {OpenAlpaca: A Fully Open-Source Instruction-Following Model Based On OpenLLaMA},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/yxuansu/OpenAlpaca}},
}
```
```
@misc{alpaca,
author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto },
title = {Stanford Alpaca: An Instruction-following LLaMA model},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
}
```
|
Sp1786/mutliclass-sentiment-analysis-bert
|
Sp1786
| 2023-06-25T10:22:55Z | 4 | 0 |
transformers
|
[
"transformers",
"bert",
"code",
"text-classification",
"en",
"dataset:Sp1786/multiclass-sentiment-analysis-dataset",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-21T11:23:59Z |
---
license: apache-2.0
datasets:
- Sp1786/multiclass-sentiment-analysis-dataset
language:
- en
metrics:
- bleu
- sacrebleu
library_name: transformers
pipeline_tag: text-classification
tags:
- code
---
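No usage example is included in this card; the following is a minimal sketch using the `transformers` pipeline API. The label names printed by the model (e.g. negative/neutral/positive) are an assumption based on the dataset name and should be checked against the checkpoint's `id2label` config.
```python
from transformers import pipeline

# Hypothetical usage sketch for this text-classification checkpoint.
classifier = pipeline("text-classification", model="Sp1786/mutliclass-sentiment-analysis-bert")

# Returns a label/score pair; actual label names depend on the model's id2label mapping.
print(classifier("The battery life is great, but the screen is disappointing."))
```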
|
kbondar17/test-trainer
|
kbondar17
| 2023-06-25T10:12:41Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-25T10:06:32Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: test-trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-trainer
This model is a fine-tuned version of [nlptown/bert-base-multilingual-uncased-sentiment](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4009
- F1: 0.6363
- Roc Auc: 0.7682
- Accuracy: 0.6079
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
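For reference, these settings correspond roughly to the `TrainingArguments` sketch below; this is an approximation, not the exact training script, and model/dataset loading is omitted.
```python
from transformers import TrainingArguments

# Approximate reconstruction of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="test-trainer",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=3.0,
    lr_scheduler_type="linear",  # Adam betas/epsilon are left at their defaults
)
```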
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| No log | 1.0 | 125 | 0.2975 | 0.5710 | 0.7129 | 0.4693 |
| No log | 2.0 | 250 | 0.3742 | 0.6226 | 0.7621 | 0.6013 |
| No log | 3.0 | 375 | 0.4009 | 0.6363 | 0.7682 | 0.6079 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
joohwan/2222333l-gd
|
joohwan
| 2023-06-25T10:05:13Z | 114 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-25T08:10:32Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: 2222333l-gd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 2222333l-gd
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0984
- Wer: 13.1908
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0206 | 0.18 | 500 | 0.1634 | 17.8738 |
| 0.0496 | 0.36 | 1000 | 0.1403 | 12.4680 |
| 0.0516 | 0.54 | 1500 | 0.1123 | 10.2394 |
| 0.0755 | 0.72 | 2000 | 0.0984 | 13.1908 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
bogdancazan/t5-small-newsela-biendata-with-domain-adaptation
|
bogdancazan
| 2023-06-25T09:45:44Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-19T11:56:49Z |
```python
training_args = TrainingArguments(
    output_dir='t5-small-newsela-biendata-with-domain-adaptation',
    num_train_epochs=20,
    warmup_steps=250,
    per_device_train_batch_size=BATCH_SIZE,
    weight_decay=0.01,
    learning_rate=2e-4,
    fp16=True,
    optim="adafactor",
)
```
| Step | Training Loss |
|:----:|:-------------:|
| 500 | 35.466600 |
| 1000 | 25.795400 |
| 1500 | 10.923200 |
| 2000 | 4.515500 |
```
TrainOutput(global_step=2320, training_loss=16.92537920721646, metrics={'train_runtime': 628.0033, 'train_samples_per_second': 472.418, 'train_steps_per_second': 3.694, 'total_flos': 0.0, 'train_loss': 16.92537920721646, 'epoch': 20.0})
```
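Since the card only shows the training configuration, here is a minimal inference sketch for this text2text checkpoint; the example sentence is made up, and the exact prompt format expected by the fine-tuned model is an assumption.
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_id = "bogdancazan/t5-small-newsela-biendata-with-domain-adaptation"
tokenizer = T5Tokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

# Hypothetical input sentence; the training prompt format is not documented in the card.
text = "The committee postponed the ratification of the agreement until further notice."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```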
|
sd-concepts-library/pokemon-raichu-sd-model
|
sd-concepts-library
| 2023-06-25T09:26:29Z | 0 | 0 | null |
[
"base_model:stabilityai/stable-diffusion-2",
"base_model:finetune:stabilityai/stable-diffusion-2",
"license:mit",
"region:us"
] | null | 2023-06-25T09:26:28Z |
---
license: mit
base_model: stabilityai/stable-diffusion-2
---
### Pokemon Raichu - SD model on Stable Diffusion
This is the `<cat-toy>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:




|
ahishamm/vit-base-HAM-10000-sharpened
|
ahishamm
| 2023-06-25T09:17:26Z | 190 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-25T08:42:48Z |
---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: vit-base-HAM-10000-sharpened
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-HAM-10000-sharpened
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the ahishamm/HAM_db_sharpened dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4392
- Accuracy: 0.8529
- Recall: 0.8529
- F1: 0.8529
- Precision: 0.8529
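For completeness, a minimal inference sketch with the image-classification pipeline is shown below; the image path is a placeholder, and the class labels come from the checkpoint's config (derived from the ahishamm/HAM_db_sharpened dataset).
```python
from transformers import pipeline

# Hypothetical usage sketch for the fine-tuned ViT skin-lesion classifier.
classifier = pipeline("image-classification", model="ahishamm/vit-base-HAM-10000-sharpened")

# "lesion.jpg" is a placeholder path to a dermatoscopic image.
print(classifier("lesion.jpg"))
```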
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | F1 | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.7303 | 0.2 | 100 | 0.7828 | 0.7197 | 0.7197 | 0.7197 | 0.7197 |
| 0.7198 | 0.4 | 200 | 0.7519 | 0.7377 | 0.7377 | 0.7377 | 0.7377 |
| 0.7519 | 0.6 | 300 | 0.7125 | 0.7541 | 0.7541 | 0.7541 | 0.7541 |
| 0.6657 | 0.8 | 400 | 0.6623 | 0.7571 | 0.7571 | 0.7571 | 0.7571 |
| 0.5896 | 1.0 | 500 | 0.5964 | 0.7835 | 0.7835 | 0.7835 | 0.7835 |
| 0.515 | 1.2 | 600 | 0.5745 | 0.8015 | 0.8015 | 0.8015 | 0.8015 |
| 0.4318 | 1.4 | 700 | 0.5061 | 0.8200 | 0.8200 | 0.8200 | 0.8200 |
| 0.4299 | 1.6 | 800 | 0.5239 | 0.8075 | 0.8075 | 0.8075 | 0.8075 |
| 0.4793 | 1.8 | 900 | 0.5366 | 0.8125 | 0.8125 | 0.8125 | 0.8125 |
| 0.4202 | 2.0 | 1000 | 0.4882 | 0.8244 | 0.8244 | 0.8244 | 0.8244 |
| 0.2105 | 2.2 | 1100 | 0.5330 | 0.8234 | 0.8234 | 0.8234 | 0.8234 |
| 0.2597 | 2.4 | 1200 | 0.4604 | 0.8369 | 0.8369 | 0.8369 | 0.8369 |
| 0.2261 | 2.59 | 1300 | 0.4893 | 0.8409 | 0.8409 | 0.8409 | 0.8409 |
| 0.1853 | 2.79 | 1400 | 0.4793 | 0.8494 | 0.8494 | 0.8494 | 0.8494 |
| 0.1739 | 2.99 | 1500 | 0.4392 | 0.8529 | 0.8529 | 0.8529 | 0.8529 |
| 0.0629 | 3.19 | 1600 | 0.4941 | 0.8584 | 0.8584 | 0.8584 | 0.8584 |
| 0.0802 | 3.39 | 1700 | 0.4974 | 0.8613 | 0.8613 | 0.8613 | 0.8613 |
| 0.0712 | 3.59 | 1800 | 0.5416 | 0.8594 | 0.8594 | 0.8594 | 0.8594 |
| 0.0365 | 3.79 | 1900 | 0.5318 | 0.8574 | 0.8574 | 0.8574 | 0.8574 |
| 0.0591 | 3.99 | 2000 | 0.5344 | 0.8574 | 0.8574 | 0.8574 | 0.8574 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
melowhy/whyde
|
melowhy
| 2023-06-25T09:15:10Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-06-25T09:15:10Z |
---
license: bigscience-openrail-m
---
|
Ellbendls/Pixelcopter-PLE-v0
|
Ellbendls
| 2023-06-25T09:09:39Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-23T12:37:16Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 62.70 +/- 42.68
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
RoundtTble/dinov2_vits14_onnx
|
RoundtTble
| 2023-06-25T08:20:24Z | 0 | 0 | null |
[
"onnx",
"region:us"
] | null | 2023-06-24T07:10:50Z |
# dinov2_vits14
## ONNX Model
Check this [PR](https://github.com/facebookresearch/dinov2/pull/129).
## Run
Run triton container.
```
make triton
```
```
docker logs dinov2_vits14_triton
=============================
== Triton Inference Server ==
=============================
NVIDIA Release 23.04 (build 58408265)
Triton Server Version 2.33.0
Copyright (c) 2018-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES. All rights reserved.
This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license
WARNING: CUDA Minor Version Compatibility mode ENABLED.
Using driver version 525.105.17 which has support for CUDA 12.0. This container
was built with CUDA 12.1 and will be run in Minor Version Compatibility mode.
CUDA Forward Compatibility is preferred over Minor Version Compatibility for use
with this container but was unavailable:
[[Forward compatibility was attempted on non supported HW (CUDA_ERROR_COMPAT_NOT_SUPPORTED_ON_DEVICE) cuInit()=804]]
See https://docs.nvidia.com/deploy/cuda-compatibility/ for details.
I0625 08:05:36.712010 1 pinned_memory_manager.cc:240] Pinned memory pool is created at '0x7f6c46000000' with size 268435456
I0625 08:05:36.712625 1 cuda_memory_manager.cc:105] CUDA memory pool is created on device 0 with size 67108864
I0625 08:05:36.717785 1 model_lifecycle.cc:459] loading: dinov2_vits14:1
I0625 08:05:36.723707 1 onnxruntime.cc:2504] TRITONBACKEND_Initialize: onnxruntime
I0625 08:05:36.723725 1 onnxruntime.cc:2514] Triton TRITONBACKEND API version: 1.12
I0625 08:05:36.723731 1 onnxruntime.cc:2520] 'onnxruntime' TRITONBACKEND API version: 1.12
I0625 08:05:36.723735 1 onnxruntime.cc:2550] backend configuration:
{"cmdline":{"auto-complete-config":"true","backend-directory":"/opt/tritonserver/backends","min-compute-capability":"6.000000","default-max-batch-size":"4"}}
I0625 08:05:36.770311 1 onnxruntime.cc:2608] TRITONBACKEND_ModelInitialize: dinov2_vits14 (version 1)
I0625 08:05:36.770781 1 onnxruntime.cc:666] skipping model configuration auto-complete for 'dinov2_vits14': inputs and outputs already specified
I0625 08:05:36.771205 1 onnxruntime.cc:2651] TRITONBACKEND_ModelInstanceInitialize: dinov2_vits14_0 (GPU device 0)
2023-06-25 08:05:37.157976034 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 465, index: 122, mask: {125, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.158142138 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 466, index: 123, mask: {62, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
[... similar pthread_setaffinity_np messages repeated for the remaining onnxruntime worker threads ...]
2023-06-25 08:05:38.570069572 [W:onnxruntime:, session_state.cc:1136 VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers which may or may not have an negative impact on performance. e.g. ORT explicitly assigns shape related ops to CPU to improve perf.
2023-06-25 08:05:38.570088387 [W:onnxruntime:, session_state.cc:1138 VerifyEachNodeIsAssignedToAnEp] Rerunning with verbose output on a non-minimal build will show node assignments.
I0625 08:05:39.975559 1 model_lifecycle.cc:694] successfully loaded 'dinov2_vits14' version 1
I0625 08:05:39.975625 1 server.cc:583]
+------------------+------+
| Repository Agent | Path |
+------------------+------+
+------------------+------+
I0625 08:05:39.975662 1 server.cc:610]
+-------------+-----------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Backend | Path | Config |
+-------------+-----------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------+
| onnxruntime | /opt/tritonserver/backends/onnxruntime/libtriton_onnxruntime.so | {"cmdline":{"auto-complete-config":"true","backend-directory":"/opt/tritonserver/backends","min-compute-capability":"6.000000","default-max-batch-size":"4"}} |
+-------------+-----------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------+
I0625 08:05:39.975683 1 server.cc:653]
+---------------+---------+--------+
| Model | Version | Status |
+---------------+---------+--------+
| dinov2_vits14 | 1 | READY |
+---------------+---------+--------+
I0625 08:05:39.991510 1 metrics.cc:808] Collecting metrics for GPU 0: NVIDIA GeForce RTX 3090
I0625 08:05:39.992145 1 metrics.cc:701] Collecting CPU metrics
I0625 08:05:39.992360 1 tritonserver.cc:2387]
+----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Option | Value |
+----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| server_id | triton |
| server_version | 2.33.0 |
| server_extensions | classification sequence model_repository model_repository(unload_dependents) schedule_policy model_configuration system_shared_memory cuda_shared_memory binary_tensor_data parameters statistics trace logging |
| model_repository_path[0] | /models |
| model_control_mode | MODE_NONE |
| strict_model_config | 0 |
| rate_limit | OFF |
| pinned_memory_pool_byte_size | 268435456 |
| cuda_memory_pool_byte_size{0} | 67108864 |
| min_supported_compute_capability | 6.0 |
| strict_readiness | 1 |
| exit_timeout | 30 |
| cache_enabled | 0 |
+----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
I0625 08:05:39.993603 1 grpc_server.cc:2450] Started GRPCInferenceService at 0.0.0.0:8001
I0625 08:05:39.993771 1 http_server.cc:3555] Started HTTPService at 0.0.0.0:8000
I0625 08:05:40.034678 1 http_server.cc:185] Started Metrics Service at 0.0.0.0:8002
```
Perf analyzer `dinov2_vits14`
```
make perf
```
```
docker run --gpus all --rm -it --net host nvcr.io/nvidia/tritonserver:23.04-py3-sdk perf_analyzer -m dinov2_vits14 --percentile=95 -i grpc -u 0.0.0.0:8001 --concurrency-range 16:16 --shape input:3,280,280
=================================
== Triton Inference Server SDK ==
=================================
NVIDIA Release 23.04 (build 58408269)
Copyright (c) 2018-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES. All rights reserved.
This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license
WARNING: CUDA Minor Version Compatibility mode ENABLED.
Using driver version 525.105.17 which has support for CUDA 12.0. This container
was built with CUDA 12.1 and will be run in Minor Version Compatibility mode.
CUDA Forward Compatibility is preferred over Minor Version Compatibility for use
with this container but was unavailable:
[[Forward compatibility was attempted on non supported HW (CUDA_ERROR_COMPAT_NOT_SUPPORTED_ON_DEVICE) cuInit()=804]]
See https://docs.nvidia.com/deploy/cuda-compatibility/ for details.
*** Measurement Settings ***
Batch size: 1
Service Kind: Triton
Using "time_windows" mode for stabilization
Measurement window: 5000 msec
Latency limit: 0 msec
Concurrency limit: 16 concurrent requests
Using synchronous calls for inference
Stabilizing using p95 latency
Request concurrency: 16
Client:
Request count: 9403
Throughput: 522.33 infer/sec
p50 latency: 30482 usec
p90 latency: 32100 usec
p95 latency: 32564 usec
p99 latency: 34203 usec
Avg gRPC time: 30589 usec ((un)marshal request/response 93 usec + response wait 30496 usec)
Server:
Inference count: 9403
Execution count: 1177
Successful request count: 9403
Avg request latency: 24295 usec (overhead 220 usec + queue 9042 usec + compute input 1511 usec + compute infer 13485 usec + compute output 37 usec)
Inferences/Second vs. Client p95 Batch Latency
Concurrency: 16, throughput: 522.33 infer/sec, latency 32564 usec
```
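For reference, below is a minimal Python gRPC client sketch for querying this deployment. The input tensor name and shape follow the `--shape input:3,280,280` flag used above; the FP32 dtype is an assumption, and the output name is read back from the response rather than taken from the model config.
```python
import numpy as np
import tritonclient.grpc as grpcclient

# Connect to the gRPC endpoint started by the server above.
client = grpcclient.InferenceServerClient(url="0.0.0.0:8001")

# Build a dummy image tensor matching the perf-analyzer shape (dtype is assumed FP32).
data = np.random.rand(3, 280, 280).astype(np.float32)
infer_input = grpcclient.InferInput("input", list(data.shape), "FP32")
infer_input.set_data_from_numpy(data)

# Run inference against the dinov2_vits14 model and read the first output tensor.
response = client.infer(model_name="dinov2_vits14", inputs=[infer_input])
output_name = response.get_response().outputs[0].name
print(response.as_numpy(output_name).shape)
```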
|
joohwan/whisper-small-gd
|
joohwan
| 2023-06-25T08:10:27Z | 79 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-25T05:51:56Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-small-gd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-gd
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1180
- Wer: 14.2298
## Model description
More information needed
## Intended uses & limitations
More information needed
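As a hedged illustration (not part of the original card), the checkpoint can typically be loaded with the Transformers ASR pipeline; the repo id is this model's and the audio path is a placeholder.
```python
from transformers import pipeline

# Load the fine-tuned Whisper checkpoint into an ASR pipeline.
asr = pipeline("automatic-speech-recognition", model="joohwan/whisper-small-gd")

# Transcribe a local audio file (placeholder path).
result = asr("sample.wav")
print(result["text"])
```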
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- training_steps: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0723 | 0.09 | 250 | 0.2013 | 22.6924 |
| 0.044 | 0.18 | 500 | 0.1826 | 27.3905 |
| 0.1209 | 0.27 | 750 | 0.1705 | 27.2700 |
| 0.0973 | 0.36 | 1000 | 0.1462 | 15.1182 |
| 0.0941 | 0.45 | 1250 | 0.1322 | 15.6603 |
| 0.076 | 0.54 | 1500 | 0.1258 | 18.3557 |
| 0.0967 | 0.63 | 1750 | 0.1203 | 14.8020 |
| 0.0757 | 0.72 | 2000 | 0.1180 | 14.2298 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Vrushali/model-t5
|
Vrushali
| 2023-06-25T08:01:42Z | 102 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-25T07:25:22Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: model-t5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model-t5
This model is a fine-tuned version of [Vrushali/model-t5](https://huggingface.co/Vrushali/model-t5) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 38 | 0.0016 |
| No log | 2.0 | 76 | 0.0000 |
| No log | 3.0 | 114 | 0.0000 |
| No log | 4.0 | 152 | 0.0000 |
| No log | 5.0 | 190 | 0.0000 |
| No log | 6.0 | 228 | 0.0000 |
| No log | 7.0 | 266 | 0.0000 |
| No log | 8.0 | 304 | 0.0000 |
| No log | 9.0 | 342 | 0.0000 |
| No log | 10.0 | 380 | 0.0000 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Lajonbot/stablelm-base-alpha-3b-instruct-pl-lora
|
Lajonbot
| 2023-06-25T07:37:23Z | 0 | 0 | null |
[
"tensorboard",
"pl",
"dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish",
"license:openrail",
"region:us"
] | null | 2023-06-15T06:13:44Z |
---
license: openrail
datasets:
- Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish
language:
- pl
---
|
Lajonbot/polish-gpt2-small-instruct
|
Lajonbot
| 2023-06-25T07:36:40Z | 114 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"pl",
"dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish",
"license:openrail",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-20T19:33:30Z |
---
license: openrail
datasets:
- Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish
language:
- pl
---
|
Davlan/xlm-roberta-base-wikiann-ner
|
Davlan
| 2023-06-25T07:32:38Z | 158 | 6 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"safetensors",
"xlm-roberta",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:04Z |
---
language:
- ar
- as
- bn
- ca
- en
- es
- eu
- fr
- gu
- hi
- id
- ig
- mr
- pa
- pt
- sw
- ur
- vi
- yo
- zh
- multilingual
datasets:
- wikiann
---
# xlm-roberta-base-wikiann-ner
## Model description
**xlm-roberta-base-wikiann-ner** is the first **Named Entity Recognition** model for 20 languages (Arabic, Assamese, Bengali, Catalan, English, Spanish, Basque, French, Gujarati, Hindi, Indonesian, Igbo, Marathi, Punjabi, Portuguese, Swahili, Urdu, Vietnamese, Yoruba, and Chinese) based on a fine-tuned XLM-RoBERTa base model. It achieves state-of-the-art performance for the NER task. It has been trained to recognize three types of entities: location (LOC), organization (ORG), and person (PER).
Specifically, this model is an *xlm-roberta-base* model that was fine-tuned on an aggregation of language datasets obtained from the [WikiANN](https://huggingface.co/datasets/wikiann) dataset.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for NER.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("Davlan/xlm-roberta-base-wikiann-ner")
model = AutoModelForTokenClassification.from_pretrained("Davlan/xlm-roberta-base-wikiann-ner")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Ìbọn ń ró kù kù gẹ́gẹ́ bí ọwọ́ ọ̀pọ̀ aráàlù ṣe tẹ ìbọn ní Kyiv láti dojú kọ Russia"
ner_results = nlp(example)
print(ner_results)
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on 20 NER datasets (Arabic, Assamese, Bengali, Catalan, English, Spanish, Basque, French, Gujarati, Hindi, Indonesian, Igbo, Marathi, Punjabi, Portuguese, Swahili, Urdu, Vietnamese, Yoruba, and Chinese) from the [wikiann](https://huggingface.co/datasets/wikiann) dataset.
The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes:
Abbreviation|Description
-|-
O|Outside of a named entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
### BibTeX entry and citation info
```
|
Davlan/xlm-roberta-base-finetuned-arabic
|
Davlan
| 2023-06-25T07:14:04Z | 2,187 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-08-25T19:06:05Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: ar_xlmr-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ar_xlmr-base
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6612
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 10
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.21.1
- Pytorch 1.7.1+cu110
- Datasets 1.16.1
- Tokenizers 0.12.1
|
Davlan/xlm-roberta-large-finetuned-igbo
|
Davlan
| 2023-06-25T07:13:52Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-08-25T18:59:29Z |
---
tags:
- generated_from_trainer
model-index:
- name: ibo_xlmr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ibo_xlmr
This model is a fine-tuned version of [models/ibo_xlmr/](https://huggingface.co/models/ibo_xlmr/) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.9762
- eval_runtime: 31.9667
- eval_samples_per_second: 32.471
- eval_steps_per_second: 4.067
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 5
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 2
- total_train_batch_size: 10
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.21.1
- Pytorch 1.7.1+cu110
- Datasets 1.16.1
- Tokenizers 0.12.1
|
Davlan/byt5-base-eng-yor-mt
|
Davlan
| 2023-06-25T07:13:35Z | 147 | 2 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"arxiv:2103.08647",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z |
---
language:
- yo
- en
datasets:
- JW300 + [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)
---
# byt5-base-eng-yor-mt
## Model description
**byt5-base-eng-yor-mt** is a **machine translation** model from English language to Yorùbá language based on a fine-tuned byt5-base model. It establishes a **strong baseline** for automatically translating texts from English to Yorùbá.
Specifically, this model is a *byt5-base* model that was fine-tuned on JW300 Yorùbá corpus and [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)
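As a hedged usage sketch (not part of the original card), translation can be run with the standard seq2seq API; the generation settings below are illustrative.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the fine-tuned English-to-Yoruba checkpoint.
tokenizer = AutoTokenizer.from_pretrained("Davlan/byt5-base-eng-yor-mt")
model = AutoModelForSeq2SeqLM.from_pretrained("Davlan/byt5-base-eng-yor-mt")

# Translate a short English sentence to Yoruba.
inputs = tokenizer("Good morning, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```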
#### Limitations and bias
This model is limited by its training dataset. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on the JW300 corpus and the [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt) dataset.
## Training procedure
This model was trained on NVIDIA V100 GPU
## Eval results on Test set (BLEU score)
Fine-tuning byt5-base achieves **12.23 BLEU** on [Menyo-20k test set](https://arxiv.org/abs/2103.08647) while mt5-base achieves 9.82
### BibTeX entry and citation info
By David Adelani
```
```
|
Davlan/xlm-roberta-base-finetuned-english
|
Davlan
| 2023-06-25T07:13:11Z | 112 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"fill-mask",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
---
|
Davlan/xlm-roberta-large-masakhaner
|
Davlan
| 2023-06-25T07:12:21Z | 135 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"safetensors",
"xlm-roberta",
"token-classification",
"arxiv:2103.11811",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:04Z |
---
language:
- amh
- hau
- ibo
- kin
- lug
- luo
- pcm
- swa
- wol
- yor
- multilingual
datasets:
- masakhaner
---
# xlm-roberta-large-masakhaner
## Model description
**xlm-roberta-large-masakhaner** is the first **Named Entity Recognition** model for 10 African languages (Amharic, Hausa, Igbo, Kinyarwanda, Luganda, Luo, Nigerian Pidgin, Swahili, Wolof, and Yorùbá) based on a fine-tuned XLM-RoBERTa large model. It achieves state-of-the-art performance for the NER task. It has been trained to recognize four types of entities: dates & times (DATE), location (LOC), organizations (ORG), and person (PER).
Specifically, this model is a *xlm-roberta-large* model that was fine-tuned on an aggregation of African language datasets obtained from Masakhane [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for NER.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("Davlan/xlm-roberta-large-masakhaner")
model = AutoModelForTokenClassification.from_pretrained("Davlan/xlm-roberta-large-masakhaner")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Emir of Kano turban Zhang wey don spend 18 years for Nigeria"
ner_results = nlp(example)
print(ner_results)
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on 10 African NER datasets (Amharic, Hausa, Igbo, Kinyarwanda, Luganda, Luo, Nigerian Pidgin, Swahili, Wolof, and Yorùbá) from the Masakhane [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset.
The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes:
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
## Training procedure
This model was trained on a single NVIDIA V100 GPU with recommended hyperparameters from the [original MasakhaNER paper](https://arxiv.org/abs/2103.11811) which trained & evaluated the model on MasakhaNER corpus.
## Eval results on Test set (F-score)
language|F1-score
-|-
amh |75.76
hau |91.75
ibo |86.26
kin |76.38
lug |84.64
luo |80.65
pcm |89.55
swa |89.48
wol |70.70
yor |82.05
### BibTeX entry and citation info
```
@article{adelani21tacl,
title = {Masakha{NER}: Named Entity Recognition for African Languages},
author = {David Ifeoluwa Adelani and Jade Abbott and Graham Neubig and Daniel D'souza and Julia Kreutzer and Constantine Lignos and Chester Palen-Michel and Happy Buzaaba and Shruti Rijhwani and Sebastian Ruder and Stephen Mayhew and Israel Abebe Azime and Shamsuddeen Muhammad and Chris Chinenye Emezue and Joyce Nakatumba-Nabende and Perez Ogayo and Anuoluwapo Aremu and Catherine Gitau and Derguene Mbaye and Jesujoba Alabi and Seid Muhie Yimam and Tajuddeen Gwadabe and Ignatius Ezeani and Rubungo Andre Niyongabo and Jonathan Mukiibi and Verrah Otiende and Iroro Orife and Davis David and Samba Ngom and Tosin Adewumi and Paul Rayson and Mofetoluwa Adeyemi and Gerald Muriuki and Emmanuel Anebi and Chiamaka Chukwuneke and Nkiruka Odu and Eric Peter Wairagala and Samuel Oyerinde and Clemencia Siro and Tobius Saul Bateesa and Temilola Oloyede and Yvonne Wambui and Victor Akinode and Deborah Nabagereka and Maurice Katusiime and Ayodele Awokoya and Mouhamadane MBOUP and Dibora Gebreyohannes and Henok Tilaye and Kelechi Nwaike and Degaga Wolde and Abdoulaye Faye and Blessing Sibanda and Orevaoghene Ahia and Bonaventure F. P. Dossou and Kelechi Ogueji and Thierno Ibrahima DIOP and Abdoulaye Diallo and Adewale Akinfaderin and Tendai Marengereke and Salomey Osei},
journal = {Transactions of the Association for Computational Linguistics (TACL)},
month = {},
url = {https://arxiv.org/abs/2103.11811},
year = {2021}
}
```
|
NasimB/gpt2-dp-mod-aochild-10chars
|
NasimB
| 2023-06-25T06:53:44Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-25T03:14:38Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-dp-mod-aochild-10chars
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-dp-mod-aochild-10chars
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4173
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7077 | 0.27 | 500 | 5.6423 |
| 5.3468 | 0.54 | 1000 | 5.2154 |
| 5.0042 | 0.8 | 1500 | 4.9608 |
| 4.7637 | 1.07 | 2000 | 4.7969 |
| 4.5583 | 1.34 | 2500 | 4.6931 |
| 4.4721 | 1.61 | 3000 | 4.5939 |
| 4.3855 | 1.88 | 3500 | 4.5049 |
| 4.218 | 2.15 | 4000 | 4.4679 |
| 4.1202 | 2.41 | 4500 | 4.4175 |
| 4.105 | 2.68 | 5000 | 4.3697 |
| 4.0733 | 2.95 | 5500 | 4.3257 |
| 3.8601 | 3.22 | 6000 | 4.3344 |
| 3.8504 | 3.49 | 6500 | 4.3033 |
| 3.8507 | 3.76 | 7000 | 4.2759 |
| 3.8215 | 4.02 | 7500 | 4.2709 |
| 3.5828 | 4.29 | 8000 | 4.2887 |
| 3.6183 | 4.56 | 8500 | 4.2711 |
| 3.6264 | 4.83 | 9000 | 4.2489 |
| 3.5136 | 5.1 | 9500 | 4.2794 |
| 3.3547 | 5.36 | 10000 | 4.2895 |
| 3.383 | 5.63 | 10500 | 4.2727 |
| 3.3982 | 5.9 | 11000 | 4.2594 |
| 3.2002 | 6.17 | 11500 | 4.3133 |
| 3.1199 | 6.44 | 12000 | 4.3184 |
| 3.1483 | 6.71 | 12500 | 4.3123 |
| 3.1516 | 6.97 | 13000 | 4.3013 |
| 2.9083 | 7.24 | 13500 | 4.3587 |
| 2.9076 | 7.51 | 14000 | 4.3641 |
| 2.9176 | 7.78 | 14500 | 4.3616 |
| 2.8855 | 8.05 | 15000 | 4.3806 |
| 2.7292 | 8.32 | 15500 | 4.3978 |
| 2.7443 | 8.58 | 16000 | 4.4023 |
| 2.7445 | 8.85 | 16500 | 4.4046 |
| 2.702 | 9.12 | 17000 | 4.4125 |
| 2.6515 | 9.39 | 17500 | 4.4159 |
| 2.6552 | 9.66 | 18000 | 4.4170 |
| 2.6529 | 9.92 | 18500 | 4.4173 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
weykon/text-to-svg-cat
|
weykon
| 2023-06-25T06:43:11Z | 0 | 0 | null |
[
"license:wtfpl",
"region:us"
] | null | 2023-06-25T06:20:24Z |
---
license: wtfpl
---
Note that if your files are larger than 5GB you’ll also need to run:
```
huggingface-cli lfs-enable-largefiles .
```
## Git push problem
Set the git push URL to https://USER:TOKEN@huggingface.co/~~~~~~~
|
zanafi/sentiment_model
|
zanafi
| 2023-06-25T06:31:04Z | 16 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:indonlu",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-23T06:53:10Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- indonlu
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: sentiment_model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: indonlu
type: indonlu
config: emot
split: validation
args: emot
metrics:
- name: Accuracy
type: accuracy
value: 0.7363636363636363
- name: Precision
type: precision
value: 0.7397155596092384
- name: Recall
type: recall
value: 0.7459489407651173
- name: F1
type: f1
value: 0.741920437379511
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment_model
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on the indonlu dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7788
- Accuracy: 0.7364
- Precision: 0.7397
- Recall: 0.7459
- F1: 0.7419
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.1939 | 1.0 | 221 | 0.8261 | 0.6932 | 0.7203 | 0.7034 | 0.7056 |
| 0.6866 | 2.0 | 442 | 0.7925 | 0.725 | 0.7378 | 0.7377 | 0.7346 |
| 0.4791 | 3.0 | 663 | 0.7788 | 0.7364 | 0.7397 | 0.7459 | 0.7419 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
sukantan/all-mpnet-base-v2-ftlegal-v3
|
sukantan
| 2023-06-25T06:20:52Z | 5 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"dataset:sukantan/nyaya-st-training",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-06-25T06:20:46Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
datasets:
- sukantan/nyaya-st-training
---
# sukantan/all-mpnet-base-v2-ftlegal-v3
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sukantan/all-mpnet-base-v2-ftlegal-v3')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sukantan/all-mpnet-base-v2-ftlegal-v3)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 391 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MegaBatchMarginLoss.MegaBatchMarginLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 391,
"weight_decay": 0.01
}
```
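A hedged sketch of how these parameters map onto a `SentenceTransformer.fit()` call is shown below; the base checkpoint and the toy training pairs are assumptions, not taken from this card.
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Assumed base checkpoint; the pair data is illustrative only.
model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")
train_examples = [
    InputExample(texts=["query about contract law", "matching legal passage"]),
    InputExample(texts=["query about bail provisions", "another legal passage"]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.MegaBatchMarginLoss(model=model)

# Mirrors the fit() parameters dumped above.
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=10,
    warmup_steps=391,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
    scheduler="WarmupLinear",
)
```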
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
nolanaatama/mlycrsrvc750pchsvrs
|
nolanaatama
| 2023-06-25T05:19:58Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-25T04:47:27Z |
---
license: creativeml-openrail-m
---
|
Gayathri142214002/t5_qg_1
|
Gayathri142214002
| 2023-06-25T04:58:01Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-25T04:53:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5_qg_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5_qg_1
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0323
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.658 | 0.69 | 10 | 1.9854 |
| 1.7442 | 1.38 | 20 | 1.6146 |
| 1.3456 | 2.07 | 30 | 1.3937 |
| 0.9931 | 2.76 | 40 | 1.2447 |
| 0.9253 | 3.45 | 50 | 1.1519 |
| 0.7154 | 4.14 | 60 | 1.0958 |
| 0.6624 | 4.83 | 70 | 1.0645 |
| 0.6384 | 5.52 | 80 | 1.0412 |
| 0.4889 | 6.21 | 90 | 1.0323 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ardhies/CuteAsianFace
|
ardhies
| 2023-06-25T04:10:18Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-25T04:06:53Z |
---
license: creativeml-openrail-m
---
|
Laurie/baichuan-7b-qlora-moss
|
Laurie
| 2023-06-25T04:06:12Z | 5 | 0 |
peft
|
[
"peft",
"text-generation",
"zh",
"en",
"dataset:fnlp/moss-003-sft-data",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2023-06-25T03:38:18Z |
---
library_name: peft
license: apache-2.0
datasets:
- fnlp/moss-003-sft-data
language:
- zh
- en
pipeline_tag: text-generation
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
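A hedged sketch of the equivalent `BitsAndBytesConfig` when loading the base model for QLoRA is shown below; the loading call itself (and `device_map`) is an assumption, only the quantization fields mirror the list above.
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization with double quantization and fp16 compute, as listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)

# Load the base model in 4-bit before attaching the PEFT adapter from this repo.
base_model = AutoModelForCausalLM.from_pretrained(
    "baichuan-inc/baichuan-7B",
    quantization_config=bnb_config,
    trust_remote_code=True,
    device_map="auto",
)
```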
### Framework versions
- PEFT 0.4.0.dev0
## Usage
```
git clone https://huggingface.co/Laurie/baichuan-7b-qlora-moss
cd baichuan-7b-qlora-moss
python src/web_demo.py \
  --model_name_or_path baichuan-inc/baichuan-7B \
  --checkpoint_dir .
```
|
andrewromitti/alzheimer_model_aug_deit5
|
andrewromitti
| 2023-06-25T03:58:45Z | 193 | 1 |
transformers
|
[
"transformers",
"pytorch",
"deit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-25T02:14:15Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: alzheimer_model_aug_deit5
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9996875
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# alzheimer_model_aug_deit5
This model is a fine-tuned version of [facebook/deit-base-distilled-patch16-224](https://huggingface.co/facebook/deit-base-distilled-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0012
- Accuracy: 0.9997
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 1234
- gradient_accumulation_steps: 10
- total_train_batch_size: 160
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5045 | 1.0 | 212 | 0.1414 | 0.9522 |
| 0.0779 | 2.0 | 424 | 0.0222 | 0.9961 |
| 0.0156 | 3.0 | 637 | 0.0164 | 0.9941 |
| 0.0032 | 4.0 | 849 | 0.0044 | 0.9983 |
| 0.0004 | 4.99 | 1060 | 0.0012 | 0.9997 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
CJacobnriia/spatnzRVC
|
CJacobnriia
| 2023-06-25T03:56:17Z | 0 | 0 | null |
[
"en",
"region:us"
] | null | 2023-06-25T01:52:32Z |
---
language:
- en
---
This is an RVC model of spatnz (https://www.youtube.com/channel/UCcNPbOeFo-qM0wpis8Lwdig)

|