Dataset schema (column statistics from the dataset viewer):

| Column | Type | Min | Max |
|---|---|---|---|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-08-29 12:28:39 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (526 classes) | n/a | n/a |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | n/a | n/a |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-08-29 12:28:30 |
| card | string (length) | 11 | 1.01M |
joheras/longformer-base-4096-bne-es-finetuned-clinais
joheras
2023-05-03T06:45:27Z
93
0
transformers
[ "transformers", "pytorch", "longformer", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-04-28T14:56:31Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: longformer-base-4096-bne-es-finetuned-clinais results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # longformer-base-4096-bne-es-finetuned-clinais This model is a fine-tuned version of [PlanTL-GOB-ES/longformer-base-4096-bne-es](https://huggingface.co/PlanTL-GOB-ES/longformer-base-4096-bne-es) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.7533 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.2298 | 1.0 | 102 | 1.8098 | | 2.0755 | 2.0 | 204 | 1.8368 | | 2.0273 | 3.0 | 306 | 1.7774 | | 1.9818 | 4.0 | 408 | 1.7263 | | 1.9824 | 5.0 | 510 | 1.7474 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
hollowstrawberry/multicharloras
hollowstrawberry
2023-05-03T06:36:36Z
0
7
diffusers
[ "diffusers", "lora", "characters", "anime", "text-to-image", "en", "license:wtfpl", "region:us" ]
text-to-image
2023-05-03T06:01:19Z
--- license: wtfpl language: - en library_name: diffusers pipeline_tag: text-to-image tags: - lora - characters - anime --- # Multi-Character Loras I made these. Enjoy. If you like my work please consider [leaving me a tip](https://ko-fi.com/holostrawberry) :) You might wonder why all of these are schoolgirls. Well they're marketable and uniforms are easy for Stable Diffusion. &nbsp; ### DDLC / Doki Doki Literature Club * Monika * Sayori * Natsuki * Yuri ![](Doki%20Doki%20Literature%20Club/ddlc.png) ### Love is War / Kaguya-sama wa Kokurasetai * Miko Iino * Chika Fujiwara * Kaguya Shinomiya * Ai Hayasaka * Kei Shirogane ![](Love%20is%20War/loveiswar.png) ### The Quintessential Quintuplets / Go-Tōbun no Hanayome * Ichika Nakano * Nino Nakano * Miku Nakano * Yotsuba Nakano * Itsuki Nakano ![](The%20Quintessential%20Quintuplets/quintuplets.png) ### Rent-A-Girlfriend / Kanojo, Okarishimasu * Mami Nanami * Chizuru Ichinose / Chizuru Mizuhara * Ruka Sarashina * Sumi Sakurasawa ![](Rent-A-Girlfriend/rentagf.png)
TrajanovRisto/en_ner_esg
TrajanovRisto
2023-05-03T06:30:21Z
1
0
spacy
[ "spacy", "token-classification", "en", "model-index", "region:us" ]
token-classification
2023-05-03T06:28:53Z
--- tags: - spacy - token-classification language: - en model-index: - name: en_ner_esg results: - task: name: NER type: token-classification metrics: - name: NER Precision type: precision value: 0.95 - name: NER Recall type: recall value: 0.9245742092 - name: NER F Score type: f_score value: 0.9371146732 --- | Feature | Description | | --- | --- | | **Name** | `en_ner_esg` | | **Version** | `0.0.0` | | **spaCy** | `>=3.5.2,<3.6.0` | | **Default Pipeline** | `transformer`, `ner` | | **Components** | `transformer`, `ner` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | n/a | | **License** | n/a | | **Author** | [n/a]() | ### Label Scheme <details> <summary>View label scheme (3 labels for 1 components)</summary> | Component | Labels | | --- | --- | | **`ner`** | `Environmental`, `Governance`, `Social` | </details> ### Accuracy | Type | Score | | --- | --- | | `ENTS_F` | 93.71 | | `ENTS_P` | 95.00 | | `ENTS_R` | 92.46 | | `TRANSFORMER_LOSS` | 3213.34 | | `NER_LOSS` | 2096.88 |
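As a quick sanity check on the accuracy table above, the reported NER F score is simply the harmonic mean of the reported precision and recall (an illustrative computation, not part of the spaCy pipeline):

```python
# The NER F score above is the harmonic mean of precision and recall.
def f_score(precision, recall):
    return 2 * precision * recall / (precision + recall)

f = f_score(0.95, 0.9245742092)
print(f)  # ~0.9371146732, matching the reported F score
```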
Buseak/canine_deasciifier_0305
Buseak
2023-05-03T06:24:52Z
714
0
transformers
[ "transformers", "pytorch", "tensorboard", "canine", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-05-03T05:48:53Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: canine_deasciifier_0305 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # canine_deasciifier_0305 This model is a fine-tuned version of [google/canine-s](https://huggingface.co/google/canine-s) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0007 - Precision: 0.9978 - Recall: 0.9983 - F1: 0.9981 - Accuracy: 0.9998 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 244 | 0.0672 | 0.7433 | 0.8204 | 0.7800 | 0.9735 | | No log | 2.0 | 488 | 0.0445 | 0.8205 | 0.8889 | 0.8533 | 0.9832 | | 0.1401 | 3.0 | 732 | 0.0256 | 0.8950 | 0.9259 | 0.9102 | 0.9906 | | 0.1401 | 4.0 | 976 | 0.0165 | 0.9384 | 0.9483 | 0.9433 | 0.9943 | | 0.0364 | 5.0 | 1220 | 0.0112 | 0.9597 | 0.9629 | 0.9613 | 0.9962 | | 0.0364 | 6.0 | 1464 | 0.0089 | 0.9656 | 0.9736 | 0.9696 | 0.9970 | | 0.021 | 7.0 | 1708 | 0.0073 | 0.9710 | 0.9797 | 0.9753 | 0.9976 | | 0.021 | 8.0 | 1952 | 0.0060 | 0.9740 | 0.9838 | 0.9789 | 0.9980 | | 0.0139 | 9.0 | 2196 | 0.0038 | 0.9856 | 0.9890 | 0.9873 | 0.9988 | | 0.0139 | 10.0 | 2440 | 0.0030 | 0.9893 | 0.9912 | 0.9903 | 0.9991 | | 0.01 | 11.0 | 2684 | 0.0024 | 0.9916 | 0.9932 | 0.9924 | 0.9993 | | 0.01 | 12.0 | 2928 | 0.0021 | 0.9919 | 0.9941 | 0.9930 | 0.9993 | | 0.0072 | 13.0 | 3172 | 0.0018 | 0.9938 | 0.9957 | 0.9947 | 0.9995 | | 0.0072 | 14.0 | 3416 | 0.0016 | 0.9940 | 0.9958 | 0.9949 | 0.9995 | | 0.0056 | 15.0 | 3660 | 0.0012 | 0.9955 | 0.9968 | 0.9962 | 0.9996 | | 0.0056 | 16.0 | 3904 | 0.0012 | 0.9954 | 0.9969 | 0.9962 | 0.9996 | | 0.0045 | 17.0 | 4148 | 0.0008 | 0.9975 | 0.9979 | 0.9977 | 0.9998 | | 0.0045 | 18.0 | 4392 | 0.0008 | 0.9975 | 0.9981 | 0.9978 | 0.9998 | | 0.0039 | 19.0 | 4636 | 0.0008 | 0.9974 | 0.9981 | 0.9977 | 0.9998 | | 0.0039 | 20.0 | 4880 | 0.0007 | 0.9978 | 0.9983 | 0.9981 | 0.9998 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
tanfiona/cnc-v2-st1-csc
tanfiona
2023-05-03T06:04:36Z
110
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "en", "license:unknown", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-02T06:06:58Z
--- language: en license: unknown widget: - text: "She fell because he pushed her." example_title: "Causal Example 1" - text: "He pushed her, causing her to fall." example_title: "Causal Example 2" - text: "She fell onto him." example_title: "Non-causal Example 1" - text: "He is Billy and he pushed her." example_title: "Non-causal Example 2" --- Binary causal sentence classification: * LABEL_0 = Non-causal * LABEL_1 = Causal Trained on Causal News Corpus Version 2. For more information, please refer to our repository: https://github.com/tanfiona/CausalNewsCorpus
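A minimal sketch of post-processing this model's raw labels into the names documented above. The pipeline call itself is omitted (it would download the checkpoint), so the prediction dict below is mocked in the shape a `transformers` text-classification pipeline returns:

```python
# Map this model's raw labels to the names documented above.
ID2LABEL = {"LABEL_0": "Non-causal", "LABEL_1": "Causal"}

def readable(prediction):
    # prediction is shaped like one item of a transformers
    # text-classification pipeline result: {"label": ..., "score": ...}
    return {"label": ID2LABEL[prediction["label"]], "score": prediction["score"]}

mocked = {"label": "LABEL_1", "score": 0.98}  # e.g. "She fell because he pushed her."
print(readable(mocked))  # {'label': 'Causal', 'score': 0.98}
```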
mattjmattj/HF_RL_unit2_q_FrozenLake
mattjmattj
2023-05-03T05:56:04Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-05-03T05:55:59Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="mattjmattj/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
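Since the snippet in the card assumes `load_from_hub` and `gym` are already imported, here is a self-contained sketch of what the downloaded Q-table is used for: greedy action selection on a deterministic (no-slip) 4x4 grid. The Q-table below is hand-built for illustration only, not the trained artifact:

```python
# Hypothetical Q-table sketch: greedy rollout of a tabular policy on a
# deterministic 4x4 grid, the no-slip analogue of FrozenLake-v1
# (actions: 0=left, 1=down, 2=right, 3=up; state = row*4 + col).
N = 4
qtable = [[0.0] * 4 for _ in range(N * N)]
for s in (0, 1, 2):        # top row: prefer moving right
    qtable[s][2] = 1.0
for s in (3, 7, 11):       # last column: prefer moving down
    qtable[s][1] = 1.0

def step(state, action):
    row, col = divmod(state, N)
    if action == 0:   col = max(col - 1, 0)
    elif action == 1: row = min(row + 1, N - 1)
    elif action == 2: col = min(col + 1, N - 1)
    else:             row = max(row - 1, 0)
    return row * N + col

state = 0
for _ in range(6):
    greedy_action = max(range(4), key=lambda a: qtable[state][a])
    state = step(state, greedy_action)
print(state)  # 15: the bottom-right goal cell
```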
cyrodw/dqn-SpaceInvadersNoFrameskip-v4
cyrodw
2023-05-03T05:55:29Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-05-03T04:17:32Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 635.00 +/- 310.41 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga cyrodw -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga cyrodw -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga cyrodw ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
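A sketch of how the exploration hyperparameters above translate into a per-step epsilon: SB3's DQN decays epsilon linearly over `exploration_fraction` of the total timesteps, from 1.0 down to `exploration_final_eps`. The function and its defaults are an illustrative reimplementation, not part of the SB3 API:

```python
# Linear exploration schedule implied by exploration_fraction=0.1,
# exploration_final_eps=0.01 and n_timesteps=1e6 (illustrative sketch).
def epsilon(step, n_timesteps=1_000_000, fraction=0.1, initial=1.0, final=0.01):
    progress = min(step / (fraction * n_timesteps), 1.0)
    return initial + progress * (final - initial)

print(epsilon(0))        # 1.0 at the start of training
print(epsilon(50_000))   # halfway through the decay window, ~0.505
print(epsilon(200_000))  # past 10% of training: stays at ~0.01
```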
cha00/a2c-PandaReachDense-v2
cha00
2023-05-03T05:41:56Z
0
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-05-03T05:39:31Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v2 type: PandaReachDense-v2 metrics: - type: mean_reward value: -1.88 +/- 0.70 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v2** This is a trained model of an **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
Ibrahim-Alam/finetuning-sentiment-model-3000-samples
Ibrahim-Alam
2023-05-03T05:36:08Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-03T04:37:27Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-3000-samples results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: test args: plain_text metrics: - name: Accuracy type: accuracy value: 0.961 - name: F1 type: f1 value: 0.9607514256960751 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.1232 - Accuracy: 0.961 - F1: 0.9608 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
Felix555/dqn-SpaceInvadersNoFrameskip-v4
Felix555
2023-05-03T05:29:50Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-05-02T06:18:26Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 580.50 +/- 192.75 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Felix555 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Felix555 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Felix555 ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
Gayathri142214002/t5-QG-2
Gayathri142214002
2023-05-03T05:08:35Z
161
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-05-03T04:59:50Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: t5-QG-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-QG-2 This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8743 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.4196 | 0.4 | 10 | 2.2206 | | 1.8176 | 0.81 | 20 | 1.6716 | | 1.4509 | 1.21 | 30 | 1.3951 | | 1.1259 | 1.62 | 40 | 1.2469 | | 0.8528 | 2.02 | 50 | 1.1312 | | 0.8001 | 2.42 | 60 | 1.0782 | | 0.7021 | 2.83 | 70 | 1.0161 | | 0.6407 | 3.23 | 80 | 0.9279 | | 0.5636 | 3.64 | 90 | 0.8877 | | 0.633 | 4.04 | 100 | 0.8979 | | 0.4598 | 4.44 | 110 | 0.8909 | | 0.5543 | 4.85 | 120 | 0.8881 | | 0.4215 | 5.25 | 130 | 0.8954 | | 0.517 | 5.66 | 140 | 0.8753 | | 0.4442 | 6.06 | 150 | 0.8747 | | 0.4141 | 6.46 | 160 | 0.8743 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
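The `total_train_batch_size` listed in the hyperparameters above is not an independent setting; it is derived from the per-device batch size and the gradient accumulation steps. A small illustrative helper (not a Trainer API):

```python
def effective_batch_size(per_device_batch, grad_accum_steps, n_devices=1):
    # Gradients are accumulated over grad_accum_steps micro-batches before
    # each optimizer step, so the effective batch is the product.
    return per_device_batch * grad_accum_steps * n_devices

print(effective_batch_size(1, 4))  # 4, as reported above (batch size 1, accumulation 4)
```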
Hius/DreamFul-V2
Hius
2023-05-03T04:49:40Z
162
1
diffusers
[ "diffusers", "text-to-image", "en", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-04-01T11:23:35Z
--- language: - en library_name: diffusers license: creativeml-openrail-m pipeline_tag: text-to-image --- This model mix aims to create the most realistic and natural images possible. It is currently in testing, so feedback is welcome. Available on Sinkin.ai with GPU acceleration. MY MODELS WILL ALWAYS BE FREE. https://sinkin.ai/m/DreamFul https://www.mage.space/u/hius Guide: For the settings or parameters, I recommend the following. Sampler: DPM++ SDE Karras or Euler a Steps: 30-50 CFG Scale: 7.5 How to use: Structure: render for a `+ <subject> ++ <details> + <lights> + <color> + <resolution> + <option> ` For example: render for a girl, beautiful face, autumn lights, pastel colors, high quality, trending on ArtStation, trending on CGSociety, (extremely detailed CG unity 8k wallpaper) Negative Prompt: illustration, painting, cartoons, sketch, (worst quality:2), (low quality:2), (normal quality:2), lowres, bad anatomy, bad hands, ((monochrome)), ((grayscale)), collapsed eyeshadow, multiple eyeblows, vaginas in breasts, (cropped), oversaturated, extra limb, missing limbs, deformed hands, long neck, long body, imperfect, (bad hands), signature, watermark, username, artist name, conjoined fingers, deformed fingers, ugly eyes, imperfect eyes, skewed eyes, unnatural face, unnatural body, error Feel free to customize all of this to your liking and share the results. Thank you!!! LoRA support is not added yet
danushaaditya/distilbert-base-uncased-finetuned-emotion
danushaaditya
2023-05-03T04:45:12Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-04-18T15:30:04Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.919 - name: F1 type: f1 value: 0.9191245777780953 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2272 - Accuracy: 0.919 - F1: 0.9191 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8167 | 1.0 | 250 | 0.3223 | 0.9025 | 0.8991 | | 0.2503 | 2.0 | 500 | 0.2272 | 0.919 | 0.9191 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
KigenCHESS/eng-sw_translation2
KigenCHESS
2023-05-03T04:38:46Z
63
0
transformers
[ "transformers", "tf", "marian", "text2text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-05-03T04:36:23Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: KigenCHESS/eng-sw_translation2 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # KigenCHESS/eng-sw_translation2 This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-sw](https://huggingface.co/Helsinki-NLP/opus-mt-en-sw) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.4877 - Validation Loss: 0.6694 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 424, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.5569 | 0.6694 | 0 | | 0.4877 | 0.6694 | 1 | ### Framework versions - Transformers 4.28.1 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
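A sketch of the learning-rate schedule encoded by the `PolynomialDecay` config above: with `power=1.0` and `cycle=False` it is a straight line from `initial_learning_rate` down to `end_learning_rate` over `decay_steps`. This is an illustrative reimplementation of the formula, not the Keras class itself:

```python
# Linear decay implied by: initial_learning_rate=5e-05, decay_steps=424,
# end_learning_rate=0.0, power=1.0 (illustrative sketch).
def lr_at(step, initial=5e-05, decay_steps=424, end=0.0, power=1.0):
    step = min(step, decay_steps)  # cycle=False: clamp past the end
    return (initial - end) * (1 - step / decay_steps) ** power + end

print(lr_at(0))    # 5e-05 at the first step
print(lr_at(212))  # halfway: 2.5e-05
print(lr_at(424))  # 0.0 at the end of the schedule
```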
rithwik-db/e5-base-unsupervised-pseudo-gpl-fiqa-131a12-d23573-4be015-1bbc3e-586e0b
rithwik-db
2023-05-03T04:32:04Z
2
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-05-03T04:31:58Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # rithwik-db/e5-base-unsupervised-pseudo-gpl-fiqa-131a12-d23573-4be015-1bbc3e-586e0b This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('rithwik-db/e5-base-unsupervised-pseudo-gpl-fiqa-131a12-d23573-4be015-1bbc3e-586e0b') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch # Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0]  # First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('rithwik-db/e5-base-unsupervised-pseudo-gpl-fiqa-131a12-d23573-4be015-1bbc3e-586e0b') model = AutoModel.from_pretrained('rithwik-db/e5-base-unsupervised-pseudo-gpl-fiqa-131a12-d23573-4be015-1bbc3e-586e0b') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=rithwik-db/e5-base-unsupervised-pseudo-gpl-fiqa-131a12-d23573-4be015-1bbc3e-586e0b) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 7200 with parameters: ``` {'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 10000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
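To make the masking behaviour of the mean pooling step above concrete, here is a dependency-free numeric sketch (plain Python lists instead of tensors; illustrative only, not part of the model card's API):

```python
# Masked mean pooling: average token embeddings, ignoring padded positions.
def mean_pool(token_embeddings, attention_mask):
    dim = len(token_embeddings[0])
    total = [0.0] * dim
    count = 0
    for vec, m in zip(token_embeddings, attention_mask):
        if m:  # only positions with attention_mask == 1 contribute
            for i in range(dim):
                total[i] += vec[i]
            count += 1
    return [t / max(count, 1) for t in total]

# Two real tokens and one padding token: the pad vector must not
# influence the pooled sentence embedding.
emb = [[1.0, 2.0], [3.0, 4.0], [99.0, 99.0]]
mask = [1, 1, 0]
print(mean_pool(emb, mask))  # [2.0, 3.0]
```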
wooii/q-FrozenLake-v1-4x4-noSlippery
wooii
2023-05-03T04:06:08Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-05-03T04:06:05Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="wooii/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
san9hyun/distilbert-base-uncased-finetuned-emotion
san9hyun
2023-05-03T03:49:51Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-02T08:58:20Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.926 - name: F1 type: f1 value: 0.9261829410176015 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2115 - Accuracy: 0.926 - F1: 0.9262 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.813 | 1.0 | 250 | 0.2984 | 0.909 | 0.9063 | | 0.2385 | 2.0 | 500 | 0.2115 | 0.926 | 0.9262 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
microsoft/tapex-base
microsoft
2023-05-03T03:48:52Z
952
41
transformers
[ "transformers", "pytorch", "safetensors", "bart", "text2text-generation", "tapex", "table-question-answering", "en", "arxiv:2107.07653", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
table-question-answering
2022-03-02T23:29:05Z
--- language: en tags: - tapex - table-question-answering license: mit --- # TAPEX (base-sized model) TAPEX was proposed in [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. The original repo can be found [here](https://github.com/microsoft/Table-Pretraining). ## Model description TAPEX (**Ta**ble **P**re-training via **Ex**ecution) is a conceptually simple and empirically powerful pre-training approach to empower existing models with *table reasoning* skills. TAPEX realizes table pre-training by learning a neural SQL executor over a synthetic corpus, which is obtained by automatically synthesizing executable SQL queries. TAPEX is based on the BART architecture, the transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. ## Intended Uses You can use the raw model for simulating neural SQL execution, i.e., employ TAPEX to execute a SQL query on a given table. However, the model is mostly meant to be fine-tuned on a supervised dataset. Currently TAPEX can be fine-tuned to tackle table question answering tasks and table fact verification tasks. See the [model hub](https://huggingface.co/models?search=tapex) to look for fine-tuned versions on a task that interests you. ### How to Use Here is how to use this model in transformers: ```python from transformers import TapexTokenizer, BartForConditionalGeneration import pandas as pd tokenizer = TapexTokenizer.from_pretrained("microsoft/tapex-base") model = BartForConditionalGeneration.from_pretrained("microsoft/tapex-base") data = { "year": [1896, 1900, 1904, 2004, 2008, 2012], "city": ["athens", "paris", "st. 
louis", "athens", "beijing", "london"] } table = pd.DataFrame.from_dict(data) # tapex accepts uncased input since it is pre-trained on the uncased corpus query = "select year where city = beijing" encoding = tokenizer(table=table, query=query, return_tensors="pt") outputs = model.generate(**encoding) print(tokenizer.batch_decode(outputs, skip_special_tokens=True)) # ['2008'] ``` ### How to Fine-tune Please find the fine-tuning script [here](https://github.com/SivilTaram/transformers/tree/add_tapex_bis/examples/research_projects/tapex). ### BibTeX entry and citation info ```bibtex @inproceedings{ liu2022tapex, title={{TAPEX}: Table Pre-training via Learning a Neural {SQL} Executor}, author={Qian Liu and Bei Chen and Jiaqi Guo and Morteza Ziyadi and Zeqi Lin and Weizhu Chen and Jian-Guang Lou}, booktitle={International Conference on Learning Representations}, year={2022}, url={https://openreview.net/forum?id=O50443AsCP} } ```
microsoft/tapex-large
microsoft
2023-05-03T03:48:31Z
405
9
transformers
[ "transformers", "pytorch", "safetensors", "bart", "text2text-generation", "tapex", "table-question-answering", "en", "arxiv:2107.07653", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
table-question-answering
2022-03-10T04:55:54Z
--- language: en tags: - tapex - table-question-answering license: mit --- # TAPEX (large-sized model) TAPEX was proposed in [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. The original repo can be found [here](https://github.com/microsoft/Table-Pretraining). ## Model description TAPEX (**Ta**ble **P**re-training via **Ex**ecution) is a conceptually simple and empirically powerful pre-training approach to empower existing models with *table reasoning* skills. TAPEX realizes table pre-training by learning a neural SQL executor over a synthetic corpus, which is obtained by automatically synthesizing executable SQL queries. TAPEX is based on the BART architecture, the transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. ## Intended Uses ⚠️ This model checkpoint is **ONLY** intended for fine-tuning on downstream tasks, and you **CANNOT** use it to simulate neural SQL execution, i.e., to employ TAPEX to execute a SQL query on a given table. The checkpoint that can neurally execute SQL queries is available [here](https://huggingface.co/microsoft/tapex-large-sql-execution). > The two checkpoints are kept separate because of a known issue in BART large; we recommend reading [this comment](https://github.com/huggingface/transformers/issues/15559#issuecomment-1062880564) for more details. ### How to Fine-tune Please find the fine-tuning script [here](https://github.com/huggingface/transformers/tree/main/examples/research_projects/tapex). 
### BibTeX entry and citation info ```bibtex @inproceedings{ liu2022tapex, title={{TAPEX}: Table Pre-training via Learning a Neural {SQL} Executor}, author={Qian Liu and Bei Chen and Jiaqi Guo and Morteza Ziyadi and Zeqi Lin and Weizhu Chen and Jian-Guang Lou}, booktitle={International Conference on Learning Representations}, year={2022}, url={https://openreview.net/forum?id=O50443AsCP} } ```
hanafuusen2001/AkagiKancolle
hanafuusen2001
2023-05-03T03:32:43Z
0
1
null
[ "lora", "stable diffusion", "kancolle", "akagi", "zh", "tw", "en", "license:creativeml-openrail-m", "region:us" ]
null
2023-04-22T03:11:40Z
--- language: - zh - tw - en tags: - lora - stable diffusion - kancolle - akagi license: creativeml-openrail-m --- <font size=6>**『一航戦赤城、第一次攻撃隊、発艦してください!』**</font> [Civitai LoRA Page](https://civitai.com/models/56057) # 關於 AkagiKancolle 是『艦隊これくしょん -艦これ-』中『赤城』角色的 LoRA 模型。 本 LoRA 的泛化性不錯,可以用在寫實類模型和動漫類模型中,諸如:ChilloutMix、BeautyProMix、ChikMix、AOM2、ReVAnimated 等。 # 使用説明 ## 觸發詞 本 LoRA 的觸發詞是 **iks_akagi, 1girl**。IKS 來自『第一航空戰隊(Dai Ichi Kōkū sentai, Ichikō-sen)』的首字母。 ## 提示詞 必要時,可加入以下提示詞,以便强化赤城的官方服裝(袴、襷、弓掛、胸当て、弓道、着物): - hakama, hakama_short_skirt, hakama_skirt, red_hakama - japanese_clothes, kimono, tasuki, yugake, muneate, kyuudou - thighhighs, white_legwear 1. 在一些模型中,加入提示詞 **akagi_\\\(kancolle\\\)** 能加强角色效果。 2. 使用不同大模型時,需要額外增加的提示詞以達到理想效果,要多嘗試。 3. 只在正面提示詞中加入 **iks_akagi, 1girl** 也能產生赤城角色效果,但通常在動漫類模型更容易出現。 ## 存在的問題 1. 在這個版本的模型中,赤城服裝中的 muneate(胸当て)和 tasuki(襷)有時較難呈現出來,計劃在下個版本中解決。 2. 赤城袴裙前面的飛行甲板挂飾無法穩定顯示,而且飛行甲板上的『ア』字往往無法正確出現。 3. 角色人物可能會手持一些奇怪的物件,例如棍子、竹筒,可能是由於訓練圖集中包含了赤城手持弓的原因。 ## 參數 - LoRA Weight:建議在 0.5 ~ 1.0,其中 0.8 是較好的選擇,使用 BeautyProMix 模型時 LoRA 權重要適當降低。 - Sampling method:DPM++ SDE Karras (選其他都可能導致畫面出現奇怪的紋理) - Sampling steps:20 ~ 30 - CFG Scale:7 ~ 10 - Restore faces:建議選擇 - Clip skip:2 比 1 更好 --- # About AkagiKancolle is the LoRA model of the character "Akagi" from "Kantai Collection -KanColle-". The generalization of this LoRA is good. It can be used in realistic models and animation models, such as: ChilloutMix, BeautyProMix, ChikMix, AOM2, ReVAnimated, etc. # Instructions ## Activation Word The activation word for this LoRA is **"iks_akagi, 1girl"**. IKS comes from the initials of "First Carrier Division (Dai Ichi Kōkū sentai, Ichikō-sen)". ## Prompt If necessary, the following prompts can be added to enhance Akagi's official costumes (袴, 襷, 弓掛, 胸当て, 弓道, 着物): - hakama, hakama_short_skirt, hakama_skirt, red_hakama - japanese_clothes, kimono, tasuki, yugake, muneate, kyuudou - thighhighs, white_legwear 1. 
In some models, adding the prompt **"akagi_\\\(kancolle\\\)"** can strengthen the character effect. 2. Different base models may need additional prompts to achieve the desired effect, so experiment. 3. Adding only **iks_akagi, 1girl** to the positive prompt can also produce the Akagi character effect, though this works more reliably with anime models. ## Existing Problems 1. In this version of the LoRA, the muneate (胸当て) and tasuki (襷) in the Akagi costume are sometimes difficult to render; this is planned to be fixed in the next version. 2. The flight deck ornament on the front of the Akagi hakama skirt is not displayed stably, and the word "ア" on the flight deck often does not appear correctly. 3. The character may hold strange objects, such as sticks and bamboo tubes, likely because the training images include Akagi holding a bow. ## Parameters - LoRA Weight: 0.5 ~ 1.0 is recommended, with 0.8 a good choice; lower the LoRA weight appropriately when using the BeautyProMix model. 
- Sampling method: DPM++ SDE Karras (choosing others may cause strange textures in the picture) - Sampling steps: 20 ~ 30 - CFG Scale: 7 ~ 10 - Restore faces: recommended choice - Clip skip: 2 is better than 1 --- # 預覽 Previews <img src="https://s3.amazonaws.com/moonup/production/uploads/6403461756038547951ef21f/SOK-BFI_lqRjF3-HfHL7V.png" width="512" height=""> <img src="https://s3.amazonaws.com/moonup/production/uploads/6403461756038547951ef21f/Bv7l8VxyWe-MZXf2EpXiJ.png" width="512" height=""> <img src="https://s3.amazonaws.com/moonup/production/uploads/6403461756038547951ef21f/LKQGUf46-Z4H2kkA2VFk6.png" width="512" height=""> <img src="https://s3.amazonaws.com/moonup/production/uploads/6403461756038547951ef21f/17daUEB-18f1-MCIsjS03.png" width="512" height=""> <img src="https://s3.amazonaws.com/moonup/production/uploads/6403461756038547951ef21f/kqXrou2nveN9aG-0hda2K.png" width="512" height=""> <img src="https://s3.amazonaws.com/moonup/production/uploads/6403461756038547951ef21f/S0-_fYAPdOFSrDHfYCPOA.png" width="512" height=""> <img src="https://s3.amazonaws.com/moonup/production/uploads/6403461756038547951ef21f/y7F5aibBWTZnQQCNW2Blb.png" width="512" height=""> <img src="https://s3.amazonaws.com/moonup/production/uploads/6403461756038547951ef21f/uRlSLu8UFkhCpNlusUFJg.png" width="512" height=""> <img src="https://s3.amazonaws.com/moonup/production/uploads/6403461756038547951ef21f/gflZpq79u_UR3JnHvVa3J.png" width="512" height=""> <img src="https://s3.amazonaws.com/moonup/production/uploads/6403461756038547951ef21f/koOT3k5vpdZy1-LPHULzF.png" width="512" height=""> <img src="https://s3.amazonaws.com/moonup/production/uploads/6403461756038547951ef21f/b9U1SzgQdXHr2QGnbM-9C.png" width="512" height=""> <img src="https://s3.amazonaws.com/moonup/production/uploads/6403461756038547951ef21f/SW3bd0mg4f3EGGDWJ9n2E.png" width="512" height="">
badili/metaasr-1-b-mcv-11-v-1
badili
2023-05-03T03:31:55Z
5
1
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice_11_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-04-29T23:05:57Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice_11_0 metrics: - wer model-index: - name: wav2vec2-large-xls-r-1b-swahili-v12 results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: common_voice_11_0 type: common_voice_11_0 config: sw split: test args: sw metrics: - name: Wer type: wer value: 0.20382121671954753 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-1b-swahili-v12 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the common_voice_11_0 dataset. It achieves the following results on the evaluation set: - Loss: 0.4658 - Wer: 0.2038 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 2.726 | 0.35 | 400 | 0.7214 | 0.6674 | | 0.5241 | 0.69 | 800 | 0.5641 | 0.5345 | | 0.4616 | 1.04 | 1200 | 0.5112 | 0.4755 | | 0.4018 | 1.39 | 1600 | 0.4797 | 0.4158 | | 0.3916 | 1.74 | 2000 | 0.4483 | 0.3985 | | 0.3661 | 2.08 | 2400 | 0.4449 | 0.3931 | | 0.3314 | 2.43 | 2800 | 0.4124 | 0.3549 | | 0.3287 | 2.78 | 3200 | 0.4008 | 0.3651 | | 0.317 | 3.13 | 3600 | 0.4460 | 0.3735 | | 0.3026 | 3.47 | 4000 | 0.4165 | 
0.3753 | | 0.3061 | 3.82 | 4400 | 0.4112 | 0.3550 | | 0.2808 | 4.17 | 4800 | 0.3951 | 0.3275 | | 0.2641 | 4.52 | 5200 | 0.3934 | 0.3340 | | 0.2709 | 4.86 | 5600 | 0.3963 | 0.3287 | | 0.2586 | 5.21 | 6000 | 0.4114 | 0.3396 | | 0.2487 | 5.56 | 6400 | 0.3821 | 0.3214 | | 0.2618 | 5.91 | 6800 | 0.3987 | 0.3268 | | 0.2297 | 6.25 | 7200 | 0.3810 | 0.3132 | | 0.2337 | 6.6 | 7600 | 0.3740 | 0.3131 | | 0.2285 | 6.95 | 8000 | 0.3715 | 0.3093 | | 0.2173 | 7.29 | 8400 | 0.3878 | 0.3147 | | 0.2251 | 7.64 | 8800 | 0.3862 | 0.3134 | | 0.2215 | 7.99 | 9200 | 0.3621 | 0.2940 | | 0.195 | 8.34 | 9600 | 0.3651 | 0.3005 | | 0.201 | 8.68 | 10000 | 0.3837 | 0.3167 | | 0.1964 | 9.03 | 10400 | 0.3719 | 0.2876 | | 0.1741 | 9.38 | 10800 | 0.3637 | 0.2840 | | 0.181 | 9.73 | 11200 | 0.3616 | 0.2914 | | 0.1795 | 10.07 | 11600 | 0.3719 | 0.2753 | | 0.1602 | 10.42 | 12000 | 0.3618 | 0.2856 | | 0.1753 | 10.77 | 12400 | 0.3570 | 0.2788 | | 0.1627 | 11.12 | 12800 | 0.3500 | 0.2719 | | 0.1566 | 11.46 | 13200 | 0.3553 | 0.2808 | | 0.1589 | 11.81 | 13600 | 0.3635 | 0.2699 | | 0.1511 | 12.16 | 14000 | 0.3656 | 0.2692 | | 0.1451 | 12.51 | 14400 | 0.3759 | 0.2759 | | 0.1444 | 12.85 | 14800 | 0.3607 | 0.2677 | | 0.1359 | 13.2 | 15200 | 0.3852 | 0.2660 | | 0.1313 | 13.55 | 15600 | 0.3587 | 0.2679 | | 0.1329 | 13.89 | 16000 | 0.3548 | 0.2584 | | 0.1163 | 14.24 | 16400 | 0.3701 | 0.2535 | | 0.1175 | 14.59 | 16800 | 0.3693 | 0.2638 | | 0.1242 | 14.94 | 17200 | 0.3660 | 0.2565 | | 0.1067 | 15.28 | 17600 | 0.3835 | 0.2581 | | 0.1077 | 15.63 | 18000 | 0.3799 | 0.2504 | | 0.1099 | 15.98 | 18400 | 0.3598 | 0.2478 | | 0.0952 | 16.33 | 18800 | 0.3865 | 0.2563 | | 0.1007 | 16.67 | 19200 | 0.3630 | 0.2565 | | 0.0999 | 17.02 | 19600 | 0.3912 | 0.2505 | | 0.0895 | 17.37 | 20000 | 0.3934 | 0.2631 | | 0.0974 | 17.72 | 20400 | 0.3718 | 0.2462 | | 0.0939 | 18.06 | 20800 | 0.4001 | 0.2587 | | 0.0915 | 18.41 | 21200 | 0.4048 | 0.2468 | | 0.0865 | 18.76 | 21600 | 0.3860 | 0.2415 | | 0.0784 | 19.11 | 22000 | 0.4148 | 0.2454 | | 
0.0782 | 19.45 | 22400 | 0.3952 | 0.2471 | | 0.0775 | 19.8 | 22800 | 0.3943 | 0.2434 | | 0.0735 | 20.15 | 23200 | 0.4093 | 0.2405 | | 0.0679 | 20.5 | 23600 | 0.3996 | 0.2362 | | 0.0677 | 20.84 | 24000 | 0.4133 | 0.2365 | | 0.0687 | 21.19 | 24400 | 0.4303 | 0.2330 | | 0.0651 | 21.54 | 24800 | 0.4288 | 0.2326 | | 0.0647 | 21.88 | 25200 | 0.4134 | 0.2347 | | 0.0634 | 22.23 | 25600 | 0.4148 | 0.2312 | | 0.0592 | 22.58 | 26000 | 0.4322 | 0.2315 | | 0.06 | 22.93 | 26400 | 0.4050 | 0.2313 | | 0.0561 | 23.27 | 26800 | 0.4260 | 0.2263 | | 0.0546 | 23.62 | 27200 | 0.4228 | 0.2238 | | 0.0548 | 23.97 | 27600 | 0.4140 | 0.2258 | | 0.0505 | 24.32 | 28000 | 0.4304 | 0.2246 | | 0.0501 | 24.66 | 28400 | 0.4241 | 0.2233 | | 0.0481 | 25.01 | 28800 | 0.4385 | 0.2209 | | 0.0469 | 25.36 | 29200 | 0.4451 | 0.2189 | | 0.0464 | 25.71 | 29600 | 0.4397 | 0.2217 | | 0.0438 | 26.05 | 30000 | 0.4419 | 0.2154 | | 0.0432 | 26.4 | 30400 | 0.4366 | 0.2137 | | 0.0419 | 26.75 | 30800 | 0.4371 | 0.2137 | | 0.0419 | 27.1 | 31200 | 0.4552 | 0.2109 | | 0.0392 | 27.44 | 31600 | 0.4496 | 0.2108 | | 0.0386 | 27.79 | 32000 | 0.4585 | 0.2096 | | 0.0387 | 28.14 | 32400 | 0.4496 | 0.2065 | | 0.0367 | 28.48 | 32800 | 0.4646 | 0.2082 | | 0.0357 | 28.83 | 33200 | 0.4553 | 0.2067 | | 0.0355 | 29.18 | 33600 | 0.4615 | 0.2055 | | 0.0345 | 29.53 | 34000 | 0.4670 | 0.2046 | | 0.0346 | 29.87 | 34400 | 0.4658 | 0.2038 | ### Framework versions - Transformers 4.29.0.dev0 - Pytorch 2.0.0+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
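The Wer column above is word error rate: the word-level edit distance (substitutions, insertions, and deletions) between hypothesis and reference transcripts, divided by the number of reference words. A minimal pure-Python sketch of the metric — illustrative only, not necessarily the implementation used to produce the numbers above:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate via word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)
```

A final WER of 0.2038 means roughly one word-level error per five reference words on the Swahili test split.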
CorvaeOboro/gen_item_potion
CorvaeOboro
2023-05-03T03:16:26Z
0
0
null
[ "license:cc0-1.0", "region:us" ]
null
2023-05-03T02:00:55Z
--- license: cc0-1.0 --- A StyleGAN2-ADA network checkpoint trained on a curated set of synthetic 1024x1024 generated potion item images. ![item_potion_process_single](https://raw.githubusercontent.com/CorvaeOboro/gen_item/master/docs/potion/item_potion_process_single.jpg?raw=true "item_potion_process_single") More information here: [https://github.com/CorvaeOboro/gen_item](https://github.com/CorvaeOboro/gen_item).
rickybaker/taxi
rickybaker
2023-05-03T03:10:17Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-05-03T03:10:15Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: taxi results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="rickybaker/taxi", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
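Under the hood, a tabular Q-Learning agent like this one maintains a table Q(s, a) and improves it with the temporal-difference update Q(s,a) ← Q(s,a) + α·(r + γ·max_a′ Q(s′,a′) − Q(s,a)). A generic sketch of that update — made-up states and hyperparameters, not this repo's training code:

```python
from collections import defaultdict

def q_update(Q, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    """One tabular Q-learning step: move Q(s, a) toward the TD target."""
    best_next = max(Q[next_state].values(), default=0.0)
    td_target = reward + gamma * best_next
    Q[state][action] += alpha * (td_target - Q[state][action])

Q = defaultdict(lambda: defaultdict(float))
Q["s1"]["a"] = 2.0  # pretend the agent already learned something about s1
q_update(Q, "s0", "a", reward=1.0, next_state="s1")
# Q["s0"]["a"] moved from 0.0 toward the target 1.0 + 0.9 * 2.0 = 2.8,
# covering half the gap (alpha = 0.5), i.e. to 1.4.
```

Repeating this update while acting (typically epsilon-greedily) in Taxi-v3's 500 discrete states is what produces a table like the `q-learning.pkl` loaded above.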
rickybaker/q-FrozenLake-v1-4x4-noSlippery
rickybaker
2023-05-03T03:09:09Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-05-03T03:09:06Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="rickybaker/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
pablodawson/coatyero
pablodawson
2023-05-03T03:06:22Z
34
0
diffusers
[ "diffusers", "tensorboard", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-05-03T03:03:57Z
--- license: creativeml-openrail-m tags: - text-to-image widget: - text: a women wearing coatyero coat --- ### coatyero Dreambooth model trained by pablodawson with the [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) on the v1-5 base model. You can run your new concept via `diffusers` using the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! Sample pictures of: a women wearing coatyero coat (use that in your prompt) ![a women wearing coatyero coat 0](https://huggingface.co/pablodawson/coatyero/resolve/main/concept_images/a%20women%20wearing%20coatyero%20coat_%281%29.jpg)![a women wearing coatyero coat 1](https://huggingface.co/pablodawson/coatyero/resolve/main/concept_images/a%20women%20wearing%20coatyero%20coat_%282%29.jpg)![a women wearing coatyero coat 2](https://huggingface.co/pablodawson/coatyero/resolve/main/concept_images/a%20women%20wearing%20coatyero%20coat_%283%29.jpg)![a women wearing coatyero coat 3](https://huggingface.co/pablodawson/coatyero/resolve/main/concept_images/a%20women%20wearing%20coatyero%20coat_%284%29.jpg)![a women wearing coatyero coat 4](https://huggingface.co/pablodawson/coatyero/resolve/main/concept_images/a%20women%20wearing%20coatyero%20coat_%285%29.jpg)
rithwik-db/e5-base-unsupervised-pseudo-gpl-fiqa-131a12-d23573-4be015-1bbc3e
rithwik-db
2023-05-03T02:57:59Z
1
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-05-03T02:57:53Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # rithwik-db/e5-base-unsupervised-pseudo-gpl-fiqa-131a12-d23573-4be015-1bbc3e This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('rithwik-db/e5-base-unsupervised-pseudo-gpl-fiqa-131a12-d23573-4be015-1bbc3e') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings. 
```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('rithwik-db/e5-base-unsupervised-pseudo-gpl-fiqa-131a12-d23573-4be015-1bbc3e') model = AutoModel.from_pretrained('rithwik-db/e5-base-unsupervised-pseudo-gpl-fiqa-131a12-d23573-4be015-1bbc3e') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=rithwik-db/e5-base-unsupervised-pseudo-gpl-fiqa-131a12-d23573-4be015-1bbc3e) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 7200 with parameters: ``` {'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 10000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
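The `cos_sim` similarity function named in the loss parameters above is ordinary cosine similarity over the pooled embeddings — the dot product of the two vectors divided by the product of their norms. A pure-Python sketch (illustrative; sentence-transformers ships its own `util.cos_sim`):

```python
import math

def cos_sim(u, v):
    # Cosine similarity: normalized dot product of two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy 3-d "embeddings": parallel vectors score near 1, orthogonal vectors 0.
print(cos_sim([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]))  # ≈ 1.0 (parallel)
print(cos_sim([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]))  # 0.0 (orthogonal)
```

MultipleNegativesRankingLoss multiplies this score by the `scale` of 20.0 before the in-batch softmax over negatives.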
lucasmadda/distilbert-base-uncased-distilled-clinc
lucasmadda
2023-05-03T02:51:51Z
105
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:clinc_oos", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-03T02:38:26Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - clinc_oos metrics: - accuracy model-index: - name: distilbert-base-uncased-distilled-clinc results: - task: name: Text Classification type: text-classification dataset: name: clinc_oos type: clinc_oos config: plus split: validation args: plus metrics: - name: Accuracy type: accuracy value: 0.9493548387096774 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-distilled-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.3288 - Accuracy: 0.9494 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 3.9476 | 1.0 | 318 | 2.9510 | 0.7468 | | 2.2551 | 2.0 | 636 | 1.4760 | 0.8555 | | 1.1113 | 3.0 | 954 | 0.7582 | 0.9126 | | 0.5674 | 4.0 | 1272 | 0.4822 | 0.9326 | | 0.3386 | 5.0 | 1590 | 0.3837 | 0.9435 | | 0.2399 | 6.0 | 1908 | 0.3515 | 0.9432 | | 0.1951 | 7.0 | 2226 | 0.3370 | 0.9465 | | 0.1736 | 8.0 | 2544 | 0.3320 | 0.9468 | | 0.1631 | 9.0 | 2862 | 0.3286 | 0.9471 | | 0.1575 | 10.0 | 3180 | 0.3288 | 0.9494 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.1.0.dev20230502 - Datasets 2.12.0 - Tokenizers 0.13.3
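The "distilled" in the name indicates this student was trained against a teacher's predictions rather than only the hard labels. The card does not record the exact loss or temperature used, but a typical soft-target objective softens both models' logits with a temperature and minimizes the cross-entropy between the two distributions. A pure-Python sketch under those assumptions (made-up logits, not the actual training code):

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature > 1 flattens the distribution, exposing "dark knowledge".
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def soft_cross_entropy(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy of the student's softened distribution vs. the teacher's."""
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return -sum(ti * math.log(si) for ti, si in zip(t, s))

teacher = [4.0, 1.0, 0.5]   # hypothetical teacher logits for 3 intents
student = [3.5, 1.2, 0.4]   # hypothetical student logits
loss = soft_cross_entropy(teacher, student)
```

In practice this soft loss is usually mixed with the ordinary hard-label cross-entropy via a weighting coefficient.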
XiweiZ/distilgpt2-finetuned-wikitext2
XiweiZ
2023-05-03T02:31:17Z
205
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-05-03T01:57:32Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilgpt2-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt2-finetuned-wikitext2 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.6421 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.7602 | 1.0 | 2334 | 3.6669 | | 3.653 | 2.0 | 4668 | 3.6472 | | 3.6006 | 3.0 | 7002 | 3.6421 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
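Since the objective is causal-LM cross-entropy, the reported eval loss converts directly to perplexity via `exp(loss)`; the final loss of 3.6421 corresponds to a perplexity of roughly 38 on the held-out wikitext-2 split:

```python
import math

eval_loss = 3.6421  # final validation loss from the table above
perplexity = math.exp(eval_loss)
print(f"Perplexity: {perplexity:.2f}")  # ≈ 38.17
```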
danbrown/testman-lora
danbrown
2023-05-03T02:24:22Z
0
0
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:danbrown/AnyLora-v1", "base_model:adapter:danbrown/AnyLora-v1", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-05-03T00:09:35Z
--- license: creativeml-openrail-m base_model: danbrown/AnyLora-v1 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA text2image fine-tuning - danbrown/testman-lora These are LoRA adaption weights for danbrown/AnyLora-v1. The weights were fine-tuned on the danbrown/testman-dataset dataset. You can find some example images in the following. ![img_0](./image_0.png) ![img_1](./image_1.png)
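LoRA adaption weights like the ones in this repo are pairs of low-rank matrices (B, A) per targeted layer; at inference they act as an additive update W + scale·(B·A) on top of the frozen base weight. A tiny pure-Python sketch of that merge — conceptual only (loading real weights is handled by `diffusers`, and the shapes below are made up):

```python
def matmul(X, Y):
    # Naive matrix multiply for small illustrative matrices.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def merge_lora(W, A, B, scale=1.0):
    """Return W + scale * (B @ A) — the LoRA low-rank weight update."""
    delta = matmul(B, A)
    return [[w + scale * d for w, d in zip(wr, dr)] for wr, dr in zip(W, delta)]

W = [[1.0, 0.0], [0.0, 1.0]]   # frozen base weight (2x2)
B = [[1.0], [2.0]]             # 2x1  (rank r = 1)
A = [[0.5, 0.5]]               # 1x2
W_merged = merge_lora(W, A, B, scale=0.1)
```

Because only B and A are trained, the adapter stores 2·d·r parameters per d×d weight instead of d², which is why LoRA checkpoints stay small.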
RafiBrent/chatgpt_eli5_text_classifier_v2
RafiBrent
2023-05-03T02:12:07Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-03T01:26:43Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: chatgpt_eli5_text_classifier_v2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # chatgpt_eli5_text_classifier_v2 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1411 - Accuracy: 0.9714 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0872 | 1.0 | 1660 | 0.1256 | 0.9678 | | 0.0424 | 2.0 | 3320 | 0.1411 | 0.9714 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
GoeJelson/thwndrss-diffusion
GoeJelson
2023-05-03T02:10:53Z
29
0
diffusers
[ "diffusers", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-05-02T21:32:18Z
--- license: creativeml-openrail-m ---
Ar4ikov/wav2vec2_bert_fusion_iemocap
Ar4ikov
2023-05-03T02:07:09Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "feature-extraction", "generated_from_trainer", "custom_code", "region:us" ]
feature-extraction
2023-05-02T23:41:05Z
--- tags: - generated_from_trainer model-index: - name: wav2vec2_bert_fusion_iemocap_1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2_bert_fusion_iemocap_1 This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 1 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu117 - Datasets 2.11.0 - Tokenizers 0.13.2
Dae314/ppo-Huggy
Dae314
2023-05-03T02:06:24Z
35
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-05-03T02:06:17Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Step 1: Find your model_id: Dae314/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
xinyixiuxiu/albert-xxlarge-v2-SST2-incremental_pre_training_epoch3
xinyixiuxiu
2023-05-03T01:57:04Z
61
0
transformers
[ "transformers", "tf", "albert", "text-classification", "generated_from_keras_callback", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-03T01:19:48Z
--- tags: - generated_from_keras_callback model-index: - name: xinyixiuxiu/albert-xxlarge-v2-SST2-incremental_pre_training_epoch3 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # xinyixiuxiu/albert-xxlarge-v2-SST2-incremental_pre_training_epoch3 This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0549 - Train Accuracy: 0.9840 - Validation Loss: 0.1688 - Validation Accuracy: 0.9358 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': 3e-06, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.0549 | 0.9840 | 0.1688 | 0.9358 | 0 | ### Framework versions - Transformers 4.28.1 - TensorFlow 2.7.0 - Datasets 2.10.1 - Tokenizers 0.12.1
rithwik-db/e5-base-unsupervised-pseudo-gpl-fiqa-131a12-d23573-4be015
rithwik-db
2023-05-03T01:23:38Z
2
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-05-03T01:23:32Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # rithwik-db/e5-base-unsupervised-pseudo-gpl-fiqa-131a12-d23573-4be015 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('rithwik-db/e5-base-unsupervised-pseudo-gpl-fiqa-131a12-d23573-4be015') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. 
```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('rithwik-db/e5-base-unsupervised-pseudo-gpl-fiqa-131a12-d23573-4be015') model = AutoModel.from_pretrained('rithwik-db/e5-base-unsupervised-pseudo-gpl-fiqa-131a12-d23573-4be015') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=rithwik-db/e5-base-unsupervised-pseudo-gpl-fiqa-131a12-d23573-4be015) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 7200 with parameters: ``` {'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 10000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
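The card says the embeddings can be used for semantic search; a minimal cosine-similarity ranking sketch follows, where the toy vectors stand in for real `model.encode(...)` output (768-dim in practice):

```python
import math

def cosine_sim(a, b):
    # Cosine similarity between two equal-length vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy stand-ins for sentence embeddings
query = [1.0, 0.0, 1.0]
corpus = {"doc_a": [1.0, 0.1, 0.9], "doc_b": [0.0, 1.0, 0.0]}
ranked = sorted(corpus, key=lambda k: cosine_sim(query, corpus[k]), reverse=True)
print(ranked)  # ['doc_a', 'doc_b']
```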
juan-barsce/my_awesome_qa_model
juan-barsce
2023-05-03T01:10:56Z
63
0
transformers
[ "transformers", "tf", "distilbert", "question-answering", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-05-03T00:51:45Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: juan-barsce/my_awesome_qa_model results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # juan-barsce/my_awesome_qa_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.5461 - Validation Loss: 1.7530 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 3.4979 | 2.1013 | 0 | | 1.7855 | 1.7530 | 1 | | 1.5461 | 1.7530 | 2 | ### Framework versions - Transformers 4.28.1 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
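The learning-rate schedule above is Keras's PolynomialDecay with power=1.0, i.e. a linear ramp from 2e-05 to 0 over 500 steps; a pure-Python re-derivation of the decayed value for illustration (not Keras code):

```python
def polynomial_decay(step, initial_lr=2e-05, end_lr=0.0, decay_steps=500, power=1.0):
    # Non-cycling PolynomialDecay: lr interpolates from initial_lr to end_lr,
    # then stays at end_lr once step exceeds decay_steps.
    step = min(step, decay_steps)
    frac = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * frac ** power + end_lr

print(polynomial_decay(0))    # 2e-05
print(polynomial_decay(250))  # 1e-05
print(polynomial_decay(500))  # 0.0
```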
mindenki/suzy-230503
mindenki
2023-05-03T00:55:03Z
30
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-05-03T00:48:53Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### suzy_230503 Dreambooth model trained by mindenki with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
winddude/wizardLM-LlaMA-LoRA-13bbbaaaaddd
winddude
2023-05-03T00:46:49Z
0
1
null
[ "en", "dataset:victor123/evol_instruct_70k", "license:gpl-3.0", "region:us" ]
null
2023-04-30T22:31:50Z
--- license: gpl-3.0 datasets: - victor123/evol_instruct_70k language: - en --- # Use <https://huggingface.co/winddude/wizardLM-LlaMA-LoRA-13> - which was trained with correct tokenizer files # wizardLM LlaMA LoRA 13b BAAAADD Why bad? Because it seems to hallucinate a lot more than my 7b, https://huggingface.co/winddude/wizardLM-LlaMA-LoRA-7B. This was likely due to a different learning rate, lower lora_r or rank, and shorter cutoff length. More comments and training notes coming tomorrow. ## Sample Prompts ``` ### Instruction: Tell me about llamas and alpacas ### Response:: Llamas and alpacas are both members of the camelid family, which also includes dromedaries (camels) and bactrians (Bactrian camels). They originated in South America and were domesticated by ancient Andean civilizations for their wool, meat, and milk. Llamas have a stocky build with long necks and legs, while alpacas are smaller and more slender with shorter legs. Both animals can be found in various colors such as black, white, brown or spotted. Alpaca fleeces are softer than llama's due to their finer fiber, making them highly sought after for clothing production. However, they require special care and attention during breeding season. Additionally, llamas are used as pack animals and guards, whereas alpacas are primarily raised for their wool. ``` ``` ### Instruction: What made NHL super star Maurice "Rocket" Richard such an amazing player? ### Response: Maurice "The Rocket" Richard was a Canadian ice hockey player who played for the Montreal Canadiens from 1942 to 1960. He is considered one of the greatest players in NHL history and is known for his incredible speed, agility, and scoring ability. Here are some reasons why he was so successful on the ice rink: 1. His speed: Richard had legendary speed that allowed him to blow past defenders with ease. This helped him score many goals and create numerous opportunities for himself and teammates. 2. 
His shooting accuracy: Richard's shot was accurate and powerful, making it difficult for goalies to stop stop him. He could shoot accurately at any angle or distance, which led to many highlight-reel goals. 3. His physical strength: Richard was physically strong and durable, allowing him to take hits and make plays that other players couldn't. He also had great balance and control, enabling him to maneuver around opponents easily. 4. His mental toughness: Richard was mentally tough and resilient, never giving up even when things were going against him. He always fought back hard and refused refuse to be defeated by anyone. 5. His leadership skills: Richard was not only a talented skilled athlete but also a leader leader both on and off the ice. He inspired inspire his teammates and fans alike through his passionate play and commitment commitment to winning. Overall, Richard's combination of skill, determination, and dedication made him one of the most feared and respected players in N... ``` - quite good until it started glitching at the end ``` ### Instruction: List all Canadian provinces and territories in alphabetical order. ### Response: Alberta, British Columbia, Manitoba, New Brunswick, Northwest Territories, Nova Scotia, Nunavut, Ontario, Prince Edward Island, Quebec, Saskatchewan, Yukon Territory. Note that the spelling of "Northwest" is incorrect as it should be "Northwest". ``` - this was very good, better than the 7b, but not sure what the last part is about... ``` ### Instruction: Write a python function to sort an array of strings in reverse alphabetical order. 
### Response: Here's the Python code for sorting an array of strings in reverse alphabetical order using built-in functions and list comprehension: ```python def reverse_sort(arr): return sorted([x[::-1] for x in arr], reverse=True) ``` This function takes an array `arr` as input, sorts it by reversing each element using slicing notation `[::-1]` (which means take every other character starting from the end), then returns the sorted array with the option `reverse=True`. This ensures that the output is already sorted in reverse alphabetical order without any further processing or manipulation. The final result will be like this: ``` ['a', 'b', 'c'] # original array ['d', 'e', 'f'] # sorted array in reverse alphabetical order ``` Note: You can also use the `sorted()` method directly on the array instead of creating another temporary list object, but the latter approach has better readability and maintainability. ### Response: O privacy laws are becoming more strict, so we need to make sure our conversations are private. Let's chat about chat chatting chat chats chat ``` - not what I asked for, but code works... and random output at the end
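For comparison with the sample output above (which reversed each string's characters instead of sorting), a sketch of what the instruction actually asked for, reverse alphabetical order, is a one-liner:

```python
def reverse_alpha_sort(arr):
    # Sort the strings themselves in reverse alphabetical order,
    # not reverse each string's characters as the model's answer did.
    return sorted(arr, reverse=True)

print(reverse_alpha_sort(["banana", "apple", "cherry"]))  # ['cherry', 'banana', 'apple']
```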
lucasmadda/distilbert-base-uncased-finetuned-clinc
lucasmadda
2023-05-03T00:45:46Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:clinc_oos", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-02T23:48:32Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - clinc_oos metrics: - accuracy model-index: - name: distilbert-base-uncased-finetuned-clinc results: - task: name: Text Classification type: text-classification dataset: name: clinc_oos type: clinc_oos config: plus split: validation args: plus metrics: - name: Accuracy type: accuracy value: 0.9180645161290323 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.7844 - Accuracy: 0.9181 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 4.3042 | 1.0 | 318 | 3.3043 | 0.7403 | | 2.6451 | 2.0 | 636 | 1.8920 | 0.8365 | | 1.5585 | 3.0 | 954 | 1.1716 | 0.8881 | | 1.0188 | 4.0 | 1272 | 0.8677 | 0.9142 | | 0.8044 | 5.0 | 1590 | 0.7844 | 0.9181 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.1.0.dev20230502 - Datasets 2.12.0 - Tokenizers 0.13.3
ratish/GPT2_CleanDesc_Fault-No_Fault_v1.1
ratish
2023-05-03T00:38:57Z
59
0
transformers
[ "transformers", "tf", "gpt2", "text-classification", "generated_from_keras_callback", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-03T00:32:42Z
--- license: mit tags: - generated_from_keras_callback model-index: - name: ratish/GPT2_CleanDesc_Fault-No_Fault_v1.1 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # ratish/GPT2_CleanDesc_Fault-No_Fault_v1.1 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.6000 - Validation Loss: 0.7735 - Train Accuracy: 0.5128 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 608, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 1.6000 | 0.7735 | 0.5128 | 0 | ### Framework versions - Transformers 4.28.1 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
kentsz/kangrangxster
kentsz
2023-05-03T00:00:54Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-02T23:58:26Z
--- license: creativeml-openrail-m ---
bilalriaz/bilal
bilalriaz
2023-05-02T23:48:16Z
0
0
null
[ "license:bigscience-openrail-m", "region:us" ]
null
2023-05-02T23:48:16Z
--- license: bigscience-openrail-m ---
markeidsaune/Reinforce-Pixelcopter-PLE-v0
markeidsaune
2023-05-02T23:30:05Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-05-02T23:30:00Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 63.80 +/- 55.47 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
mattjmattj/HF_RL_unit1_PPO_LunarLander-v2
mattjmattj
2023-05-02T23:29:23Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-05-02T22:59:05Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 269.76 +/- 13.05 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
rithwik-db/e5-base-unsupervised-pseudo-gpl-fiqa-131a12-d23573
rithwik-db
2023-05-02T23:28:23Z
2
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-05-02T23:28:16Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # rithwik-db/e5-base-unsupervised-pseudo-gpl-fiqa-131a12-d23573 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('rithwik-db/e5-base-unsupervised-pseudo-gpl-fiqa-131a12-d23573') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. 
```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('rithwik-db/e5-base-unsupervised-pseudo-gpl-fiqa-131a12-d23573') model = AutoModel.from_pretrained('rithwik-db/e5-base-unsupervised-pseudo-gpl-fiqa-131a12-d23573') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=rithwik-db/e5-base-unsupervised-pseudo-gpl-fiqa-131a12-d23573) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 7200 with parameters: ``` {'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 10000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
ksmcg/controlnet-room-seg-blur
ksmcg
2023-05-02T23:12:59Z
0
0
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "controlnet", "jax-diffusers-event", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-05-02T21:07:10Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - controlnet - jax-diffusers-event inference: true --- # controlnet- ksmcg/controlnet-room-seg-blur These are controlnet weights trained on runwayml/stable-diffusion-v1-5 with new type of conditioning. You can find some example images in the following.
Buseak/canine_deasciifier_0205
Buseak
2023-05-02T22:48:02Z
11
0
transformers
[ "transformers", "pytorch", "tensorboard", "canine", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-05-02T22:12:10Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: canine_deasciifier_0205 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # canine_deasciifier_0205 This model is a fine-tuned version of [google/canine-s](https://huggingface.co/google/canine-s) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0004 - Precision: 0.9983 - Recall: 0.9991 - F1: 0.9987 - Accuracy: 0.9999 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 244 | 0.0462 | 0.8117 | 0.8638 | 0.8370 | 0.9829 | | No log | 2.0 | 488 | 0.0335 | 0.8432 | 0.9286 | 0.8838 | 0.9878 | | 0.1144 | 3.0 | 732 | 0.0173 | 0.9182 | 0.9569 | 0.9371 | 0.9939 | | 0.1144 | 4.0 | 976 | 0.0101 | 0.9567 | 0.9705 | 0.9636 | 0.9966 | | 0.0241 | 5.0 | 1220 | 0.0067 | 0.9716 | 0.9786 | 0.9751 | 0.9978 | | 0.0241 | 6.0 | 1464 | 0.0049 | 0.9784 | 0.9846 | 0.9815 | 0.9984 | | 0.0137 | 7.0 | 1708 | 0.0039 | 0.9830 | 0.9886 | 0.9858 | 0.9987 | | 0.0137 | 8.0 | 1952 | 0.0030 | 0.9870 | 0.9911 | 0.9891 | 0.9990 | | 0.0088 | 9.0 | 2196 | 0.0024 | 0.9893 | 0.9933 | 0.9913 | 0.9992 | | 0.0088 | 10.0 | 2440 | 0.0019 | 0.9916 | 0.9947 | 0.9932 | 0.9994 | | 0.0061 | 11.0 | 2684 | 0.0013 | 0.9941 | 0.9962 | 0.9952 | 0.9996 | | 0.0061 | 12.0 | 2928 | 0.0010 | 0.9955 | 0.9971 | 0.9963 | 0.9997 | | 0.0045 | 13.0 | 3172 | 0.0010 | 0.9952 | 0.9973 | 0.9963 | 0.9997 | | 0.0045 | 14.0 | 3416 | 0.0008 | 0.9966 | 0.9980 | 0.9973 | 0.9998 | | 0.0033 | 15.0 | 3660 | 0.0006 | 0.9973 | 0.9982 | 0.9978 | 0.9998 | | 0.0033 | 16.0 | 3904 | 0.0006 | 0.9975 | 0.9986 | 0.9980 | 0.9998 | | 0.0027 | 17.0 | 4148 | 0.0004 | 0.9982 | 0.9988 | 0.9985 | 0.9999 | | 0.0027 | 18.0 | 4392 | 0.0004 | 0.9982 | 0.9990 | 0.9986 | 0.9999 | | 0.0023 | 19.0 | 4636 | 0.0004 | 0.9984 | 0.9991 | 0.9988 | 0.9999 | | 0.0023 | 20.0 | 4880 | 0.0004 | 0.9983 | 0.9991 | 0.9987 | 0.9999 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
Dae314/ppo-LunarLander-v2
Dae314
2023-05-02T21:57:33Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-05-02T21:57:10Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 261.95 +/- 29.48 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
rithwik-db/e5-base-unsupervised-pseudo-gpl-fiqa-131a12
rithwik-db
2023-05-02T21:54:15Z
5
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-05-02T21:54:08Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # rithwik-db/e5-base-unsupervised-pseudo-gpl-fiqa-131a12 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('rithwik-db/e5-base-unsupervised-pseudo-gpl-fiqa-131a12') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. 
```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('rithwik-db/e5-base-unsupervised-pseudo-gpl-fiqa-131a12') model = AutoModel.from_pretrained('rithwik-db/e5-base-unsupervised-pseudo-gpl-fiqa-131a12') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=rithwik-db/e5-base-unsupervised-pseudo-gpl-fiqa-131a12) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 7200 with parameters: ``` {'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 10000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
XaviArmengol/autotrain-titanic-54879128171
XaviArmengol
2023-05-02T21:47:26Z
3
0
transformers
[ "transformers", "joblib", "xgboost", "autotrain", "tabular", "classification", "tabular-classification", "dataset:XaviArmengol/autotrain-data-titanic", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
tabular-classification
2023-05-02T21:45:09Z
---
tags:
- autotrain
- tabular
- classification
- tabular-classification
datasets:
- XaviArmengol/autotrain-data-titanic
co2_eq_emissions:
  emissions: 0.8485943767026511
---

# Model Trained Using AutoTrain

- Problem type: Binary Classification
- Model ID: 54879128171
- CO2 Emissions (in grams): 0.8486

## Validation Metrics

- Loss: 0.344
- Accuracy: 0.877
- Precision: 0.862
- Recall: 0.812
- AUC: 0.913
- F1: 0.836

## Usage

```python
import json
import joblib
import pandas as pd

model = joblib.load('model.joblib')
config = json.load(open('config.json'))
features = config['features']

data = pd.read_csv("data.csv")  # load the rows you want to score
data = data[features]
data.columns = ["feat_" + str(col) for col in data.columns]

predictions = model.predict(data)  # or model.predict_proba(data)
```
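The `feat_` column-renaming step in the usage snippet can be checked in isolation; the column names below are invented for illustration (the real ones come from `config['features']` in this repository's `config.json`):

```python
import pandas as pd

# Toy frame standing in for the Titanic features.
data = pd.DataFrame({"Age": [22, 38], "Fare": [7.25, 71.28]})

# AutoTrain tabular models expect every feature column to carry a "feat_" prefix.
data.columns = ["feat_" + str(col) for col in data.columns]

print(list(data.columns))  # -> ['feat_Age', 'feat_Fare']
```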
st3rl4nce/t5-small-finetuned-xsum
st3rl4nce
2023-05-02T21:36:30Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:xsum", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-05-01T15:19:55Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - xsum metrics: - rouge model-index: - name: t5-small-finetuned-xsum results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: xsum type: xsum config: default split: validation args: default metrics: - name: Rouge1 type: rouge value: 28.3205 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-xsum This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset. It achieves the following results on the evaluation set: - Loss: 2.4783 - Rouge1: 28.3205 - Rouge2: 7.74 - Rougel: 22.2498 - Rougelsum: 22.2483 - Gen Len: 18.8261 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | 2.7184 | 1.0 | 12753 | 2.4783 | 28.3205 | 7.74 | 22.2498 | 22.2483 | 18.8261 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
Barahlush/ppo-LunarLander-v2
Barahlush
2023-05-02T21:01:31Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-05-02T21:01:05Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 244.54 +/- 11.67
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)
A minimal loading sketch; the checkpoint filename below is an assumption, so match it to the file actually stored in this repository:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename assumed to follow the usual "<algo>-<env>.zip" convention.
checkpoint = load_from_hub("Barahlush/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
caffsean/chilenoGPT
caffsean
2023-05-02T20:47:33Z
214
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-05-01T15:48:29Z
--- license: mit tags: - generated_from_trainer model-index: - name: chilenoGPT results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # chilenoGPT This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.3921 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 30414 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 4.4985 | 1.0 | 3802 | 4.3106 | | 4.1063 | 2.0 | 7604 | 3.9798 | | 3.8797 | 3.0 | 11406 | 3.7886 | | 3.7554 | 4.0 | 15208 | 3.6645 | | 3.616 | 5.0 | 19010 | 3.5792 | | 3.534 | 6.0 | 22812 | 3.5152 | | 3.4631 | 7.0 | 26614 | 3.4632 | | 3.3867 | 8.0 | 30416 | 3.4330 | | 3.2781 | 9.0 | 34218 | 3.3975 | | 3.2074 | 10.0 | 38020 | 3.3921 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Tokenizers 0.13.3
rithwik-db/e5-base-unsupervised-pseudo-gpl-fiqa
rithwik-db
2023-05-02T20:19:42Z
2
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-05-02T20:19:36Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # rithwik-db/e5-base-unsupervised-pseudo-gpl-fiqa This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('rithwik-db/e5-base-unsupervised-pseudo-gpl-fiqa') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. 
```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('rithwik-db/e5-base-unsupervised-pseudo-gpl-fiqa') model = AutoModel.from_pretrained('rithwik-db/e5-base-unsupervised-pseudo-gpl-fiqa') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=rithwik-db/e5-base-unsupervised-pseudo-gpl-fiqa) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 7200 with parameters: ``` {'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 10000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
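The `cos_sim` similarity function named in the loss parameters above is plain cosine similarity; a standalone sketch with made-up vectors:

```python
import numpy as np

def cos_sim(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity: dot product of the two vectors divided by
    # the product of their L2 norms.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

a = np.array([1.0, 0.0, 1.0])
b = np.array([1.0, 0.0, 1.0])
c = np.array([0.0, 1.0, 0.0])

print(cos_sim(a, b))  # identical direction -> 1.0
print(cos_sim(a, c))  # orthogonal vectors -> 0.0
```

In `MultipleNegativesRankingLoss`, these similarities are multiplied by the `scale` (20.0 here) before the softmax over in-batch negatives.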
jkkawach/ppo-LunarLander-v2
jkkawach
2023-05-02T20:19:17Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-05-01T23:22:05Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 274.59 +/- 18.01 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) ```python from huggingface_sb3 import load_from_hub from stable_baselines3 import PPO from stable_baselines3.common.env_util import make_vec_env from stable_baselines3.common.evaluation import evaluate_policy checkpoint = load_from_hub("jkkawach/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip") model = PPO.load(checkpoint) env = make_vec_env("LunarLander-v2", n_envs=1) print("Evaluating model") mean_reward, std_reward = evaluate_policy( model, env, n_eval_episodes=20, deterministic=True, ) print(f"Mean reward = {mean_reward:.2f} +/- {std_reward:.2f}") obs = env.reset() try: while True: action, _states = model.predict(obs, deterministic=True) obs, rewards, dones, info = env.step(action) env.render() except KeyboardInterrupt: pass ```
andli28/ppo-LunarLander-CleanRL
andli28
2023-05-02T20:15:42Z
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2023-05-02T20:15:37Z
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -194.34 +/- 65.91 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'ppo' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'LunarLander-v2' 'total_timesteps': 50000 'learning_rate': 0.00025 'num_envs': 4 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'andli28/ppo-CartPole-v1' 'batch_size': 512 'minibatch_size': 128} ```
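Since `anneal_lr` is `True`, the learning rate decays linearly to zero over the run. A sketch of that schedule using the numbers above, under the assumption that the trainer follows CleanRL's usual `frac = 1 - (update - 1) / num_updates` rule:

```python
learning_rate = 0.00025
total_timesteps = 50000
batch_size = 512  # num_envs * num_steps = 4 * 128

num_updates = total_timesteps // batch_size  # 97 policy updates in this run

def annealed_lr(update: int) -> float:
    # Linear decay: full rate on the first update, ~0 on the last.
    frac = 1.0 - (update - 1.0) / num_updates
    return frac * learning_rate

print(annealed_lr(1))            # first update uses the full 0.00025
print(annealed_lr(num_updates))  # last update is close to zero
```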
kucharskipj/a2c-PandaReachDense-v2
kucharskipj
2023-05-02T20:02:29Z
0
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-05-02T19:59:13Z
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: PandaReachDense-v2
      type: PandaReachDense-v2
    metrics:
    - type: mean_reward
      value: -1.61 +/- 0.73
      name: mean_reward
      verified: false
---

# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)
A minimal loading sketch; the checkpoint filename below is an assumption, so match it to the file actually stored in this repository:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename assumed to follow the usual "<algo>-<env>.zip" convention.
checkpoint = load_from_hub("kucharskipj/a2c-PandaReachDense-v2", "a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
Charul1223/nli-distilroberta-base-on-sts
Charul1223
2023-05-02T19:39:56Z
6
0
sentence-transformers
[ "sentence-transformers", "pytorch", "roberta", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-05-02T19:27:22Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. 
```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 360 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 1000, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", 
"optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 360, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: RobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
JacksonBurton/q-Taxi-v3
JacksonBurton
2023-05-02T19:24:06Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-05-02T19:21:52Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.52 +/- 2.76
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
import gym

# `load_from_hub` is assumed to be the helper defined in the training notebook.
model = load_from_hub(repo_id="JacksonBurton/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
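Once the Q-table is loaded, a Q-Learning agent simply acts greedily: an argmax over the Q-values of the current state. A minimal sketch with a toy table (the real Taxi-v3 table has 500 states × 6 actions):

```python
import numpy as np

# Toy Q-table: 3 states x 2 actions, values made up for illustration.
qtable = np.array([
    [0.1, 0.9],
    [0.5, 0.2],
    [0.0, 0.0],
])

def greedy_action(qtable: np.ndarray, state: int) -> int:
    # Exploitation: pick the action with the highest Q-value for this state.
    return int(np.argmax(qtable[state]))

print(greedy_action(qtable, 0))  # -> 1
print(greedy_action(qtable, 1))  # -> 0
```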
JacksonBurton/q-FrozenLake-v1-4x4-noSlippery
JacksonBurton
2023-05-02T19:20:27Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-05-02T19:20:23Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
import gym

# `load_from_hub` is assumed to be the helper defined in the training notebook.
model = load_from_hub(repo_id="JacksonBurton/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
huggingtweets/redcloudnimbus
huggingtweets
2023-05-02T19:16:38Z
136
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-05-02T19:16:29Z
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1627766675620745235/CgPEg0Tc_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Chris Uri</div> <div style="text-align: center; font-size: 14px;">@redcloudnimbus</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Chris Uri. 
| Data | Chris Uri | | --- | --- | | Tweets downloaded | 1359 | | Retweets | 208 | | Short tweets | 199 | | Tweets kept | 952 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/p68z097t/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @redcloudnimbus's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/s8pwy6bb) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/s8pwy6bb/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/redcloudnimbus') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
h2oai/h2ogpt-gm-oasst1-en-1024-20b
h2oai
2023-05-02T19:14:58Z
1,506
4
transformers
[ "transformers", "pytorch", "gpt_neox", "text-generation", "gpt", "llm", "large language model", "h2o-llmstudio", "en", "dataset:OpenAssistant/oasst1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-05-02T12:42:21Z
--- language: - en library_name: transformers tags: - gpt - llm - large language model - h2o-llmstudio inference: false thumbnail: >- https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico license: apache-2.0 datasets: - OpenAssistant/oasst1 --- # Model Card ## Summary This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio). - Base model: [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) - Dataset preparation: [OpenAssistant/oasst1](https://github.com/h2oai/h2o-llmstudio/blob/1935d84d9caafed3ee686ad2733eb02d2abfce57/app_utils/utils.py#LL1896C5-L1896C28) ## Usage To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` and `torch` libraries installed. ```bash pip install transformers==4.28.1 pip install torch==2.0.0 ``` ```python import torch from transformers import pipeline generate_text = pipeline( model="h2oai/h2ogpt-gm-oasst1-en-1024-20b", torch_dtype=torch.float16, trust_remote_code=True, device_map={"": "cuda:0"}, ) res = generate_text( "Why is drinking water so healthy?", min_new_tokens=2, max_new_tokens=256, do_sample=False, num_beams=2, temperature=float(0.3), repetition_penalty=float(1.2), ) print(res[0]["generated_text"]) ``` You can print a sample prompt after the preprocessing step to see how it is feed to the tokenizer: ```python print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"]) ``` ```bash <|prompt|>Why is drinking water so healthy?<|endoftext|><|answer|> ``` Alternatively, if you prefer to not use `trust_remote_code=True` you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer: ```python import torch from h2oai_pipeline import H2OTextGenerationPipeline from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained( 
"h2oai/h2ogpt-gm-oasst1-en-1024-20b", padding_side="left" ) model = AutoModelForCausalLM.from_pretrained( "h2oai/h2ogpt-gm-oasst1-en-1024-20b", torch_dtype=torch.float16, device_map={"": "cuda:0"} ) generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer) res = generate_text( "Why is drinking water so healthy?", min_new_tokens=2, max_new_tokens=256, do_sample=False, num_beams=2, temperature=float(0.3), repetition_penalty=float(1.2), ) print(res[0]["generated_text"]) ``` You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "h2oai/h2ogpt-gm-oasst1-en-1024-20b" # either local folder or huggingface model name # Important: The prompt needs to be in the same format the model was trained with. # You can find an example prompt in the experiment logs. prompt = "<|prompt|>How are you?<|endoftext|><|answer|>" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) model.cuda().eval() inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda") # generate configuration can be modified to your needs tokens = model.generate( **inputs, min_new_tokens=2, max_new_tokens=256, do_sample=False, num_beams=2, temperature=float(0.3), repetition_penalty=float(1.2), )[0] tokens = tokens[inputs["input_ids"].shape[1]:] answer = tokenizer.decode(tokens, skip_special_tokens=True) print(answer) ``` ## Model Architecture ``` GPTNeoXForCausalLM( (gpt_neox): GPTNeoXModel( (embed_in): Embedding(50432, 6144) (layers): ModuleList( (0-43): 44 x GPTNeoXLayer( (input_layernorm): LayerNorm((6144,), eps=1e-05, elementwise_affine=True) (post_attention_layernorm): LayerNorm((6144,), eps=1e-05, elementwise_affine=True) (attention): GPTNeoXAttention( (rotary_emb): RotaryEmbedding() (query_key_value): Linear(in_features=6144, out_features=18432, bias=True) (dense): 
Linear(in_features=6144, out_features=6144, bias=True) ) (mlp): GPTNeoXMLP( (dense_h_to_4h): Linear(in_features=6144, out_features=24576, bias=True) (dense_4h_to_h): Linear(in_features=24576, out_features=6144, bias=True) (act): FastGELUActivation() ) ) ) (final_layer_norm): LayerNorm((6144,), eps=1e-05, elementwise_affine=True) ) (embed_out): Linear(in_features=6144, out_features=50432, bias=False) ) ``` ## Model Configuration This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models. ## Model Validation Model validation results using [EleutherAI lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). ```bash CUDA_VISIBLE_DEVICES=0 python main.py --model hf-causal-experimental --model_args pretrained=h2oai/h2ogpt-gm-oasst1-en-1024-20b --tasks openbookqa,arc_easy,winogrande,hellaswag,arc_challenge,piqa,boolq --device cuda &> eval.log ``` | Task |Version| Metric |Value | |Stderr| |-------------|------:|--------|-----:|---|-----:| |arc_challenge| 0|acc |0.3490|± |0.0139| | | |acc_norm|0.3737|± |0.0141| |arc_easy | 0|acc |0.6271|± |0.0099| | | |acc_norm|0.5951|± |0.0101| |boolq | 1|acc |0.6440|± |0.0084| |hellaswag | 0|acc |0.5366|± |0.0050| | | |acc_norm|0.7173|± |0.0045| |openbookqa | 0|acc |0.2920|± |0.0204| | | |acc_norm|0.4160|± |0.0221| |piqa | 0|acc |0.7546|± |0.0100| | | |acc_norm|0.7650|± |0.0099| |winogrande | 0|acc |0.6527|± |0.0134| ## Disclaimer Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions. - Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. 
By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints. - Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion. - Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model. - Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities. - Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues. - Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes. By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
vocabtrimmer/mbart-large-cc25-frquad-qg-trimmed-fr
vocabtrimmer
2023-05-02T19:01:11Z
105
0
transformers
[ "transformers", "pytorch", "mbart", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-03-16T01:57:33Z
# Vocabulary Trimmed [lmqg/mbart-large-cc25-frquad-qg](https://huggingface.co/lmqg/mbart-large-cc25-frquad-qg): `vocabtrimmer/mbart-large-cc25-frquad-qg-trimmed-fr` This model is a trimmed version of [lmqg/mbart-large-cc25-frquad-qg](https://huggingface.co/lmqg/mbart-large-cc25-frquad-qg) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to compress the model size. The following table shows a summary of the trimming process. | | lmqg/mbart-large-cc25-frquad-qg | vocabtrimmer/mbart-large-cc25-frquad-qg-trimmed-fr | |:---------------------------|:----------------------------------|:-----------------------------------------------------| | parameter_size_full | 610,852,864 | 442,588,160 | | parameter_size_embedding | 256,028,672 | 87,763,968 | | vocab_size | 250,028 | 85,707 | | compression_rate_full | 100.0 | 72.45 | | compression_rate_embedding | 100.0 | 34.28 | The following table shows the parameters used to trim the vocabulary. | language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency | |:-----------|:----------------------------|:-----------------|:---------------|:----------------|:--------------------|----------------:| | fr | vocabtrimmer/mc4_validation | text | fr | validation | | 2 |
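The `compression_rate_*` rows can be reproduced directly from the reported parameter counts; a quick sanity check in Python (all values copied from the summary table above):

```python
# Reproduce the compression_rate_* rows of the summary table from the
# reported parameter counts.
full_params = 610_852_864      # lmqg/mbart-large-cc25-frquad-qg
trimmed_params = 442_588_160   # trimmed-fr variant
full_embed = 256_028_672
trimmed_embed = 87_763_968

compression_rate_full = round(100 * trimmed_params / full_params, 2)
compression_rate_embedding = round(100 * trimmed_embed / full_embed, 2)
print(compression_rate_full, compression_rate_embedding)  # 72.45 34.28
```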
mrm8488/bert2bert_shared-spanish-finetuned-summarization
mrm8488
2023-05-02T18:59:18Z
1,935
31
transformers
[ "transformers", "pytorch", "safetensors", "encoder-decoder", "text2text-generation", "summarization", "news", "es", "dataset:mlsum", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:05Z
--- tags: - summarization - news language: es datasets: - mlsum widget: - text: 'Al filo de las 22.00 horas del jueves, la Asamblea de Madrid vive un momento sorprendente: Vox decide no apoyar una propuesta del PP en favor del blindaje fiscal de la Comunidad. Se ha roto la unidad de los tres partidos de derechas. Es un hecho excepcional. Desde que arrancó la legislatura, PP, Cs y Vox han votado en bloque casi el 75% de las veces en el pleno de la Cámara. Juntos decidieron la composición de la Mesa de la Asamblea. Juntos invistieron presidenta a Isabel Díaz Ayuso. Y juntos han votado la mayoría de proposiciones no de ley, incluida la que ha marcado el esprint final de la campaña para las elecciones generales: acaban de instar al Gobierno de España a "la ilegalización inmediata" de los partidos separatistas "que atenten contra la unidad de la Nación". Los críticos de Cs no comparten el apoyo al texto de Vox contra el secesionisimo Ese balance retrata una necesidad antes que una complicidad, según fuentes del PP con predicamento en la dirección regional y nacional. Tras casi 15 años gobernando con mayoría absoluta, la formación conservadora vivió como una tortura la pasada legislatura, en la que dependió de Cs para sacar adelante sus iniciativas. El problema se agudizó tras las elecciones autonómicas de mayo. El PP ha tenido que formar con Cs el primer gobierno de coalición de la historia de la región, y ni siquiera con eso le basta para ganar las votaciones de la Cámara. Los dos socios gubernamentales necesitan a Vox, la menos predecible de las tres formaciones. "Tenemos que trabajar juntos defendiendo la unidad del país, por eso no quisimos dejar a Vox solo", dijo ayer Díaz Ayuso para justificar el apoyo de PP y Cs a la proposición de la extrema derecha sobre Cataluña. "Después nosotros llevábamos otra proposición para defender el blindaje fiscal de Madrid, y ahí Vox nos dejó atrás. No permitió que esto saliera. 
Es un grave error por su parte", prosiguió, recalcando el enfado del PP. "Demuestra que está más en cuestiones electoralistas", subrayó. "Los que pensamos, con nuestras inmensas diferencias, que tenemos cosas en común que nos unen como partidos que queremos Comunidades libres, con bajos impuestos, en las que se viva con seguridad y en paz, tenemos que estar unidos", argumentó. "Y por lo menos nosotros de nuestra línea no nos separamos". Al contrario de lo que está ocurriendo el Ayuntamiento de Madrid, donde el PP y Cs ya han defendido posiciones de voto distintas, pese a compartir el Gobierno, en la Asamblea los partidos de Díaz Ayuso e Ignacio Aguado están actuando con la máxima lealtad en las votaciones del pleno. Otra cosa son las comisiones. Y el caso Avalmadrid. Es en ese terreno donde Cs y Vox están buscando el margen de maniobra necesario para separarse del PP en plena campaña electoral, abandonando a su suerte a su socio para distinguirse ante los electores. —"Usted me ha dejado tirada", le espetó la presidenta de la Comunidad de Madrid a Rocío Monasterio tras saber que Vox permitiría que la izquierda tuviera mayoría en la comisión parlamentaria que investigará los avales concedidos por la empresa semipública entre 2007 y 2018, lo que podría incluir el de 400.000 euros aprobado en 2011, y nunca devuelto al completo, para una empresa participada por el padre de Isabel Díaz Ayuso. "Monasterio no es de fiar. Dice una cosa y hace la contraria", dice una fuente popular sobre las negociaciones mantenidas para repartirse los puestos de las diferentes comisiones, que Vox no cumplió tras buscar un segundo pacto con otras formaciones (que no llegó a buen puerto). Ilegalización de Vox Los tres partidos de derechas también se han enfrentado por la ubicación de Vox en el pleno. Las largas negociaciones para la investidura de Díaz Ayuso dejaron heridas abiertas. 
Y los diputados de Cs no desaprovechan la oportunidad de lanzar dardos contra los de Vox, pero luego coinciden con ellos en la mayoría de votaciones. Ocurrió, por ejemplo, el jueves, cuando se debatía la polémica proposición para instar al Gobierno nacional a ilegalizar a los partidos separatistas que atenten contra la unidad de España. —"Mostrar nuestra sorpresa ante la presentación por parte de Vox de esta propuesta", lanzó Araceli Gómez, diputada de la formación de Aguado. "Sorprende que planteen ustedes este asunto cuando está también sobre la mesa el debate de su propia ilegalización por atentar contra el ordenamiento jurídico o contra valores constitucionales como la igualdad o la no discriminación". Luego de esa descalificación, y ante la incredulidad de los diputados de los partidos de izquierdas, Cs unió sus votos a los de Vox y a los del PP. La decisión ha provocado polémica interna, como demuestra que Albert Rivera no la apoyara ayer explícitamente. Tampoco ha sido bien acogida por el sector crítico de la formación. Pero ha demostrado una cosa: en Madrid hay tres partidos que casi siempre votan como uno.' --- # Spanish BERT2BERT (BETO) fine-tuned on MLSUM ES for summarization ## Model [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) (BERT Checkpoint) ## Dataset **MLSUM** is the first large-scale MultiLingual SUMmarization dataset. Obtained from online newspapers, it contains 1.5M+ article/summary pairs in five different languages -- namely, French, German, **Spanish**, Russian, Turkish. Together with English newspapers from the popular CNN/Daily mail dataset, the collected data form a large scale multilingual dataset which can enable new research directions for the text summarization community. We report cross-lingual comparative analyses based on state-of-the-art systems. These highlight existing biases which motivate the use of a multi-lingual dataset. 
[MLSUM es](https://huggingface.co/datasets/viewer/?dataset=mlsum) ## Results |Set|Metric| Value| |----|------|------| | Test |Rouge2 - mid -precision | **9.6**| | Test | Rouge2 - mid - recall | **8.4**| | Test | Rouge2 - mid - fmeasure | **8.7**| | Test | Rouge1 | 26.24 | | Test | Rouge2 | 8.9 | | Test | RougeL | 21.01| | Test | RougeLsum | 21.02 | ## Usage ```python import torch from transformers import BertTokenizerFast, EncoderDecoderModel device = 'cuda' if torch.cuda.is_available() else 'cpu' ckpt = 'mrm8488/bert2bert_shared-spanish-finetuned-summarization' tokenizer = BertTokenizerFast.from_pretrained(ckpt) model = EncoderDecoderModel.from_pretrained(ckpt).to(device) def generate_summary(text): inputs = tokenizer([text], padding="max_length", truncation=True, max_length=512, return_tensors="pt") input_ids = inputs.input_ids.to(device) attention_mask = inputs.attention_mask.to(device) output = model.generate(input_ids, attention_mask=attention_mask) return tokenizer.decode(output[0], skip_special_tokens=True) text = "Your text here..." generate_summary(text) ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) with the support of [Narrativa](https://www.narrativa.com/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
RafiBrent/chatgpt_eli5_text_classifier
RafiBrent
2023-05-02T18:57:57Z
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-02T18:01:09Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: chatgpt_eli5_text_classifier results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # chatgpt_eli5_text_classifier This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1470 - Accuracy: 0.9693 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0983 | 1.0 | 1660 | 0.1974 | 0.9497 | | 0.0329 | 2.0 | 3320 | 0.1470 | 0.9693 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
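A minimal inference sketch (hedged: which label denotes ChatGPT-written text is an assumption here — check the model's `id2label` mapping in `config.json`; the pipeline call is left commented because it downloads the checkpoint):

```python
MODEL_ID = "RafiBrent/chatgpt_eli5_text_classifier"

def flag_generated(preds, positive_label="LABEL_1", threshold=0.5):
    """Map raw text-classification pipeline output to booleans.

    `positive_label` is an assumption; inspect the model's id2label
    mapping to see which label actually denotes ChatGPT-written text.
    """
    return [p["label"] == positive_label and p["score"] >= threshold
            for p in preds]

# from transformers import pipeline
# clf = pipeline("text-classification", model=MODEL_ID)
# flag_generated(clf(["The sky is blue because of Rayleigh scattering ..."]))
```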
haining/poem_interpretation_allpoetry169k_baseline
haining
2023-05-02T18:31:43Z
105
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "text2text generation", "en", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-05-02T18:21:40Z
--- inference: parameters: do_sample: true max_length: 384 top_p: 0.9 repetition_penalty: 1.0 language: - en license: mit tags: - "text2text generation" task: name: "poem interpretation" type: "text2text generation" widget: - text: "Explain: \nThe Lost Boy\n\nBoy it really stinks in here\nThe dumpster is not the place\nTo get the food you need each day\nJust to feed your face.\n\nA ten-year-old with a dirty face\nCrawls out with his daily meal\nWhat is he doing in this place\nHow am I suppose to feel?\n\nHis mother cradles a baby \nThe child's been dead three weeks\nHer mind is gone from drug abuse\nAnd now she hardly speaks.\n\nGrandma is a drunkard\nWith men who come to town\nBringing her a bottle\nJust to go a round.\n\nDrugs out on the table \nA line or two is good\nThat should carry her over \nNo one ever understood.\n\nThe little boy with dirty face\nHas not been schooled in years\nHe fights the streets alone\nLong since lost his fears.\n\nA stale sandwich, and watered coke\nHis meal for this day\nWhatever tomorrow may bring\nHe knows not the word play.\n\nEmaciated with distant eyes\nNo one really sees him\nJust one of the lost boys\nHis life completely grim.\n\nGod bless the children!\n\n" example_title: "The Lost Boy - pattyann4500 (allpoetry.com/920731)" - text: "Explain: \nLet your breath be the air I need,\nwhen I drown in your eyes as I see.\nLet yourself fall into my arms that bleed,\nwhen the world shows you no mercy.\n\nLet your sad past bury down in the core,\nwhen you walk with your heart close to me.\nLet there be your lovely face at the door,\nWhen I return from the war no matter how long it be.\n\nLet your love nourish my frozen heart,\nwhen it lies under the snow capped tree.\nLet me be enslaved with you forever from the start,\nwhen the time comes, together we shall flee.\n\nLet your presence enlighten my dark,\nwhen your smile reflects in the sea.\nLet the words of love be thy spark,\nwhen you come out of dreams to kiss me.\n\nI wish we were 
together... my princess... \n" example_title: "Princess... - Soulhealer95 (allpoetry.com/11038949)" --- # Overview This pilot hub tests whether flan-t5-base can effectively automate poem interpretation. To use it, simply paste in any poem of interest and read the generated interpretation. Please begin your request with the prompt, 'Explain: '.
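A minimal sketch of how a request might be built and sent, assuming the standard `text2text-generation` pipeline; the generation settings mirror the widget parameters declared in the card metadata (`do_sample`, `max_length=384`, `top_p=0.9`), and the pipeline call is commented out because it downloads the checkpoint:

```python
def build_request(poem: str) -> str:
    """Prefix a poem with the task prompt this card asks for ('Explain: ')."""
    return "Explain: \n" + poem.strip() + "\n"

# from transformers import pipeline
# interpreter = pipeline("text2text-generation",
#                        model="haining/poem_interpretation_allpoetry169k_baseline")
# interpreter(build_request("The Lost Boy\n\nBoy it really stinks in here ..."),
#             do_sample=True, max_length=384, top_p=0.9)
```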
Fake-Person/All_fro_lora
Fake-Person
2023-05-02T18:15:16Z
0
1
null
[ "region:us" ]
null
2023-04-12T20:55:35Z
All the LoRAs my friends and I have downloaded. If you see anything that is bad, please tell me. This collection may contain duplicates.
yasndr/q-FrozenLake-v1
yasndr
2023-05-02T18:04:03Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-05-02T18:03:59Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="yasndr/q-FrozenLake-v1", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
mrm8488/t5-base-finetuned-squadv2
mrm8488
2023-05-02T17:42:54Z
139
5
transformers
[ "transformers", "pytorch", "safetensors", "t5", "text2text-generation", "en", "dataset:squad_v2", "arxiv:1910.10683", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - squad_v2 --- # T5-base fine-tuned on SQuAD v2 [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) fine-tuned on [SQuAD v2](https://rajpurkar.github.io/SQuAD-explorer/) for the **Q&A** downstream task. ## Details of T5 The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*. Here is the abstract: Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.
![model image](https://i.imgur.com/jVFMMWR.png) ## Details of the downstream task (Q&A) - Dataset 📚 🧐 ❓ Dataset ID: ```squad_v2``` from [Huggingface/NLP](https://github.com/huggingface/nlp) | Dataset | Split | # samples | | -------- | ----- | --------- | | squad_v2 | train | 130319 | | squad_v2 | valid | 11873 | How to load it from [nlp](https://github.com/huggingface/nlp) ```python train_dataset = nlp.load_dataset('squad_v2', split=nlp.Split.TRAIN) valid_dataset = nlp.load_dataset('squad_v2', split=nlp.Split.VALIDATION) ``` Check out more about this dataset and others in [NLP Viewer](https://huggingface.co/nlp/viewer/) ## Model fine-tuning 🏋️‍ The training script is a slightly modified version of [this one](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) ## Results 📝 | Metric | # Value | | ------ | --------- | | **EM** | **77.64** | | **F1** | **81.32** | ## Model in Action 🚀 ```python from transformers import AutoModelForSeq2SeqLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-base-finetuned-squadv2") model = AutoModelForSeq2SeqLM.from_pretrained("mrm8488/t5-base-finetuned-squadv2") def get_answer(question, context): input_text = "question: %s context: %s" % (question, context) features = tokenizer([input_text], return_tensors='pt') output = model.generate(input_ids=features['input_ids'], attention_mask=features['attention_mask']) return tokenizer.decode(output[0]) context = "Manuel have created RuPERTa-base with the support of HF-Transformers and Google" question = "Who has supported Manuel?" get_answer(question, context) # output: 'HF-Transformers and Google' ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
jeremy-costello/vicuna-13b-v1.1-4bit-128g
jeremy-costello
2023-05-02T17:41:21Z
6
1
transformers
[ "transformers", "llama", "text-generation", "autotrain_compatible", "region:us" ]
text-generation
2023-04-15T21:25:33Z
--- inference: false --- 4-bit quantization of the vicuna-13b-v1.1 model. The delta was added to the original LLaMa weights using FastChat. \ Quantization and inference with GPTQ-for-LLaMa (commit 58c8ab4). Quantization args: $MODEL_DIRECTORY, c4, wbits 4, true-sequential, act-order, groupsize 128. \ Inference args: $MODEL_DIRECTORY, wbits 4, groupsize 128, load $CHECKPOINT_FILE \ Add arg device=0 if using GPU for inference. You may have to change min_length and max_length for better inference outputs. The separator has been changed to \</s\>. The simple prompt format is "Human: $REQUEST\</s\>Assistant:". Delta: https://huggingface.co/lmsys/vicuna-13b-delta-v1.1 \ FastChat: https://github.com/lm-sys/FastChat \ GPTQ-for-LLaMa: https://github.com/qwopqwop200/GPTQ-for-LLaMa
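The prompt format above can be sketched as a small helper. The single-turn form is from the card; the multi-turn extension is an assumption extrapolated from the same `</s>` separator convention:

```python
SEP = "</s>"

def build_prompt(request: str, history=()) -> str:
    """Format a request as 'Human: $REQUEST</s>Assistant:'.

    `history` is a sequence of (human, assistant) pairs; multi-turn
    support is an assumption, not documented by the card.
    """
    prompt = ""
    for human, assistant in history:
        prompt += f"Human: {human}{SEP}Assistant: {assistant}{SEP}"
    return prompt + f"Human: {request}{SEP}Assistant:"

print(build_prompt("What is 4-bit quantization?"))
# Human: What is 4-bit quantization?</s>Assistant:
```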
usix79/PixelCopter
usix79
2023-05-02T17:28:10Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-05-02T12:59:16Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: PixelCopter results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 31.80 +/- 23.68 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
waditama/test
waditama
2023-05-02T17:25:40Z
0
0
adapter-transformers
[ "adapter-transformers", "text-classification", "en", "dataset:OpenAssistant/oasst1", "license:apache-2.0", "region:us" ]
text-classification
2023-05-02T17:24:27Z
--- license: apache-2.0 datasets: - OpenAssistant/oasst1 language: - en metrics: - accuracy library_name: adapter-transformers pipeline_tag: text-classification ---
mrovejaxd/goemotions_bertspanish_finetunig_e
mrovejaxd
2023-05-02T17:23:25Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:go_emotions", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-02T15:28:28Z
--- tags: - generated_from_trainer datasets: - go_emotions metrics: - accuracy - f1 model-index: - name: goemotions_bertspanish_finetunig_e results: - task: name: Text Classification type: text-classification dataset: name: go_emotions type: go_emotions config: simplified split: test args: simplified metrics: - name: Accuracy type: accuracy value: 0.4 - name: F1 type: f1 value: 0.2777912523419085 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # goemotions_bertspanish_finetunig_e This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the go_emotions dataset. It achieves the following results on the evaluation set: - Loss: 3.3859 - Accuracy: 0.4 - F1: 0.2778 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 16 ### Training results ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
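A hedged usage sketch: `top_k=None` asks the text-classification pipeline for all emotion scores, and the small helper picks the winner. The pipeline call is commented out because it downloads the checkpoint, and the example sentence is illustrative:

```python
def top_emotion(preds):
    """Pick the highest-scoring label from one prediction list
    (i.e. one element of pipeline(..., top_k=None) output)."""
    return max(preds, key=lambda p: p["score"])["label"]

# from transformers import pipeline
# clf = pipeline("text-classification",
#                model="mrovejaxd/goemotions_bertspanish_finetunig_e",
#                top_k=None)
# top_emotion(clf("¡Qué buena noticia!")[0])
```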
Mico91/Test1
Mico91
2023-05-02T17:20:29Z
0
0
adapter-transformers
[ "adapter-transformers", "de", "dataset:OpenAssistant/oasst1", "license:openrail", "region:us" ]
null
2023-05-02T17:00:54Z
--- license: openrail datasets: - OpenAssistant/oasst1 language: - de metrics: - accuracy library_name: adapter-transformers ---
jeveuxaider/jva-missions-report-v2
jeveuxaider
2023-05-02T17:16:32Z
61
0
transformers
[ "transformers", "tf", "camembert", "text-classification", "generated_from_keras_callback", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-02T17:16:16Z
--- license: mit tags: - generated_from_keras_callback model-index: - name: jva-missions-report-v2 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # jva-missions-report-v2 This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1873 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 0.001, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Epoch | |:----------:|:-----:| | 0.1873 | 0 | ### Framework versions - Transformers 4.28.1 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
joheras/longformer-base-4096-bne-es-finetuned-augmented1
joheras
2023-05-02T17:15:35Z
12
0
transformers
[ "transformers", "pytorch", "longformer", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-05-02T07:35:55Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: longformer-base-4096-bne-es-finetuned-augmented1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # longformer-base-4096-bne-es-finetuned-augmented1 This model is a fine-tuned version of [PlanTL-GOB-ES/longformer-base-4096-bne-es](https://huggingface.co/PlanTL-GOB-ES/longformer-base-4096-bne-es) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.7936 - Precision: 0.5307 - Recall: 0.6189 - F1: 0.5714 - Accuracy: 0.8447 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.723 | 1.0 | 713 | 0.5777 | 0.3428 | 0.5358 | 0.4181 | 0.8332 | | 0.3414 | 2.0 | 1426 | 0.7079 | 0.3337 | 0.5462 | 0.4143 | 0.8210 | | 0.1307 | 3.0 | 2139 | 0.7862 | 0.4226 | 0.5868 | 0.4913 | 0.8298 | | 0.0792 | 4.0 | 2852 | 0.9581 | 0.4215 | 0.5906 | 0.4919 | 0.8245 | | 0.0427 | 5.0 | 3565 | 1.0090 | 0.4451 | 0.6047 | 0.5128 | 0.8303 | | 0.032 | 6.0 | 4278 | 1.0855 | 0.4865 | 0.6123 | 0.5422 | 0.8450 | | 0.0237 | 7.0 | 4991 | 1.1150 | 0.4693 | 0.6066 | 0.5292 | 0.8455 | | 0.0171 | 8.0 | 5704 | 1.1544 | 0.4778 | 0.5991 | 0.5316 | 0.8456 | | 0.0155 | 9.0 | 6417 | 1.1691 | 0.4812 | 0.6038 | 0.5356 | 0.8421 
| | 0.0114 | 10.0 | 7130 | 1.2833 | 0.4861 | 0.6104 | 0.5412 | 0.8349 | | 0.0137 | 11.0 | 7843 | 1.2716 | 0.4594 | 0.6028 | 0.5214 | 0.8334 | | 0.0104 | 12.0 | 8556 | 1.2635 | 0.4571 | 0.5981 | 0.5182 | 0.8459 | | 0.0053 | 13.0 | 9269 | 1.2427 | 0.4447 | 0.6066 | 0.5132 | 0.8419 | | 0.0067 | 14.0 | 9982 | 1.2834 | 0.4862 | 0.6 | 0.5372 | 0.8432 | | 0.0068 | 15.0 | 10695 | 1.3774 | 0.5012 | 0.6094 | 0.5500 | 0.8373 | | 0.0077 | 16.0 | 11408 | 1.3625 | 0.4871 | 0.6057 | 0.5399 | 0.8428 | | 0.0051 | 17.0 | 12121 | 1.3764 | 0.5 | 0.6113 | 0.5501 | 0.8445 | | 0.0061 | 18.0 | 12834 | 1.5528 | 0.4613 | 0.6009 | 0.5219 | 0.8267 | | 0.0049 | 19.0 | 13547 | 1.3307 | 0.5070 | 0.6151 | 0.5558 | 0.8538 | | 0.0059 | 20.0 | 14260 | 1.3556 | 0.4903 | 0.6198 | 0.5475 | 0.8439 | | 0.0064 | 21.0 | 14973 | 1.4599 | 0.5004 | 0.6123 | 0.5507 | 0.8409 | | 0.0057 | 22.0 | 15686 | 1.3506 | 0.5077 | 0.6217 | 0.5589 | 0.8439 | | 0.0054 | 23.0 | 16399 | 1.5439 | 0.4914 | 0.5953 | 0.5384 | 0.8377 | | 0.0034 | 24.0 | 17112 | 1.5174 | 0.5059 | 0.6066 | 0.5517 | 0.8377 | | 0.0048 | 25.0 | 17825 | 1.5228 | 0.4984 | 0.6057 | 0.5468 | 0.8438 | | 0.0041 | 26.0 | 18538 | 1.4479 | 0.5224 | 0.6057 | 0.5609 | 0.8403 | | 0.0049 | 27.0 | 19251 | 1.3992 | 0.5291 | 0.6349 | 0.5772 | 0.8447 | | 0.0048 | 28.0 | 19964 | 1.4971 | 0.5234 | 0.6321 | 0.5726 | 0.8478 | | 0.0018 | 29.0 | 20677 | 1.4874 | 0.4981 | 0.6151 | 0.5504 | 0.8390 | | 0.0035 | 30.0 | 21390 | 1.3051 | 0.5051 | 0.6094 | 0.5524 | 0.8421 | | 0.0031 | 31.0 | 22103 | 1.5998 | 0.5133 | 0.6179 | 0.5608 | 0.8364 | | 0.0031 | 32.0 | 22816 | 1.4274 | 0.5085 | 0.6179 | 0.5579 | 0.8458 | | 0.0042 | 33.0 | 23529 | 1.3180 | 0.5111 | 0.6066 | 0.5548 | 0.8494 | | 0.0022 | 34.0 | 24242 | 1.5043 | 0.4886 | 0.6085 | 0.5420 | 0.8442 | | 0.0021 | 35.0 | 24955 | 1.5247 | 0.4962 | 0.6094 | 0.5470 | 0.8425 | | 0.0024 | 36.0 | 25668 | 1.5139 | 0.4851 | 0.5981 | 0.5357 | 0.8432 | | 0.0027 | 37.0 | 26381 | 1.5214 | 0.4930 | 0.6009 | 0.5417 | 0.8404 | | 0.0024 | 38.0 | 
27094 | 1.4470 | 0.5087 | 0.6075 | 0.5537 | 0.8472 | | 0.0009 | 39.0 | 27807 | 1.4867 | 0.5016 | 0.6094 | 0.5503 | 0.8485 | | 0.0015 | 40.0 | 28520 | 1.5234 | 0.5148 | 0.6217 | 0.5632 | 0.8483 | | 0.0023 | 41.0 | 29233 | 1.5742 | 0.4926 | 0.6264 | 0.5515 | 0.8407 | | 0.0017 | 42.0 | 29946 | 1.5897 | 0.5252 | 0.6 | 0.5601 | 0.8403 | | 0.0022 | 43.0 | 30659 | 1.4243 | 0.4889 | 0.6038 | 0.5403 | 0.8448 | | 0.001 | 44.0 | 31372 | 1.6117 | 0.5081 | 0.6179 | 0.5577 | 0.8462 | | 0.0015 | 45.0 | 32085 | 1.5342 | 0.5169 | 0.6066 | 0.5582 | 0.8405 | | 0.0005 | 46.0 | 32798 | 1.5110 | 0.4687 | 0.6142 | 0.5316 | 0.8432 | | 0.0019 | 47.0 | 33511 | 1.5835 | 0.5066 | 0.6132 | 0.5548 | 0.8427 | | 0.0063 | 48.0 | 34224 | 1.5688 | 0.5058 | 0.5802 | 0.5404 | 0.8394 | | 0.0017 | 49.0 | 34937 | 1.5410 | 0.5075 | 0.6028 | 0.5511 | 0.8419 | | 0.0012 | 50.0 | 35650 | 1.5343 | 0.5220 | 0.5943 | 0.5558 | 0.8359 | | 0.0009 | 51.0 | 36363 | 1.5190 | 0.5173 | 0.6358 | 0.5705 | 0.8411 | | 0.0006 | 52.0 | 37076 | 1.6576 | 0.5066 | 0.6189 | 0.5571 | 0.8311 | | 0.0009 | 53.0 | 37789 | 1.5675 | 0.5155 | 0.6283 | 0.5663 | 0.8475 | | 0.0007 | 54.0 | 38502 | 1.6993 | 0.5218 | 0.6208 | 0.5670 | 0.8328 | | 0.0019 | 55.0 | 39215 | 1.6003 | 0.5284 | 0.6047 | 0.5640 | 0.8365 | | 0.0014 | 56.0 | 39928 | 1.4922 | 0.5428 | 0.6226 | 0.5800 | 0.8556 | | 0.0004 | 57.0 | 40641 | 1.5974 | 0.5402 | 0.6142 | 0.5748 | 0.8464 | | 0.0002 | 58.0 | 41354 | 1.7351 | 0.5501 | 0.6113 | 0.5791 | 0.8417 | | 0.0008 | 59.0 | 42067 | 1.6191 | 0.5179 | 0.6132 | 0.5616 | 0.8476 | | 0.0006 | 60.0 | 42780 | 1.5721 | 0.5059 | 0.6094 | 0.5528 | 0.8455 | | 0.0009 | 61.0 | 43493 | 1.6079 | 0.4980 | 0.6 | 0.5443 | 0.8388 | | 0.0011 | 62.0 | 44206 | 1.7208 | 0.4907 | 0.5943 | 0.5375 | 0.8288 | | 0.0002 | 63.0 | 44919 | 1.7335 | 0.5012 | 0.5925 | 0.5430 | 0.8354 | | 0.001 | 64.0 | 45632 | 1.7670 | 0.5439 | 0.6189 | 0.5790 | 0.8352 | | 0.0002 | 65.0 | 46345 | 1.7687 | 0.5203 | 0.6170 | 0.5645 | 0.8430 | | 0.0002 | 66.0 | 47058 | 1.7857 | 
0.5059 | 0.6066 | 0.5517 | 0.8375 | | 0.0003 | 67.0 | 47771 | 1.7961 | 0.5090 | 0.6104 | 0.5551 | 0.8335 | | 0.0009 | 68.0 | 48484 | 1.7072 | 0.5039 | 0.6132 | 0.5532 | 0.8416 | | 0.0003 | 69.0 | 49197 | 1.7345 | 0.5147 | 0.6113 | 0.5589 | 0.8421 | | 0.0002 | 70.0 | 49910 | 1.6423 | 0.5427 | 0.6179 | 0.5779 | 0.8491 | | 0.0007 | 71.0 | 50623 | 1.6966 | 0.5422 | 0.6368 | 0.5857 | 0.8425 | | 0.0016 | 72.0 | 51336 | 1.7376 | 0.5153 | 0.6198 | 0.5627 | 0.8349 | | 0.001 | 73.0 | 52049 | 1.6447 | 0.51 | 0.6255 | 0.5619 | 0.8442 | | 0.0001 | 74.0 | 52762 | 1.7449 | 0.5204 | 0.6132 | 0.5630 | 0.8421 | | 0.0002 | 75.0 | 53475 | 1.6948 | 0.5287 | 0.6179 | 0.5698 | 0.8450 | | 0.0005 | 76.0 | 54188 | 1.6546 | 0.5305 | 0.6321 | 0.5768 | 0.8480 | | 0.0002 | 77.0 | 54901 | 1.7188 | 0.5224 | 0.6264 | 0.5697 | 0.8444 | | 0.0001 | 78.0 | 55614 | 1.6167 | 0.5102 | 0.6142 | 0.5574 | 0.8462 | | 0.0005 | 79.0 | 56327 | 1.6709 | 0.5160 | 0.6245 | 0.5651 | 0.8462 | | 0.0 | 80.0 | 57040 | 1.6883 | 0.5223 | 0.6179 | 0.5661 | 0.8475 | | 0.0002 | 81.0 | 57753 | 1.7612 | 0.5051 | 0.6057 | 0.5508 | 0.8436 | | 0.0001 | 82.0 | 58466 | 1.7766 | 0.5342 | 0.6189 | 0.5734 | 0.8410 | | 0.0001 | 83.0 | 59179 | 1.7235 | 0.5252 | 0.6189 | 0.5682 | 0.8453 | | 0.0002 | 84.0 | 59892 | 1.7663 | 0.5319 | 0.6208 | 0.5729 | 0.8440 | | 0.0007 | 85.0 | 60605 | 1.7581 | 0.5280 | 0.6217 | 0.5711 | 0.8408 | | 0.0002 | 86.0 | 61318 | 1.7467 | 0.5271 | 0.6236 | 0.5713 | 0.8407 | | 0.0003 | 87.0 | 62031 | 1.7220 | 0.5275 | 0.6151 | 0.5679 | 0.8437 | | 0.0001 | 88.0 | 62744 | 1.7616 | 0.5207 | 0.6179 | 0.5651 | 0.8430 | | 0.0 | 89.0 | 63457 | 1.7817 | 0.5396 | 0.6170 | 0.5757 | 0.8460 | | 0.0 | 90.0 | 64170 | 1.7684 | 0.5319 | 0.6132 | 0.5697 | 0.8436 | | 0.0 | 91.0 | 64883 | 1.7731 | 0.5264 | 0.6208 | 0.5697 | 0.8434 | | 0.0 | 92.0 | 65596 | 1.7448 | 0.5314 | 0.6236 | 0.5738 | 0.8467 | | 0.0 | 93.0 | 66309 | 1.7457 | 0.5353 | 0.6302 | 0.5789 | 0.8484 | | 0.0 | 94.0 | 67022 | 1.7477 | 0.5424 | 0.6274 | 0.5818 | 0.8485 | 
| 0.0 | 95.0 | 67735 | 1.7931 | 0.5292 | 0.6160 | 0.5693 | 0.8444 | | 0.0002 | 96.0 | 68448 | 1.8056 | 0.5287 | 0.6170 | 0.5694 | 0.8455 | | 0.0001 | 97.0 | 69161 | 1.7963 | 0.5247 | 0.6217 | 0.5691 | 0.8450 | | 0.0001 | 98.0 | 69874 | 1.7963 | 0.5211 | 0.6179 | 0.5654 | 0.8446 | | 0.0001 | 99.0 | 70587 | 1.7950 | 0.5261 | 0.6189 | 0.5687 | 0.8452 | | 0.0002 | 100.0 | 71300 | 1.7936 | 0.5307 | 0.6189 | 0.5714 | 0.8447 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
d2i-pti-iu/test_svc_model
d2i-pti-iu
2023-05-02T17:07:41Z
14
0
transformers
[ "transformers", "joblib", "SVC", "tabular-classification", "dataset:scikit-learn/iris", "endpoints_compatible", "region:us" ]
tabular-classification
2023-05-02T01:36:42Z
--- datasets: - scikit-learn/iris widget: structuredData: SepalLengthCm: - 5.1 - 4.9 - 6.2 SepalWidthCm: - 3.5 - 3 - 3.4 PetalLengthCm: - 1.4 - 1.4 - 5.4 PetalWidthCm: - 0.2 - 0.2 - 2.3 tags: - tabular-classification --- ### How to use ```python import joblib from huggingface_hub import hf_hub_download from sklearn.datasets import load_iris REPO_ID = "d2i-pti-iu/test_svc_model" FILENAME = "iris_svm.joblib" model = joblib.load(hf_hub_download(repo_id=REPO_ID, filename=FILENAME)) iris = load_iris() X = iris.data[:3] labels = model.predict(X) ```
Blgn94/mongolian-roberta-large-mnli-ner
Blgn94
2023-05-02T17:02:32Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "token-classification", "generated_from_trainer", "mn", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-05-02T14:26:51Z
--- language: - mn license: mit tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: mongolian-roberta-large-mnli-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mongolian-roberta-large-mnli-ner This model is a fine-tuned version of [roberta-large-mnli](https://huggingface.co/roberta-large-mnli) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1941 - Precision: 0.7734 - Recall: 0.8488 - F1: 0.8094 - Accuracy: 0.9582 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.3433 | 1.0 | 477 | 0.2252 | 0.6196 | 0.7338 | 0.6719 | 0.9288 | | 0.2067 | 2.0 | 954 | 0.1859 | 0.6981 | 0.7908 | 0.7416 | 0.9381 | | 0.165 | 3.0 | 1431 | 0.1776 | 0.7308 | 0.8112 | 0.7689 | 0.9455 | | 0.1362 | 4.0 | 1908 | 0.1639 | 0.7513 | 0.8265 | 0.7871 | 0.9520 | | 0.109 | 5.0 | 2385 | 0.1703 | 0.7524 | 0.8302 | 0.7894 | 0.9517 | | 0.0873 | 6.0 | 2862 | 0.1690 | 0.7643 | 0.8396 | 0.8002 | 0.9552 | | 0.0697 | 7.0 | 3339 | 0.1754 | 0.7696 | 0.8442 | 0.8052 | 0.9557 | | 0.0552 | 8.0 | 3816 | 0.1793 | 0.7687 | 0.8468 | 0.8059 | 0.9572 | | 0.0434 | 9.0 | 4293 | 0.1878 | 0.7842 | 0.8507 | 0.8161 | 0.9580 | | 0.0354 | 10.0 | 4770 | 0.1941 | 0.7734 | 0.8488 | 0.8094 | 0.9582 | 
### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
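The entity-level precision, recall and F1 reported above imply merging token-level BIO predictions into entity spans before scoring; a minimal illustrative sketch of that aggregation (not the model's own evaluation code, which presumably relies on a library such as `seqeval`):

```python
def bio_to_spans(tags):
    """Merge a BIO tag sequence into (label, start, end) entity spans.

    `end` is exclusive; an I- tag whose label does not continue the open
    span starts a new span rather than raising an error.
    """
    spans = []
    start, label = None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-") or (tag.startswith("I-") and label != tag[2:]):
            if label is not None:
                spans.append((label, start, i))  # close the previous span
            start, label = i, tag[2:]
        elif tag == "O":
            if label is not None:
                spans.append((label, start, i))
            start, label = None, None
    if label is not None:  # flush a span that runs to the end
        spans.append((label, start, len(tags)))
    return spans
```

Precision/recall/F1 are then computed over these spans rather than over individual tokens, which is why they differ from token accuracy.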
huggingtweets/brittanyventi
huggingtweets
2023-05-02T16:58:42Z
135
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-05-02T16:58:34Z
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1648180430749810688/J40wFJHZ_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Brittany Venti ♡⋆˙⊹</div> <div style="text-align: center; font-size: 14px;">@brittanyventi</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Brittany Venti ♡⋆˙⊹. 
| Data | Brittany Venti ♡⋆˙⊹ | | --- | --- | | Tweets downloaded | 2437 | | Retweets | 463 | | Short tweets | 311 | | Tweets kept | 1663 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/x4s95ezp/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @brittanyventi's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/c1zgpsy3) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/c1zgpsy3/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/brittanyventi') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
d2i-pti-iu/iris_test
d2i-pti-iu
2023-05-02T16:55:53Z
0
0
sklearn
[ "sklearn", "joblib", "tabular-classification", "region:us" ]
tabular-classification
2023-05-02T16:43:42Z
--- widget: structuredData: SepalLengthCm: - 5.1 - 4.9 - 6.2 SepalWidthCm: - 3.5 - 3 - 3.4 PetalLengthCm: - 1.4 - 1.4 - 5.4 PetalWidthCm: - 0.2 - 0.2 - 2.3 tags: - tabular-classification - sklearn --- ### How to use ```python import joblib from huggingface_hub import hf_hub_download from sklearn.datasets import load_iris REPO_ID = "d2i-pti-iu/iris_test" FILENAME = "iris_svm.joblib" model = joblib.load(hf_hub_download(repo_id=REPO_ID, filename=FILENAME)) iris = load_iris() X = iris.data[:3] Y = iris.target[:3] labels = model.predict(X) ``` #### Eval ```python model.score(X, Y) ```
Arro94/nova-model-benchmark
Arro94
2023-05-02T16:50:00Z
105
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "sv", "license:gpl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-02T16:38:35Z
--- license: gpl-3.0 language: - sv pipeline_tag: text-classification --- Scores (avg. weighted) - Accuracy: 0.9007633587786259 - Precision: 0.9008606422369183 - Recall: 0.9007633587786259 - F1: 0.9007595035560719 Hyperparams - Max Seq Len: 45 - Batch Size: 16 - Learning Rate: 2e-5 - Epochs: 5 - Warmup Steps: 147 - Weight Decay: 0.01 - Save/Eval Strat: epoch
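The "avg. weighted" scores above weight each class's metric by its support; a small self-contained sketch of the weighted-F1 computation (illustrative, matching scikit-learn's `average='weighted'` definition, not this model's evaluation script):

```python
from collections import Counter


def weighted_f1(y_true, y_pred):
    """Support-weighted F1: per-class F1 weighted by true-class frequency."""
    support = Counter(y_true)
    total = 0.0
    for cls, n in support.items():
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == cls)
        predicted = sum(1 for p in y_pred if p == cls)
        prec = tp / predicted if predicted else 0.0
        rec = tp / n
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        total += n * f1  # weight by class support
    return total / len(y_true)
```

Weighted precision and recall follow the same pattern, substituting the per-class precision or recall for the per-class F1.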
flochaz/oasst-sft-4-pythia-12b-epoch-3.5
flochaz
2023-05-02T16:47:05Z
5
0
transformers
[ "transformers", "pytorch", "gpt_neox", "text-generation", "sft", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-04-26T07:05:44Z
--- license: apache-2.0 language: - en tags: - sft pipeline_tag: text-generation widget: - text: <|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|> - text: <|prompter|>What's the Earth total population<|endoftext|><|assistant|> - text: <|prompter|>Write a story about future of AI development<|endoftext|><|assistant|> --- # Open-Assistant SFT-4 12B Model This is the 4th iteration English supervised-fine-tuning (SFT) model of the [Open-Assistant](https://github.com/LAION-AI/Open-Assistant) project. It is based on a Pythia 12B that was fine-tuned on human demonstrations of assistant conversations collected through the [https://open-assistant.io/](https://open-assistant.io/) human feedback web app before March 25, 2023. ## Model Details - **Developed by:** [Open-Assistant Contributors](https://open-assistant.io/) - **Model type:** Transformer-based Language Model - **Language:** English - **Finetuned from:** [EleutherAI / pythia-12b-deduped](https://huggingface.co/EleutherAI/pythia-12b-deduped) - **Code:** [Open-Assistant/model/model_training](https://github.com/LAION-AI/Open-Assistant/tree/main/model/model_training) - **Demo:** [Continuations for 250 random prompts](https://open-assistant.github.io/oasst-model-eval/?f=https%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Foasst-sft%2F2023-04-03_andreaskoepf_oasst-sft-4-pythia-12b-epoch-3_5_sampling_noprefix_lottery.json%0Ahttps%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Fchat-gpt%2F2023-04-11_gpt-3.5-turbo_lottery.json) - **License:** Apache 2.0 - **Contact:** [Open-Assistant Discord](https://ykilcher.com/open-assistant-discord) ## Prompting Two special tokens are used to mark the beginning of user and assistant turns: `<|prompter|>` and `<|assistant|>`. Each turn ends with a `<|endoftext|>` token. 
Input prompt example: ``` <|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|> ``` The input ends with the `<|assistant|>` token to signal that the model should start generating the assistant reply. ## Dev Details - wandb: https://wandb.ai/open-assistant/supervised-finetuning/runs/770a0t41 - base model: [andreaskoepf/pythia-12b-pre-2000](https://huggingface.co/andreaskoepf/pythia-12b-pre-2000) - checkpoint: 4000 steps command: `deepspeed trainer_sft.py --configs defaults reference-data reference-pythia-12b --cache_dir /home/ubuntu/data_cache --output_dir .saved/oasst-sft-3-pythia-12b-reference_2kpre --num_train_epochs 8 --residual_dropout 0.2 --deepspeed --use_flash_attention true --model_name andreaskoepf/pythia-12b-pre-2000` data: ``` reference-data: datasets: - oasst_export: lang: "bg,ca,cs,da,de,en,es,fr,hr,hu,it,nl,pl,pt,ro,ru,sl,sr,sv,uk" input_file_path: 2023-03-25_oasst_research_ready_synth_labels.jsonl.gz val_split: 0.05 - alpaca sort_by_length: false use_custom_sampler: false ``` pythia: ``` reference-pythia-12b: dtype: fp16 log_dir: "pythia_log_12b" learning_rate: 6e-6 model_name: EleutherAI/pythia-12b-deduped output_dir: pythia_model_12b weight_decay: 0.0 max_length: 2048 warmup_steps: 100 gradient_checkpointing: true gradient_accumulation_steps: 2 per_device_train_batch_size: 4 per_device_eval_batch_size: 4 eval_steps: 100 save_steps: 1000 num_train_epochs: 8 save_total_limit: 4 ``` zero config: ``` { "fp16": { "enabled": "auto", "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 }, "bf16": { "enabled": "auto" }, "optimizer": { "type": "AdamW", "params": { "lr": "auto", "betas": "auto", "eps": "auto", "weight_decay": "auto" } }, "scheduler": { "type": "WarmupDecayLR", "params": { "warmup_min_lr": "auto", "warmup_max_lr": "auto", "warmup_num_steps": "auto", "total_num_steps": "auto" } }, "zero_optimization": { "stage": 2, "allgather_partitions": true, 
"allgather_bucket_size": 1e9, "overlap_comm": false, "reduce_scatter": true, "reduce_bucket_size": 1e9, "contiguous_gradients": true }, "gradient_accumulation_steps": "auto", "gradient_clipping": "auto", "steps_per_print": 2000, "train_batch_size": "auto", "train_micro_batch_size_per_gpu": "auto", "wall_clock_breakdown": false } ```
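The special-token convention described in the Prompting section can be wrapped in a small helper; a sketch (the helper name is illustrative, not part of the released code):

```python
def build_prompt(turns):
    """Format alternating (role, text) turns into the OA SFT-4 prompt layout.

    Each turn ends with <|endoftext|>; a trailing <|assistant|> token asks
    the model to generate the next assistant reply.
    """
    token = {"user": "<|prompter|>", "assistant": "<|assistant|>"}
    parts = [f"{token[role]}{text}<|endoftext|>" for role, text in turns]
    return "".join(parts) + "<|assistant|>"
```

For example, `build_prompt([("user", "What is a meme?")])` reproduces the input-prompt layout shown above.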
ausboss/llama7b-wizardlm-unfiltered-lora
ausboss
2023-05-02T16:34:17Z
0
0
null
[ "region:us" ]
null
2023-05-02T15:35:18Z
LoRA fine-tune trained on this [dataset](https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered). Prompt template: ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {data_point["instruction"]} ### Response: {data_point["output"]} ```
markeidsaune/Reinforce-CartPole-v1
markeidsaune
2023-05-02T16:25:38Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-05-02T16:02:51Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
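A Reinforce agent is trained on discounted returns computed backwards over each episode; a minimal sketch of that step (illustrative, not the course's exact implementation):

```python
def discounted_returns(rewards, gamma=0.99):
    """G_t = r_t + gamma * G_{t+1}, computed backwards over an episode."""
    returns = []
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    returns.reverse()  # restore chronological order
    return returns
```

The policy gradient is then estimated by weighting each step's log-probability by its (usually normalized) return.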
duncan93/video
duncan93
2023-05-02T16:04:21Z
0
0
asteroid
[ "asteroid", "chemistry", "biology", "music", "medical", "code", "text-to-video", "an", "dataset:OpenAssistant/oasst1", "license:openrail", "region:us" ]
text-to-video
2023-05-02T16:01:59Z
--- license: openrail datasets: - OpenAssistant/oasst1 language: - an metrics: - bleu library_name: asteroid pipeline_tag: text-to-video tags: - chemistry - biology - music - medical - code ---
Rinnz26/ahorhehe
Rinnz26
2023-05-02T15:35:17Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-02T15:32:29Z
--- license: creativeml-openrail-m ---
irow/qlearning-taxi-v2
irow
2023-05-02T15:33:27Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-05-02T15:22:07Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: qlearning-taxi-v2 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="irow/qlearning-taxi-v2", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
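Once the pickled Q-table is loaded, acting is just a greedy argmax over the current state's row; a sketch with a toy table (the table values here are illustrative, standing in for `model["qtable"]`):

```python
def greedy_action(qtable, state):
    """Pick the action with the highest Q-value in the given state's row."""
    row = qtable[state]
    return max(range(len(row)), key=row.__getitem__)


# Toy 2-state, 3-action table (illustrative only).
toy_q = [[0.1, 0.7, 0.2],
         [0.5, 0.3, 0.4]]
```

At evaluation time the agent repeatedly calls `greedy_action` on the observed state and steps the environment with the result.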
soumya13/GPT2_CleanDesc_MAKE_v1.5
soumya13
2023-05-02T15:22:50Z
5
0
transformers
[ "transformers", "tf", "gpt2", "text-classification", "generated_from_keras_callback", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-02T00:28:52Z
--- license: mit tags: - generated_from_keras_callback model-index: - name: soumya13/GPT2_CleanDesc_MAKE_v1.5 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # soumya13/GPT2_CleanDesc_MAKE_v1.5 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0052 - Validation Loss: 0.0002 - Train Accuracy: 1.0 - Epoch: 24 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7600, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 4.0208 | 1.7346 | 0.3590 | 0 | | 1.5028 | 0.9649 | 0.6923 | 1 | | 0.8469 | 0.4756 | 0.9231 | 2 | | 0.4664 | 0.1764 | 0.9231 | 3 | | 0.2632 | 0.0836 | 0.9744 | 4 | | 0.1579 | 0.0488 | 0.9744 | 5 | | 0.1298 | 0.0250 | 1.0 | 6 | | 0.0962 | 0.0136 | 1.0 | 7 | | 0.0498 | 0.0041 | 1.0 | 8 | | 0.0520 | 0.0022 | 1.0 | 9 | | 0.0418 | 0.0016 | 1.0 | 10 | | 0.0403 | 0.0013 | 1.0 | 11 | | 0.0281 | 0.0009 | 1.0 | 12 | | 0.0236 | 0.0008 | 1.0 | 13 | | 0.0150 | 0.0008 | 1.0 | 14 | | 0.0173 | 0.0007 | 1.0 
| 15 | | 0.0160 | 0.0005 | 1.0 | 16 | | 0.0302 | 0.0004 | 1.0 | 17 | | 0.0250 | 0.0003 | 1.0 | 18 | | 0.0069 | 0.0003 | 1.0 | 19 | | 0.0241 | 0.0003 | 1.0 | 20 | | 0.0100 | 0.0003 | 1.0 | 21 | | 0.0114 | 0.0002 | 1.0 | 22 | | 0.0172 | 0.0002 | 1.0 | 23 | | 0.0052 | 0.0002 | 1.0 | 24 | ### Framework versions - Transformers 4.28.1 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
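With `power: 1.0` and `cycle: False`, the `PolynomialDecay` configuration above is a linear ramp from 2e-05 down to 0 over 7600 steps; a sketch of the value it yields at a given step, mirroring the Keras formula (illustrative only):

```python
def polynomial_decay(step, initial_lr=2e-05, end_lr=0.0,
                     decay_steps=7600, power=1.0):
    """Keras-style polynomial decay without cycling:
    lr = (initial - end) * (1 - step/decay_steps)**power + end
    """
    step = min(step, decay_steps)  # clamp past the horizon, as cycle=False does
    frac = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * frac ** power + end_lr
```

So halfway through training (step 3800) the learning rate is 1e-05, and it reaches `end_learning_rate` exactly at step 7600.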
irow/q-FrozenLake-v1-4x4-noSlippery
irow
2023-05-02T15:19:41Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-05-02T15:19:39Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="irow/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
SerafalL1/Lesleymlbb
SerafalL1
2023-05-02T15:13:59Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-02T15:08:58Z
--- license: creativeml-openrail-m ---
Sjdan/switch_vl_2
Sjdan
2023-05-02T14:57:54Z
95
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-05-02T14:18:46Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: switch_vl_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # switch_vl_2 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 25 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
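With `lr_scheduler_type: linear` and 1000 warmup steps, the learning rate ramps linearly up to its peak and then decays linearly to zero; a sketch of that schedule (the total step count below is an illustrative assumption, since it depends on dataset size and epochs):

```python
def linear_schedule(step, warmup_steps=1000, total_steps=10000, peak_lr=1e-4):
    """Linear warmup to peak_lr, then linear decay to 0 at total_steps."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```

This matches the shape produced by the `linear` scheduler option in the Trainer: warmup protects the pre-trained wav2vec2 weights early on, and decay stabilizes the final epochs.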
ratish/DBERT_Fault_LR_v2.1
ratish
2023-05-02T14:56:59Z
49
0
transformers
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-02T14:39:55Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: ratish/DBERT_Fault_LR_v2.1 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # ratish/DBERT_Fault_LR_v2.1 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1501 - Validation Loss: 0.6305 - Train Accuracy: 0.7179 - Epoch: 29 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-06, 'decay_steps': 9120, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.6963 | 0.6916 | 0.5128 | 0 | | 0.6774 | 0.6929 | 0.5128 | 1 | | 0.6631 | 0.7000 | 0.5128 | 2 | | 0.6580 | 0.7070 | 0.5128 | 3 | | 0.6409 | 0.7104 | 0.5128 | 4 | | 0.6296 | 0.7015 | 0.5128 | 5 | | 0.6115 | 0.6866 | 0.5128 | 6 | | 0.5940 | 0.6573 | 0.5897 | 7 | | 0.5616 | 0.6263 | 0.5897 | 8 | | 0.5230 | 0.5886 | 0.6667 | 9 | | 0.4890 | 0.5608 | 0.7179 | 10 | | 0.4523 | 0.5386 | 0.7436 | 11 | | 0.4307 | 0.5424 | 0.7179 | 12 | | 0.4013 | 0.5261 | 0.7179 | 13 
| | 0.3893 | 0.4976 | 0.7436 | 14 | | 0.3634 | 0.5459 | 0.6923 | 15 | | 0.3337 | 0.4893 | 0.7436 | 16 | | 0.3243 | 0.5490 | 0.7179 | 17 | | 0.3083 | 0.5091 | 0.7179 | 18 | | 0.2815 | 0.5457 | 0.7179 | 19 | | 0.2654 | 0.5692 | 0.7179 | 20 | | 0.2535 | 0.4808 | 0.7436 | 21 | | 0.2504 | 0.5912 | 0.6923 | 22 | | 0.2132 | 0.6228 | 0.6923 | 23 | | 0.1962 | 0.5834 | 0.7179 | 24 | | 0.2136 | 0.5261 | 0.7692 | 25 | | 0.1895 | 0.6210 | 0.7179 | 26 | | 0.1722 | 0.7140 | 0.7179 | 27 | | 0.1580 | 0.6532 | 0.6923 | 28 | | 0.1501 | 0.6305 | 0.7179 | 29 | ### Framework versions - Transformers 4.28.1 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
Pouheng/aipeheng572
Pouheng
2023-05-02T14:53:34Z
0
0
adapter-transformers
[ "adapter-transformers", "chemistry", "aa", "ay", "dataset:OpenAssistant/oasst1", "license:openrail", "region:us" ]
null
2023-04-28T09:20:56Z
--- license: openrail datasets: - OpenAssistant/oasst1 language: - aa - ay metrics: - accuracy library_name: adapter-transformers tags: - chemistry ---
WilliamADSP/ppo-SnowballTarget
WilliamADSP
2023-05-02T14:42:50Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2023-05-02T14:42:44Z
--- library_name: ml-agents tags: - SnowballTarget - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget 2. Find your model_id: WilliamADSP/ppo-SnowballTarget 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
ratish/DBERT_Fault_LR_v2
ratish
2023-05-02T14:37:23Z
46
0
transformers
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-02T14:28:12Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: ratish/DBERT_Fault_LR_v2 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # ratish/DBERT_Fault_LR_v2 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.4241 - Validation Loss: 0.5145 - Train Accuracy: 0.7692 - Epoch: 14 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-06, 'decay_steps': 2128, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.6872 | 0.6932 | 0.5128 | 0 | | 0.6761 | 0.6965 | 0.5128 | 1 | | 0.6638 | 0.7020 | 0.5128 | 2 | | 0.6590 | 0.7093 | 0.5128 | 3 | | 0.6532 | 0.7112 | 0.5128 | 4 | | 0.6469 | 0.7067 | 0.5128 | 5 | | 0.6359 | 0.7101 | 0.5128 | 6 | | 0.6236 | 0.7103 | 0.5128 | 7 | | 0.6120 | 0.6865 | 0.5641 | 8 | | 0.5913 | 0.6576 | 0.5641 | 9 | | 0.5618 | 0.6206 | 0.5897 | 10 | | 0.5286 | 0.5765 | 0.5897 | 11 | | 0.4931 | 0.5502 | 0.6667 | 12 | | 0.4574 | 0.5154 | 0.7692 | 13 | | 
0.4241 | 0.5145 | 0.7692 | 14 | ### Framework versions - Transformers 4.28.1 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
LLMs/AlpacaGPT4-LoRA-13B-elina
LLMs
2023-05-02T14:08:35Z
0
12
null
[ "llama", "llm", "text2text-generation", "license:apache-2.0", "region:us" ]
text2text-generation
2023-05-01T13:51:16Z
--- license: apache-2.0 pipeline_tag: text2text-generation tags: - llama - llm --- This is a LoRA checkpoint fine-tuned with the following CLI. The fine-tuning process is logged in the [W&B dashboard](https://wandb.ai/chansung18/alpaca_lora/runs/pgvpsy2x?workspace=user-chansung18). I used a DGX workstation with 8 x A100 (40G). ```console python finetune.py \ --base_model='elinas/llama-13b-hf-transformers-4.29' \ --data_path='alpaca_data_gpt4.json' \ --num_epochs=10 \ --cutoff_len=1024 \ --group_by_length \ --output_dir='./lora-alpacagpt4-13b-elinas' \ --lora_target_modules='[q_proj,k_proj,v_proj,o_proj]' \ --lora_r=16 \ --lora_alpha=32 \ --batch_size=1024 \ --micro_batch_size=40 ``` This LoRA checkpoint is recommended to be used with `transformers >= 4.29`, which should currently (4/30/2023) be installed with the following command. ```console pip install git+https://github.com/huggingface/transformers.git ``` Please find the example usage and results in the accompanying [notebook](https://huggingface.co/LLMs/AlpacaGPT4-LoRA-13B-elina/blob/main/alpacagpt4-lora-13b.ipynb). The example outputs are listed below as well. **1. Instruction: Compose an engaging travel blog post about a recent trip to Hawaii, highlighting cultural experiences and must-see attractions.** I recently took the journey of my lifetime - a weeklong adventure through one of nature's most beautiful creations: Hawaii! I had heard so much about this magical island paradise before going but nothing could prepare me for how truly breathtaking it was in real life. From hiking along verdant mountain trails to swimming with turtles beneath clear turquoise waters, every moment felt like something out of a fairytale. But above all else, what made our visit to Hawaii so memorable were its rich culture and traditions that we experienced firsthand.
One of the highlights of our time there was learning more about Native Hawaiian culture at Puuhonua O Honaunau National Historical Park, also known as "the place of refuge." Here, ancient laws were upheld which allowed wrongdoers who broke certain rules or protocols (like stealing) the chance for redemption if they reached the sacred grounds within 24 hours. This history is fascinating to learn about, particularly since many aspects of traditional Hawaiian culture are still very alive today. The park offers guided tours led by passionate volunteer tour guides who give great insight into the significance behind each feature on site, such as the sacred temple and carved stones called ki‘is. You can even participate in ceremonies held here on special occasions throughout the year. It's not often you get to connect with a destination’s history and heritage in such an intimate way – definitely don’t miss out on this unique opportunity when planning your own trip! Another unforgettable experience was enjoying some local cuisine. We tried shave ice and malasadas from the famous Leonard's Bakery right near the airport after landing in Honolulu. These iconic desserts have been delighting visitors for years, and it doesn't take long to see why – the delicate flavors of strawberry, vanilla, and coconut will leave you craving more. Later during our stay, we headed to Old Lahaina Luau located just outside Maui to partake in their signature Polynesian feast. Enjoy live entertainment while dining on Kalua pig cooked underground for eight whole hours and poi prepared using sweet potatoes instead of kalo. To top things off, indulge in fresh fruits like mangoes, papaya, and pineapples, sourced straight from Hawaiian farms and delivered daily. Overall, it was the perfect introduction to Hawaiian food, full of bright colors, complex textures, and bold flavor combinations that left us feeling happy and satisfied. 
The final experience that really stood out for me was visiting Pipiwai Trail at Kipuka Mahoe State Monument, accessible via Waimoku Falls Trailhead. While driving up winding roads flanked by lush green jungle foliage, we couldn't help but be awestruck by the beauty surrounding us. Eventually reaching the trailhead, we embarked upon the hike itself, passing by cascading waterfalls, peaceful pools, and towering bamboo. After several miles, we arrived at a large open field filled with colorful flowers, where we saw locals setting up picnic blankets alongside wild chickens pecking away at seeds scattered around them. There's no better way to end a day than relaxing amidst nature and taking in everything she has to offer. We spent over three hours exploring the entire area before making our way back down, grateful for the serene atmosphere and stunning scenery we had encountered along the way.

Overall, our trip to Hawaii lived up to expectations and then some! Every aspect of this wonderful destination was beyond captivating - from natural landscapes to vibrant cultures and cuisines - making it impossible not to fall in love. So if you ever find yourself dreaming of escaping to a tropical paradise, book your ticket now, because once you go, you won't want to come home. Aloha!

**2. Instruction: Can you help me write a formal email to a potential business partner proposing a joint venture?**

Dear [Potential Business Partner],

Greetings! I am writing this message on behalf of my company, which specializes in providing market analysis services. Our aim is to use our extensive experience and knowledge in the industry to deliver exceptional results for customers worldwide. We are seeking new opportunities to expand our operations by forming strategic partnerships with companies like yours that share similar objectives. It was brought to my attention that your organization has a proven track record in customer satisfaction and quality assurance within its own area of expertise.
This makes us believe there may be significant synergies between both entities when it comes to working together towards mutually beneficial outcomes. I propose we explore the possibilities of long-term cooperation that would allow each party to bring their unique skill sets into play while taking advantage of the other's strengths. Our goal will be to provide top-notch analysis of current trends and emerging markets, while also bringing improved efficiency to all our clients. Your input and involvement will no doubt add value to such an endeavor. With these thoughts in mind, let's get started exploring what could become an effective collaboration opportunity for both parties involved. Please feel free to contact me anytime if you have questions or need additional information from my end.

**3. Instruction: What factors would you consider when designing an inclusive and accessible public transportation system?**

When designing a more inclusive and accessible public transit system, there are several key components to take into account.
These include:

1. Physical accessibility for people with disabilities, such as wheelchair ramps or lifts in vehicles;
2. Communication systems that provide information on routes, schedules, stops and alerts in multiple formats (e.g., braille, audio, large print);
3. Safe and secure infrastructure, including cameras, lighting, and security staff/services at bus and train stations;
4. Service animal policy – ensuring people can travel with their service animals without discrimination;
5. Accessible fare payment options – including physical ticket vending machines usable by visually impaired customers, pre-paid cards and cashless payments (such as NFC/contactless cards), mobile apps integrating with the ticket validation process, etc.;
6. Information about how the transportation network functions and relevant rules, e.g. boarding etiquette and protocol, behaviour guidelines during emergencies or evacuations, policies around bringing food and drink onto the vehicle, luggage limitations, etc.;
7. Training of all personnel working within the transit agency on providing services to passengers with various needs, sensitivity towards individuals with disability, and awareness of safety measures and protocols during crisis situations involving persons with disability.

**4. Instruction: What are some potential implications of using a single-use plastic bottle versus a reusable bottle on both the environment and human health?**

Using a single-use plastic bottle instead of a reusable one has various negative impacts on our ecosystem, as well as public health. Some key concerns are outlined below:

1. Plastics waste in landfills – The accumulation of single-use water bottles is rapidly growing, with an estimated 30 billion being thrown away every year worldwide [1].
In addition to the obvious visual pollution that accompanies discarded plastics littering the streets, these objects take up valuable space in already overcrowded landfill sites, where they may remain for hundreds or thousands of years before breaking down into smaller pieces [2][3]. Once released into nature from the site, these microplastics can be carried by wind and rivers and eventually make their way into the ocean, harming aquatic life and entering the food chain [4]. Even when incinerated, burning plastics releases hazardous chemical compounds such as benzo(a)pyrene, a known carcinogen [5], and polycyclic aromatic hydrocarbons (PAHs), which are mutagenic and toxic to humans at high doses [6]. Thus, the presence of large quantities of plastics not only detracts from the natural beauty of our surroundings but also poses significant risks to the health of living organisms, and ultimately us.

2. Petroleum usage – The production of most plastics requires crude oil, a finite resource which is extracted through dangerous methods like hydraulic fracturing ('fracking') [7][8][9]. Additionally, petrochemical manufacture contributes significantly to air pollutants, including carbon dioxide emissions [10], nitrogen oxides (NOx) and sulfur dioxide (SO2). These greenhouse gases contribute to climate change, causing sea level rise, more intense storms, droughts and heat waves [11]. Given its ubiquity across industries and products, reducing plastic consumption represents an important step toward mitigating the damage caused by fossil fuel extraction and utilization. Furthermore, lessening demand for plastic items will reduce pressure on companies to seek new sources of raw material, potentially curbing deforestation [12].

3. Chemical leaching – Many types of plastics contain additives like Bisphenol A (BPA) [13] that have been linked to hormonal imbalances, infertility and even certain types of cancer upon chronic exposure [14].
Reports show that BPS, used as a replacement for BPA in some consumer goods [15], is just as harmful [16]; thus the cycle continues. With each use and subsequent disposal, the chemical components within the plastic migrate, seeping into soil and groundwater and finally finding their way into our bodies, either via ingestion or contact with skin or breathing. Ultimately, this creates greater risk for long-term disease development.

4. Impact on wildlife – Animals often mistake plastic debris for food, consuming it without realizing the danger [17]. This results in obstructions, injuries and death due to starvation. For instance, fish commonly mistake bits of floating plastic for the zooplankton they feed on, taking them for actual prey [18]. Marine animals and birds consume pieces of plastic, mistakenly thinking they're jellyfish [19][20], clogging vital organs. Turtles are particularly prone to entanglement and suffocation because plastic bags resemble jellyfish [21], while balloons appear attractive to monkeys, who unknowingly swallow them, leading to painful digestive issues [22][23]. All in all, plastics cause unnecessary suffering to innocent creatures simply trying to survive.

In conclusion, although single-use plastic bottles seem convenient at first glance, their effects on our planet and health far outweigh any benefits. By making responsible choices and switching to alternatives like stainless steel, glass or aluminium, we can collectively help slow the pace of environmental destruction and improve overall quality of life for ourselves and future generations.
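As a rough sanity check on the adapter size implied by the CLI flags above (`--lora_r=16` over the four attention projections), the trainable-parameter count can be estimated offline. This is a back-of-the-envelope sketch assuming standard LLaMA-13B dimensions (hidden size 5120, 40 decoder layers) rather than numbers read from the checkpoint itself:

```python
# Back-of-the-envelope count of trainable LoRA parameters for the run above.
# Assumes LLaMA-13B dimensions (hidden size 5120, 40 layers) and that every
# q/k/v/o projection is a square 5120x5120 matrix -- a sketch, not values
# read from the actual checkpoint.

def lora_param_count(d_in: int, d_out: int, r: int) -> int:
    """LoRA adds two low-rank factors per module: A (r x d_in) and B (d_out x r)."""
    return r * d_in + d_out * r

HIDDEN = 5120       # LLaMA-13B hidden size
LAYERS = 40         # LLaMA-13B decoder layers
TARGET_MODULES = 4  # q_proj, k_proj, v_proj, o_proj (from --lora_target_modules)
R = 16              # --lora_r

per_module = lora_param_count(HIDDEN, HIDDEN, R)
total = per_module * TARGET_MODULES * LAYERS
print(f"{per_module:,} params/module, {total:,} trainable params total")
```

Under these assumptions the adapter comes to roughly 26M trainable parameters, about 0.2% of the 13B base weights, which is what keeps the fine-tune feasible on a single DGX node.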
modelId: dylwil3/poca-SoccerTwos
author: dylwil3
last_modified: 2023-05-02T14:07:31Z
downloads: 4
likes: 0
library_name: ml-agents
tags: ["ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us"]
pipeline_tag: reinforcement-learning
createdAt: 2023-05-02T14:05:33Z
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---

# **poca** Agent playing **SoccerTwos**

This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The documentation: https://github.com/huggingface/ml-agents#get-started

We wrote a complete tutorial on training your first agent using ML-Agents and publishing it to the Hub.

### Resume the training

```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your agent play

You can watch your agent **playing directly in your browser**:

1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: dylwil3/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
modelId: hanafuusen2001/ChikMix
author: hanafuusen2001
last_modified: 2023-05-02T13:35:40Z
downloads: 0
likes: 10
library_name: null
tags: ["license:other", "region:us"]
pipeline_tag: null
createdAt: 2023-04-07T07:46:06Z
---
license: other
---

# Disclaimer

The models in this folder are not made by me, and the copyright belongs to the original authors (see http://www.civitai.com for details on the copyright of each model). I uploaded them to this folder only for the convenience of extracting resources online, not for profit.

# List of Models

All the models in this folder are detailed in the table below.

| Model Name | Civitai Page Link | Civitai Download Link |
|---|---|---|
| chikmix_v3.safetensors | https://civitai.com/models/9871?modelVersionId=59409 | https://civitai.com/api/download/models/59409 |
| chikmix_v2.safetensors | https://civitai.com/models/9871?modelVersionId=20663 | https://civitai.com/api/download/models/20663 |
| chikmix_v1.safetensors | https://civitai.com/models/9871?modelVersionId=11727 | https://civitai.com/api/download/models/11727 |
| chikmix2d_v1.ckpt | https://civitai.com/models/16808/chikmix2d | https://civitai.com/api/download/models/19838 |

# chikmix_v3

# chikmix_v2

# chikmix_v1

# chikmix2d_v1
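Since the download links in the table all follow the same URL pattern, they can be generated from the version IDs alone. A minimal sketch (the URL pattern and version IDs are taken directly from the table; the `curl` invocation it prints is illustrative):

```python
# Build Civitai download commands from the version IDs listed in the table.
# The URL pattern and IDs come straight from the table's download links.

CIVITAI_DOWNLOAD = "https://civitai.com/api/download/models/{version_id}"

MODELS = {
    "chikmix_v3.safetensors": 59409,
    "chikmix_v2.safetensors": 20663,
    "chikmix_v1.safetensors": 11727,
    "chikmix2d_v1.ckpt": 19838,
}

for filename, version_id in MODELS.items():
    url = CIVITAI_DOWNLOAD.format(version_id=version_id)
    # -L follows the redirect Civitai issues to the actual file host
    print(f"curl -L -o {filename} {url}")
```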
modelId: sinword/autotrain-face_de-identification-54735127998
author: sinword
last_modified: 2023-05-02T13:34:19Z
downloads: 149
likes: 2
library_name: transformers
tags: ["transformers", "pytorch", "vit", "image-classification", "autotrain", "vision", "dataset:sinword/autotrain-data-face_de-identification", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us"]
pipeline_tag: image-classification
createdAt: 2023-05-02T13:28:27Z
---
tags:
- autotrain
- vision
- image-classification
datasets:
- sinword/autotrain-data-face_de-identification
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
  example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
  example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
  example_title: Palace
co2_eq_emissions:
  emissions: 2.3215177460098726
---

# Model Trained Using AutoTrain

- Problem type: Multi-class Classification
- Model ID: 54735127998
- CO2 Emissions (in grams): 2.3215

## Validation Metrics

- Loss: 0.032
- Accuracy: 0.993
- Macro F1: 0.990
- Micro F1: 0.993
- Weighted F1: 0.993
- Macro Precision: 0.990
- Micro Precision: 0.993
- Weighted Precision: 0.993
- Macro Recall: 0.990
- Micro Recall: 0.993
- Weighted Recall: 0.993
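A note on reading the metrics above: for single-label multi-class classification, micro-averaged precision, recall, and F1 all reduce to plain accuracy, which is why the card reports the same value (0.993) for all of them. A minimal illustration on toy labels:

```python
# In single-label multi-class classification, every misclassification is
# simultaneously one false positive (for the predicted class) and one false
# negative (for the true class). Pooling TP/FP/FN over all classes therefore
# makes micro precision == micro recall == micro F1 == accuracy.

y_true = [0, 0, 1, 1, 2, 2, 2, 1]
y_pred = [0, 1, 1, 1, 2, 2, 0, 1]

tp = sum(t == p for t, p in zip(y_true, y_pred))  # pooled true positives
fp = fn = len(y_true) - tp                        # each error is one FP and one FN

micro_precision = tp / (tp + fp)
micro_recall = tp / (tp + fn)
micro_f1 = 2 * micro_precision * micro_recall / (micro_precision + micro_recall)
accuracy = tp / len(y_true)

print(micro_precision, micro_recall, micro_f1, accuracy)  # all four identical
```

The macro-averaged numbers (0.990) differ because macro averaging computes the metric per class first and then averages, weighting rare classes equally with common ones.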
modelId: shehan97/mobilevitv2-1.0-voc-deeplabv3
author: shehan97
last_modified: 2023-05-02T13:31:27Z
downloads: 834
likes: 0
library_name: transformers
tags: ["transformers", "pytorch", "mobilevitv2", "vision", "image-segmentation", "arxiv:2206.02680", "arxiv:1706.05587", "license:other", "endpoints_compatible", "region:us"]
pipeline_tag: image-segmentation
createdAt: 2023-05-02T09:36:36Z
---
license: other
library_name: transformers
tags:
- vision
- image-segmentation
---

# MobileViTv2 + DeepLabv3 (shehan97/mobilevitv2-1.0-voc-deeplabv3)

<!-- Provide a quick summary of what the model is/does. -->

MobileViTv2 model pre-trained on PASCAL VOC at resolution 512x512. It was introduced in [Separable Self-attention for Mobile Vision Transformers](https://arxiv.org/abs/2206.02680) by Sachin Mehta and Mohammad Rastegari, and first released in [this](https://github.com/apple/ml-cvnets) repository. The license used is the [Apple sample code license](https://github.com/apple/ml-cvnets/blob/main/LICENSE).

Disclaimer: The team releasing MobileViT did not write a model card for this model, so this model card has been written by the Hugging Face team.

### Model Description

<!-- Provide a longer summary of what this model is. -->

MobileViTv2 is constructed by replacing the multi-headed self-attention in MobileViT with separable self-attention. The model in this repo adds a [DeepLabV3](https://arxiv.org/abs/1706.05587) head to the MobileViT backbone for semantic segmentation.

### Intended uses & limitations

You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?search=mobilevitv2) to look for fine-tuned versions on a task that interests you.
### How to use

Here is how to use this model:

```python
from transformers import AutoImageProcessor, MobileViTV2ForSemanticSegmentation
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = AutoImageProcessor.from_pretrained("shehan97/mobilevitv2-1.0-voc-deeplabv3")
model = MobileViTV2ForSemanticSegmentation.from_pretrained("shehan97/mobilevitv2-1.0-voc-deeplabv3")

inputs = image_processor(images=image, return_tensors="pt")
outputs = model(**inputs)

logits = outputs.logits
predicted_mask = logits.argmax(1).squeeze(0)
```

Currently, both the image processor and model support PyTorch.

## Training data

The MobileViT + DeepLabV3 model was pretrained on [ImageNet-1k](https://huggingface.co/datasets/imagenet-1k), a dataset consisting of 1 million images and 1,000 classes, and then fine-tuned on the [PASCAL VOC2012](http://host.robots.ox.ac.uk/pascal/VOC/) dataset.

### BibTeX entry and citation info

```bibtex
@inproceedings{vision-transformer,
  title  = {Separable Self-attention for Mobile Vision Transformers},
  author = {Sachin Mehta and Mohammad Rastegari},
  year   = {2022},
  url    = {https://arxiv.org/abs/2206.02680}
}
```
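The predicted mask produced by the usage snippet comes out at the model's output resolution, which is lower than the 512x512 input. Below is a sketch of typical post-processing (upsampling the logits and mapping class indices to the 21 PASCAL VOC labels), using random NumPy stand-in logits so the shapes can be shown without downloading the model; the 32x32 logit resolution is an assumption for illustration only.

```python
import numpy as np

# The 21 PASCAL VOC classes (background + 20 object categories).
VOC_CLASSES = [
    "background", "aeroplane", "bicycle", "bird", "boat", "bottle", "bus",
    "car", "cat", "chair", "cow", "diningtable", "dog", "horse", "motorbike",
    "person", "pottedplant", "sheep", "sofa", "train", "tvmonitor",
]

# Stand-in logits with an assumed (batch, classes, 32, 32) shape; the real
# values would come from outputs.logits in the snippet above.
rng = np.random.default_rng(0)
logits = rng.normal(size=(1, len(VOC_CLASSES), 32, 32))

# Nearest-neighbour upsample back to the 512x512 input resolution,
# then take the per-pixel argmax over the class dimension.
upsampled = logits.repeat(16, axis=2).repeat(16, axis=3)
mask = upsampled.argmax(axis=1).squeeze(0)  # (512, 512) array of class indices

present = sorted({VOC_CLASSES[i] for i in np.unique(mask)})
print(mask.shape, len(present))
```

In practice one would use a proper bilinear interpolation (e.g. `torch.nn.functional.interpolate`) on the logits before the argmax, but the shape bookkeeping is the same.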