Dataset columns (value ranges as reported by the viewer):

- modelId: string (length min 5, max 139)
- author: string (length min 2, max 42)
- last_modified: timestamp[us, tz=UTC] (min 2020-02-15 11:33:14, max 2025-08-29 12:28:39)
- downloads: int64 (min 0, max 223M)
- likes: int64 (min 0, max 11.7k)
- library_name: string (526 distinct values)
- tags: list (length min 1, max 4.05k)
- pipeline_tag: string (55 distinct values)
- createdAt: timestamp[us, tz=UTC] (min 2022-03-02 23:29:04, max 2025-08-29 12:28:30)
- card: string (length min 11, max 1.01M)
climatebert/distilroberta-base-climate-f
climatebert
2023-05-04T13:05:20Z
1,072
36
transformers
[ "transformers", "pytorch", "safetensors", "roberta", "fill-mask", "climate", "en", "arxiv:2110.12010", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
---
language: en
license: apache-2.0
tags:
- climate
---

# Model Card for distilroberta-base-climate-f

## Model Description

This is the ClimateBERT language model based on the FULL-SELECT sample selection strategy.

*Note: We generally recommend choosing this language model over those based on the other sample selection strategies (unless you have good reasons not to). This is also the only language model we will update from time to time.*

Using the [DistilRoBERTa](https://huggingface.co/distilroberta-base) model as a starting point, the ClimateBERT language model is additionally pre-trained on a text corpus comprising climate-related research paper abstracts, corporate and general news, and reports from companies. The underlying methodology can be found in our [language model research paper](https://arxiv.org/abs/2110.12010).

*Update September 2, 2022: Now additionally pre-trained on an even larger text corpus, comprising >2M paragraphs. If you are looking for the language model before the update (i.e. for reproducibility), just use an older commit like [6be4fbd](https://huggingface.co/climatebert/distilroberta-base-climate-f/tree/6be4fbd3fedfd78ccb3c730c1f166947fbc940ba).*

## Climate performance model card

| distilroberta-base-climate-f | |
|--------------------------------------------------------------------------|----------------|
| 1. Is the resulting model publicly available? | Yes |
| 2. How much time does the training of the final model take? | 48 hours |
| 3. How much time did all experiments take (incl. hyperparameter search)? | 350 hours |
| 4. What was the power of GPU and CPU? | 0.7 kW |
| 5. At which geo location were the computations performed? | Germany |
| 6. What was the energy mix at the geo location? | 470 gCO2eq/kWh |
| 7. How much CO2eq was emitted to train the final model? | 15.79 kg |
| 8. How much CO2eq was emitted for all experiments? | 115.15 kg |
| 9. What is the average CO2eq emission for the inference of one sample? | 0.62 mg |
| 10. Which positive environmental impact can be expected from this work? | This work can be categorized as a building block tool following Jin et al. (2021). It supports the training of NLP models in the field of climate change and can thereby have a positive environmental impact in the future. |
| 11. Comments | Block pruning could decrease CO2eq emissions |

## Citation Information

```bibtex
@inproceedings{wkbl2022climatebert,
    title={{ClimateBERT: A Pretrained Language Model for Climate-Related Text}},
    author={Webersinke, Nicolas and Kraus, Mathias and Bingler, Julia and Leippold, Markus},
    booktitle={Proceedings of AAAI 2022 Fall Symposium: The Role of AI in Responding to Climate Challenges},
    year={2022},
    doi={https://doi.org/10.48550/arXiv.2212.13631},
}
```
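The card above describes a masked-language model served under the `fill-mask` pipeline tag. A minimal usage sketch (not part of the original card; it assumes the standard `transformers` pipeline API and network access to the Hub, and the prompt sentence is an illustrative invention):

```python
from transformers import pipeline

# Download and load the ClimateBERT checkpoint from the Hugging Face Hub.
fill_mask = pipeline("fill-mask", model="climatebert/distilroberta-base-climate-f")

# The pipeline returns the top candidate tokens for the <mask> position
# (RoBERTa-family models use "<mask>" as the mask token).
predictions = fill_mask("Climate change is a major <mask> for humanity.")
for p in predictions:
    print(p["token_str"], round(p["score"], 3))
```

Each prediction is a dict with `token_str`, `score`, and the filled-in `sequence`.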
climatebert/distilroberta-base-climate-d-s
climatebert
2023-05-04T13:05:02Z
135
3
transformers
[ "transformers", "pytorch", "safetensors", "roberta", "fill-mask", "climate", "en", "arxiv:2110.12010", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
---
language: en
license: apache-2.0
tags:
- climate
---

# Model Card for distilroberta-base-climate-d-s

## Model Description

This is the ClimateBERT language model based on the DIV-SELECT and SIM-SELECT sample selection strategies.

*Note: We generally recommend choosing the [distilroberta-base-climate-f](https://huggingface.co/climatebert/distilroberta-base-climate-f) language model over this language model (unless you have good reasons not to).*

Using the [DistilRoBERTa](https://huggingface.co/distilroberta-base) model as a starting point, the ClimateBERT language model is additionally pre-trained on a text corpus comprising climate-related research paper abstracts, corporate and general news, and reports from companies. The underlying methodology can be found in our [language model research paper](https://arxiv.org/abs/2110.12010).

## Climate performance model card

| distilroberta-base-climate-d-s | |
|--------------------------------------------------------------------------|----------------|
| 1. Is the resulting model publicly available? | Yes |
| 2. How much time does the training of the final model take? | 48 hours |
| 3. How much time did all experiments take (incl. hyperparameter search)? | 350 hours |
| 4. What was the power of GPU and CPU? | 0.7 kW |
| 5. At which geo location were the computations performed? | Germany |
| 6. What was the energy mix at the geo location? | 470 gCO2eq/kWh |
| 7. How much CO2eq was emitted to train the final model? | 15.79 kg |
| 8. How much CO2eq was emitted for all experiments? | 115.15 kg |
| 9. What is the average CO2eq emission for the inference of one sample? | 0.62 mg |
| 10. Which positive environmental impact can be expected from this work? | This work can be categorized as a building block tool following Jin et al. (2021). It supports the training of NLP models in the field of climate change and can thereby have a positive environmental impact in the future. |
| 11. Comments | Block pruning could decrease CO2eq emissions |

## Citation Information

```bibtex
@inproceedings{wkbl2022climatebert,
    title={{ClimateBERT: A Pretrained Language Model for Climate-Related Text}},
    author={Webersinke, Nicolas and Kraus, Mathias and Bingler, Julia and Leippold, Markus},
    booktitle={Proceedings of AAAI 2022 Fall Symposium: The Role of AI in Responding to Climate Challenges},
    year={2022},
    doi={https://doi.org/10.48550/arXiv.2212.13631},
}
```
Theju/switch_low_b1_2
Theju
2023-05-04T12:47:46Z
103
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-05-04T12:45:06Z
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: switch_low_b1_2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# switch_low_b1_2

This model is a fine-tuned version of [facebook/wav2vec2-large-960h-lv60-self](https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self) on the None dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 25
- mixed_precision_training: Native AMP

### Framework versions

- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
DmitriyVasiliev/autotrain-mbart-rua-par-and-sent-55389129134
DmitriyVasiliev
2023-05-04T12:35:42Z
122
0
transformers
[ "transformers", "pytorch", "mbart", "text2text-generation", "autotrain", "summarization", "unk", "dataset:DmitriyVasiliev/autotrain-data-mbart-rua-par-and-sent", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2023-05-04T12:22:24Z
---
tags:
- autotrain
- summarization
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- DmitriyVasiliev/autotrain-data-mbart-rua-par-and-sent
co2_eq_emissions:
  emissions: 5.124794195879908
---

# Model Trained Using AutoTrain

- Problem type: Summarization
- Model ID: 55389129134
- CO2 Emissions (in grams): 5.1248

## Validation Metrics

- Loss: 0.777
- Rouge1: 8.583
- Rouge2: 2.417
- RougeL: 8.622
- RougeLsum: 8.558
- Gen Len: 21.878

## Usage

You can use cURL to access this model:

```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/DmitriyVasiliev/autotrain-mbart-rua-par-and-sent-55389129134
```
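The same request as the card's cURL example can be sent from Python. A hedged sketch using `requests` (the endpoint URL and token placeholder are copied verbatim from the card; the `build_request` helper is an illustrative addition, not part of the card):

```python
import json
import requests

# Endpoint exactly as given in the card's cURL example.
API_URL = "https://api-inference.huggingface.co/DmitriyVasiliev/autotrain-mbart-rua-par-and-sent-55389129134"

def build_request(token: str, text: str):
    """Build the headers and JSON body matching the card's cURL call."""
    headers = {"Authorization": f"Bearer {token}",
               "Content-Type": "application/json"}
    return headers, json.dumps({"inputs": text})

def query(token: str, text: str):
    """POST the payload and return the decoded JSON response."""
    headers, body = build_request(token, text)
    return requests.post(API_URL, headers=headers, data=body).json()

# Example (requires a valid token and network access):
# print(query("YOUR_HUGGINGFACE_API_KEY", "I love AutoTrain"))
```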
DataVare/datavare-nsf-to-pst-converter
DataVare
2023-05-04T12:25:25Z
0
0
null
[ "region:us" ]
null
2023-05-04T12:15:49Z
The DataVare NSF to PST Converter is an effective tool that users can run directly on their PC. The utility operates on Windows platforms without any data loss and converts NSF files to PST with ease. A free trial lets users test the app by converting 10 objects. The program can be used by people with and without technical expertise, and it works with any version of Outlook, including MS Outlook 2003, 2007, 2010, 2013, 2016, and 2019. NSF files can be bulk-imported into the PST file format without Outlook installed, and both novice and experienced users can turn to it when they run into problems. The software also runs on any version of Windows. Read more: https://www.datavare.com/software/nsf-to-pst-converter-expert.html
steveabecassis/huji_MediQA
steveabecassis
2023-05-04T12:17:44Z
3
0
transformers
[ "transformers", "bart", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-02-12T19:59:44Z
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: huji_MediQA
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# huji_MediQA

This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 2.6868
- Rouge1: 0.1617
- Rouge2: 0.065
- Rougel: 0.1598
- Rougelsum: 0.1617

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| No log        | 1.0   | 1    | 2.6868          | 0.1617 | 0.065  | 0.1598 | 0.1617    |

### Framework versions

- Transformers 4.26.0.dev0
- Pytorch 1.13.1
- Datasets 2.8.0
- Tokenizers 0.13.2
abhijitgayen/cogo-flan-t5
abhijitgayen
2023-05-04T12:10:55Z
106
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-05-04T11:28:35Z
# Cogo Flan-t5 Model

This is a fine-tuned **FLAN-T5** model, trained on user-admin chat messages from this [DataSet](https://huggingface.co/datasets/abhijitgayen/cogo_chat).
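The card gives no usage code; a minimal inference sketch (an assumption on my part: it relies on the standard `transformers` text2text pipeline, and the prompt is an invented example in the spirit of the chat dataset):

```python
from transformers import pipeline

# Load the fine-tuned FLAN-T5 checkpoint from the Hub (downloads weights on first use).
generator = pipeline("text2text-generation", model="abhijitgayen/cogo-flan-t5")

# Generate a reply to a user message (the prompt here is illustrative).
reply = generator("How do I track my shipment?", max_new_tokens=64)
print(reply[0]["generated_text"])
```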
majic404/majicMIX
majic404
2023-05-04T11:08:24Z
0
22
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-04-27T15:49:56Z
---
license: creativeml-openrail-m
---
VinayakMane47/finetuned-en-to-mar
VinayakMane47
2023-05-04T10:59:22Z
62
0
transformers
[ "transformers", "tf", "marian", "text2text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-05-04T10:34:51Z
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: VinayakMane47/finetuned-en-to-mar
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# VinayakMane47/finetuned-en-to-mar

This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-mr](https://huggingface.co/Helsinki-NLP/opus-mt-en-mr) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 1.5415
- Validation Loss: 1.2289
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 4401, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32

### Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.2978     | 1.5919          | 0     |
| 1.7627     | 1.3188          | 1     |
| 1.5415     | 1.2289          | 2     |

### Framework versions

- Transformers 4.28.1
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
WWWxp/wav2vec2_spoof_dection1
WWWxp
2023-05-04T10:59:02Z
209
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "audio-classification", "generated_from_trainer", "dataset:asvspoof2019", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2023-04-22T08:34:10Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- asvspoof2019
model-index:
- name: wav2vec2_spoof_dection1
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2_spoof_dection1

This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the asvspoof2019 dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 500

### Framework versions

- Transformers 4.28.1
- Pytorch 1.13.1
- Datasets 2.11.0
- Tokenizers 0.12.1
ibm-research/gpt-neo-125m-multiexit
ibm-research
2023-05-04T10:45:23Z
122
0
transformers
[ "transformers", "pytorch", "gpt_neo", "text-generation", "en", "dataset:cc100", "arxiv:2305.01628", "license:mit", "endpoints_compatible", "region:us" ]
text-generation
2023-04-29T11:50:17Z
---
license: mit
datasets:
- cc100
language:
- en
pipeline_tag: text-generation
---

# GPT-Neo-125M Multi-Exit

Pre-trained language model with identical parameters to [gpt-neo-125m](https://huggingface.co/EleutherAI/gpt-neo-125m), but with additional language modeling heads ("exits") connected to different layers of the model.

These 6 additional heads (in layers 2, 4, 6, 8, 10, 12) were trained on the English portion of [CC-100](https://huggingface.co/datasets/cc100) while keeping the original pre-trained model parameters frozen.

The model can be used for the _Autocontrastive Decoding_ text generation approach described in [Gera et al. 2023](https://arxiv.org/abs/2305.01628), for _early-exiting_ approaches, or for other algorithms that consider the next-token predictions of different model layers.

## Usage

Harnessing the additional language modeling heads requires loading the model using the [auto-contrastive-generation library](https://github.com/IBM/auto-contrastive-generation) (`pip install autocontrastive-gen`).

In a nutshell, the user creates a `MultiExitConfiguration` that determines model behavior at training and inference, and then loads the model using the dedicated `AutoMultiExitModel` class. After that, the model can be used with the `transformers` API like any other model. See the [GitHub](https://github.com/IBM/auto-contrastive-generation) for detailed usage instructions.

For example, the code below initializes the model to use _Autocontrastive Decoding_, and then performs text generation in this chosen setting:

```python
from transformers import AutoTokenizer
from autocontrastive_gen.modeling.configuration import MultiExitConfiguration
from autocontrastive_gen.modeling.auto_model import AutoMultiExitModel

# initialize a pre-trained multi-exit model to use auto-contrast between layer 24 and layer 12
multi_exit_config = MultiExitConfiguration(use_original_head=False, contrast_layer_indices=(24, 12))
model = AutoMultiExitModel.from_pretrained("IBM/gpt-neo-125m-multiexit", multi_exit_config=multi_exit_config)

# perform text generation as usual
tokenizer = AutoTokenizer.from_pretrained("IBM/gpt-neo-125m-multiexit")
prompt = tokenizer("humpty dumpty sat on", return_tensors='pt')
generated_ids = model.generate(**prompt, max_new_tokens=15)
print(tokenizer.batch_decode(generated_ids))
```

## Citation

Ariel Gera, Roni Friedman, Ofir Arviv, Chulaka Gunasekara, Benjamin Sznajder, Noam Slonim and Eyal Shnarch. [The Benefits of Bad Advice: Autocontrastive Decoding across Model Layers](https://arxiv.org/abs/2305.01628). ACL 2023.

```bibtex
@inproceedings{gera2023autocontrastive,
    title={The Benefits of Bad Advice: Autocontrastive Decoding across Model Layers},
    author={Gera, Ariel and Friedman, Roni and Arviv, Ofir and Gunasekara, Chulaka and Sznajder, Benjamin and Slonim, Noam and Shnarch, Eyal},
    booktitle={Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
    month={july},
    address={Toronto, Canada},
    year={2023}
}
```
HilbertS/dqn-SpaceInvadersNoFrameskip-v4
HilbertS
2023-05-04T10:28:56Z
7
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-05-04T10:28:16Z
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: SpaceInvadersNoFrameskip-v4
      type: SpaceInvadersNoFrameskip-v4
    metrics:
    - type: mean_reward
      value: 639.00 +/- 224.01
      name: mean_reward
      verified: false
---

# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**

This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).

The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included.

## Usage (with SB3 RL Zoo)

RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib

Install the RL Zoo (with SB3 and SB3-Contrib):

```bash
pip install rl_zoo3
```

```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga HilbertS -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:

```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga HilbertS -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

## Training (with the RL Zoo)

```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga HilbertS
```

## Hyperparameters

```python
OrderedDict([('batch_size', 32),
             ('buffer_size', 100000),
             ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']),
             ('exploration_final_eps', 0.01),
             ('exploration_fraction', 0.1),
             ('frame_stack', 4),
             ('gradient_steps', 1),
             ('learning_rate', 0.0001),
             ('learning_starts', 100000),
             ('n_timesteps', 10000000.0),
             ('optimize_memory_usage', False),
             ('policy', 'CnnPolicy'),
             ('target_update_interval', 1000),
             ('train_freq', 4),
             ('normalize', False)])
```
Cainiao-AI/GreedRL
Cainiao-AI
2023-05-04T10:17:08Z
0
24
null
[ "Deep Reinforcement Learning", "Combinatorial Optimization", "Vehicle Routing Problem", "reinforcement-learning", "license:apache-2.0", "region:us" ]
reinforcement-learning
2023-04-28T02:23:51Z
--- license: apache-2.0 pipeline_tag: reinforcement-learning tags: - Deep Reinforcement Learning - Combinatorial Optimization - Vehicle Routing Problem --- ![](./images/GREEDRL-Logo-Original-640.png) # 🤠GreedRL ## Overview - 🤠GreedRL is a Deep Reinforcement Learning (DRL) based solver that can solve various types of problems, such as TSP, VRPs (CVRP, VRPTW, VRPPD, etc), Order Batching Problem, Knapsack Problem, etc. - 🤠GreedRL achieves very high performance by running on GPU while generating high quality solutions. **1200 times faster** than [Google OR-Tools](https://developers.google.com/optimization) for large-scale (>=1000 nodes) CVRP, and the solution quality is improved by **about 3%**. ## 🏆Award - Entering the finalists of [INFORMS 2021 Franz Edelman Award](https://www.informs.org/Resource-Center/Video-Library/Edelman-Competition-Videos/2021-Edelman-Competition-Videos/2021-Edelman-Finalist-Alibaba) - Obtain [The Second Class Prize of Scientific and Technological Progress Award](https://www.ccf.org.cn/Awards/Awards/2022-11-08/776110.shtml). ## Editions We have delivered the following two editions of 🤠GreedRL for users. - **The Community Edition** is open source and available to [download](https://huggingface.co/Cainiao-AI/GreedRL). - **The Enterprise Edition** has a higher performance implementation than **The Community Edition** (about 50 times faster), especially when solving larg-scale problems. For more informations, please contact <a href="mailto:jiangwen.wjw@alibaba-inc.com">us</a>. 
## Architecture ![](./images/GREEDRL-Framwork_en.png) ## COPs Modeling examples ### Standard problems #### Capacitated Vehicle Routing Problem (CVRP) <details> <summary>CVRP</summary> ```python from greedrl.feature import * from greedrl.variable import * from greedrl.function import * from greedrl import Problem, Solution, Solver from greedrl import runner features = [continuous_feature('task_demand'), continuous_feature('worker_weight_limit'), continuous_feature('distance_matrix'), variable_feature('distance_this_to_task'), variable_feature('distance_task_to_end')] variables = [task_demand_now('task_demand_now', feature='task_demand'), task_demand_now('task_demand_this', feature='task_demand', only_this=True), feature_variable('task_weight'), worker_variable('worker_weight_limit'), worker_used_resource('worker_used_weight', task_require='task_weight'), edge_variable('distance_last_to_this', feature='distance_matrix', last_to_this=True), edge_variable('distance_this_to_task', feature='distance_matrix', this_to_task=True), edge_variable('distance_task_to_end', feature='distance_matrix', task_to_end=True)] class Constraint: def do_task(self): return self.task_demand_this def mask_task(self): # 已经完成的任务 mask = self.task_demand_now <= 0 # 车辆容量限制 worker_weight_limit = self.worker_weight_limit - self.worker_used_weight mask |= self.task_demand_now * self.task_weight > worker_weight_limit[:, None] return mask def finished(self): return torch.all(self.task_demand_now <= 0, 1) class Objective: def step_worker_end(self): return self.distance_last_to_this def step_task(self): return self.distance_last_to_this ``` </details> #### Pickup and Delivery Problem with Time Windows (PDPTW) <details> <summary>PDPTW</summary> ```python from greedrl.model import runner from greedrl.feature import * from greedrl.variable import * from greedrl.function import * from greedrl import Problem, Solution, Solver features = [local_category('task_group'), global_category('task_priority', 2), 
variable_feature('distance_this_to_task'), variable_feature('distance_task_to_end')] variables = [task_demand_now('task_demand_now', feature='task_demand'), task_demand_now('task_demand_this', feature='task_demand', only_this=True), feature_variable('task_weight'), feature_variable('task_group'), feature_variable('task_priority'), feature_variable('task_due_time2', feature='task_due_time'), task_variable('task_due_time'), task_variable('task_service_time'), task_variable('task_due_time_penalty'), worker_variable('worker_basic_cost'), worker_variable('worker_distance_cost'), worker_variable('worker_due_time'), worker_variable('worker_weight_limit'), worker_used_resource('worker_used_weight', task_require='task_weight'), worker_used_resource('worker_used_time', 'distance_matrix', 'task_service_time', 'task_ready_time', 'worker_ready_time'), edge_variable('distance_last_to_this', feature='distance_matrix', last_to_this=True), edge_variable('distance_this_to_task', feature='distance_matrix', this_to_task=True), edge_variable('distance_task_to_end', feature='distance_matrix', task_to_end=True)] class Constraint: def do_task(self): return self.task_demand_this def mask_worker_end(self): return task_group_split(self.task_group, self.task_demand_now <= 0) def mask_task(self): mask = self.task_demand_now <= 0 mask |= task_group_priority(self.task_group, self.task_priority, mask) worker_used_time = self.worker_used_time[:, None] + self.distance_this_to_task mask |= (worker_used_time > self.task_due_time2) & (self.task_priority == 0) # 容量约束 worker_weight_limit = self.worker_weight_limit - self.worker_used_weight mask |= self.task_demand_now * self.task_weight > worker_weight_limit[:, None] return mask def finished(self): return torch.all(self.task_demand_now <= 0, 1) class Objective: def step_worker_start(self): return self.worker_basic_cost def step_worker_end(self): feasible = self.worker_used_time <= self.worker_due_time return self.distance_last_to_this * 
self.worker_distance_cost, feasible def step_task(self): worker_used_time = self.worker_used_time - self.task_service_time feasible = worker_used_time <= self.task_due_time feasible &= worker_used_time <= self.worker_due_time cost = self.distance_last_to_this * self.worker_distance_cost return torch.where(feasible, cost, cost + self.task_due_time_penalty), feasible ``` </details> #### VRP with Time Windows (VRPTW) <details> <summary>VRPTW</summary> ```python from greedrl import Problem, Solution, Solver from greedrl.feature import * from greedrl.variable import * from greedrl.function import * from greedrl.model import runner from greedrl.myenv import VrptwEnv features = [continuous_feature('worker_weight_limit'), continuous_feature('worker_ready_time'), continuous_feature('worker_due_time'), continuous_feature('worker_basic_cost'), continuous_feature('worker_distance_cost'), continuous_feature('task_demand'), continuous_feature('task_weight'), continuous_feature('task_ready_time'), continuous_feature('task_due_time'), continuous_feature('task_service_time'), continuous_feature('distance_matrix')] variables = [task_demand_now('task_demand_now', feature='task_demand'), task_demand_now('task_demand_this', feature='task_demand', only_this=True), feature_variable('task_weight'), feature_variable('task_due_time'), feature_variable('task_ready_time'), feature_variable('task_service_time'), worker_variable('worker_weight_limit'), worker_variable('worker_due_time'), worker_variable('worker_basic_cost'), worker_variable('worker_distance_cost'), worker_used_resource('worker_used_weight', task_require='task_weight'), worker_used_resource('worker_used_time', 'distance_matrix', 'task_service_time', 'task_ready_time', 'worker_ready_time'), edge_variable('distance_last_to_this', feature='distance_matrix', last_to_this=True), edge_variable('distance_this_to_task', feature='distance_matrix', this_to_task=True), edge_variable('distance_task_to_end', feature='distance_matrix', 
task_to_end=True)] class Constraint: def do_task(self): return self.task_demand_this def mask_task(self): # 已经完成的任务 mask = self.task_demand_now <= 0 # 车辆容量限制 worker_weight_limit = self.worker_weight_limit - self.worker_used_weight mask |= self.task_demand_now * self.task_weight > worker_weight_limit[:, None] worker_used_time = self.worker_used_time[:, None] + self.distance_this_to_task mask |= worker_used_time > self.task_due_time worker_used_time = torch.max(worker_used_time, self.task_ready_time) worker_used_time += self.task_service_time worker_used_time += self.distance_task_to_end mask |= worker_used_time > self.worker_due_time[:, None] return mask def finished(self): return torch.all(self.task_demand_now <= 0, 1) class Objective: def step_worker_start(self): return self.worker_basic_cost def step_worker_end(self): return self.distance_last_to_this * self.worker_distance_cost def step_task(self): return self.distance_last_to_this * self.worker_distance_cost ``` </details> #### Travelling Salesman Problem (TSP) <details> <summary>TSP</summary> ```python from greedrl.feature import * from greedrl.variable import * from greedrl import Problem from greedrl import runner features = [continuous_feature('task_location'), variable_feature('distance_this_to_task'), variable_feature('distance_task_to_end')] variables = [task_demand_now('task_demand_now', feature='task_demand'), task_demand_now('task_demand_this', feature='task_demand', only_this=True), edge_variable('distance_last_to_this', feature='distance_matrix', last_to_this=True), edge_variable('distance_this_to_task', feature='distance_matrix', this_to_task=True), edge_variable('distance_task_to_end', feature='distance_matrix', task_to_end=True), edge_variable('distance_last_to_loop', feature='distance_matrix', last_to_loop=True)] class Constraint: def do_task(self): return self.task_demand_this def mask_task(self): mask = self.task_demand_now <= 0 return mask def mask_worker_end(self): return 
torch.any(self.task_demand_now > 0, 1) def finished(self): return torch.all(self.task_demand_now <= 0, 1) class Objective: def step_worker_end(self): return self.distance_last_to_loop def step_task(self): return self.distance_last_to_this ``` </details> #### Split Delivery Vehicle Routing Problem (SDVRP) <details> <summary>SDVRP</summary> ```python from greedrl.feature import * from greedrl.variable import * from greedrl import Problem from greedrl import runner features = [continuous_feature('task_demand'), continuous_feature('worker_weight_limit'), continuous_feature('distance_matrix'), variable_feature('distance_this_to_task'), variable_feature('distance_task_to_end')] variables = [task_demand_now('task_demand'), task_demand_now('task_demand_this', feature='task_demand', only_this=True), feature_variable('task_weight'), task_variable('task_weight_this', feature='task_weight'), worker_variable('worker_weight_limit'), worker_used_resource('worker_used_weight', task_require='task_weight'), edge_variable('distance_last_to_this', feature='distance_matrix', last_to_this=True)] class Constraint: def do_task(self): worker_weight_limit = self.worker_weight_limit - self.worker_used_weight return torch.min(self.task_demand_this, worker_weight_limit // self.task_weight_this) def mask_task(self): mask = self.task_demand <= 0 worker_weight_limit = self.worker_weight_limit - self.worker_used_weight mask |= self.task_weight > worker_weight_limit[:, None] return mask def finished(self): return torch.all(self.task_demand <= 0, 1) class Objective: def step_worker_end(self): return self.distance_last_to_this def step_task(self): return self.distance_last_to_this ``` </details> ### Real-world scenario problems In addition to being able to solve standard problems, 🤠GreedRL can also model and solve real-world scenario problems, like *Instant Delivery Service* and *Order Batching Problem*. 
#### Instant Delivery Service
> Instant delivery services are widespread in the order dispatching systems of courier delivery platforms ([Ele.me](https://www.ele.me/), [Meituan](https://waimai.meituan.com/), [UUPaotui](https://www.uupt.com/index.htm), etc.).
> Orders are generated in real time, and a number of vehicles are scheduled to serve orders from pickup locations to delivery locations while respecting vehicle capacity. The objective is to minimize both the total delivery time and the overtime penalty.

<details>
<summary>Instant Delivery Service</summary>

```python
import torch

from greedrl.feature import *
from greedrl.variable import *
from greedrl.function import *
from greedrl import Problem
from greedrl import runner

features = [local_category('task_order'),
            global_category('task_type', 2),
            global_category('task_new_order', 2),
            variable_feature('time_this_to_task'),
            continuous_feature('x_time_matrix'),
            continuous_feature('task_due_time_x'),
            continuous_feature('worker_task_mask')]

variables = [task_demand_now('task_demand_now', feature='task_demand'),
             task_demand_now('task_demand_this', feature='task_demand', only_this=True),
             task_variable('task_pickup_this', feature='task_pickup'),
             task_variable('task_due_time_this', feature='task_due_time'),
             feature_variable('task_order', feature='task_order'),
             feature_variable('task_type', feature='task_type'),
             feature_variable('task_new_pickup', feature='task_new_pickup'),
             feature_variable('worker_task_mask', feature='worker_task_mask'),
             worker_count_now('worker_count_now', feature='worker_count'),
             worker_variable('worker_min_old_task_this', feature='worker_min_old_task'),
             worker_variable('worker_max_new_order_this', feature='worker_max_new_order'),
             worker_variable('worker_task_mask_this', feature='worker_task_mask'),
             worker_used_resource('worker_used_old_task', task_require='task_old'),
             worker_used_resource('worker_used_new_order', task_require='task_new_pickup'),
             worker_used_resource('worker_used_time', edge_require='time_matrix'),
             edge_variable('time_this_to_task', feature='x_time_matrix', this_to_task=True)]


class Constraint:

    def do_task(self):
        return self.task_demand_this

    def mask_worker_start(self):
        mask = self.worker_count_now <= 0

        finished = self.task_demand_now <= 0
        worker_task_mask = self.worker_task_mask | finished[:, None, :]
        mask |= torch.all(worker_task_mask, 2)

        return mask

    def mask_worker_end(self):
        mask = self.worker_used_old_task < self.worker_min_old_task_this
        mask |= task_group_split(self.task_order, self.task_demand_now <= 0)
        return mask

    def mask_task(self):
        mask = self.task_demand_now <= 0
        mask |= task_group_priority(self.task_order, self.task_type, mask)

        worker_max_new_order = self.worker_max_new_order_this - self.worker_used_new_order
        mask |= self.task_new_pickup > worker_max_new_order[:, None]

        mask |= self.worker_task_mask_this

        return mask

    def finished(self):
        worker_mask = self.worker_count_now <= 0
        task_mask = self.task_demand_now <= 0
        worker_task_mask = worker_mask[:, :, None] | task_mask[:, None, :]
        worker_task_mask |= self.worker_task_mask
        batch_size = worker_task_mask.size(0)
        worker_task_mask = worker_task_mask.view(batch_size, -1)
        return worker_task_mask.all(1)


class Objective:

    def step_task(self):
        over_time = (self.worker_used_time - self.task_due_time_this).clamp(min=0)
        pickup_time = self.worker_used_time * self.task_pickup_this
        return self.worker_used_time + over_time + pickup_time

    def step_finish(self):
        return self.task_demand_now.sum(1) * 1000
```
</details>

#### Order Batching Problem
> The Order Batching Problem is an optimization problem arising in warehouses. It consists of designing a set of picking batches such that each customer order (composed of a list of items) is assigned to exactly one batch,
> and each batch is collected by a single picker. The objective is to minimize both the total batching cost (a weighted sum of the numbers of used areas, roadways and items) and the penalty related to the loading limits of pickers.
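The batching cost itself is easy to state outside the framework. The weights below (100 per distinct area, 10 per distinct roadway, 1 per item, plus 10 per unit of unfilled picker load) mirror the `Objective` shown in this card, but the function is only an illustrative sketch, not part of the 🤠GreedRL API:

```python
def batch_cost(areas_used, roadways_used, item_count, load_limit, used_load):
    """Weighted batching cost: areas and roadways dominate, items break ties,
    and batches that leave picker capacity unused pay a penalty."""
    penalty = max(load_limit - used_load, 0) * 10
    return 100 * len(set(areas_used)) + 10 * len(set(roadways_used)) + item_count + penalty

# two areas, one roadway, seven items, 2 units of capacity left unused
print(batch_cost(["A", "B"], ["r1"], 7, load_limit=20, used_load=18))  # 237
```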
<details>
<summary>Order Batching Problem</summary>

```python
import torch

from greedrl import Problem, Solver
from greedrl.feature import *
from greedrl.variable import *
from greedrl import runner

features = [local_feature('task_area'),
            local_feature('task_roadway'),
            local_feature('task_area_group'),
            sparse_local_feature('task_item_id', 'task_item_num'),
            sparse_local_feature('task_item_owner_id', 'task_item_num'),
            variable_feature('worker_task_item'),
            variable_feature('worker_used_roadway'),
            variable_feature('worker_used_area')]

variables = [task_demand_now('task_demand_now', feature='task_demand'),
             task_demand_now('task_demand_this', feature='task_demand', only_this=True),
             feature_variable('task_item_id'),
             feature_variable('task_item_num'),
             feature_variable('task_item_owner_id'),
             feature_variable('task_area'),
             feature_variable('task_area_group'),
             feature_variable('task_load'),
             feature_variable('task_group'),
             worker_variable('worker_load_limit'),
             worker_variable('worker_area_limit'),
             worker_variable('worker_area_group_limit'),
             worker_task_item('worker_task_item', item_id='task_item_id', item_num='task_item_num'),
             worker_task_item('worker_task_item_owner', item_id='task_item_owner_id', item_num='task_item_num'),
             worker_used_resource('worker_used_load', task_require='task_load'),
             worker_used_resource('worker_used_area', task_require='task_area'),
             worker_used_resource('worker_used_roadway', task_require='task_roadway'),
             worker_used_resource('worker_used_area_group', task_require='task_area_group')]


class Constraint:

    def do_task(self):
        return self.task_demand_this

    def mask_worker_end(self):
        return self.worker_used_load < self.worker_load_limit

    def mask_task(self):
        # completed tasks
        mask = self.task_demand_now <= 0
        # mask |= task_group_priority(self.task_group, self.task_out_stock_time, mask)

        NT = self.task_item_id.size(1)
        worker_task_item = self.worker_task_item[:, None, :]
        worker_task_item = worker_task_item.expand(-1, NT, -1)
        task_item_in_worker = worker_task_item.gather(2, self.task_item_id.long())
        task_item_in_worker = (task_item_in_worker > 0) & (self.task_item_num > 0)

        worker_task_item_owner = self.worker_task_item_owner[:, None, :]
        worker_task_item_owner = worker_task_item_owner.expand(-1, NT, -1)
        task_item_owner_in_worker = worker_task_item_owner.gather(2, self.task_item_owner_id.long())
        task_item_owner_in_worker = (task_item_owner_in_worker > 0) & (self.task_item_num > 0)

        # mask |= torch.any(task_item_in_worker & ~task_item_owner_in_worker, 2)

        worker_load_limit = self.worker_load_limit - self.worker_used_load
        mask |= (self.task_load > worker_load_limit[:, None])

        task_area = self.task_area + self.worker_used_area[:, None, :]
        task_area_num = task_area.clamp(0, 1).sum(2, dtype=torch.int32)
        mask |= (task_area_num > self.worker_area_limit[:, None])

        task_area_group = self.task_area_group + self.worker_used_area_group[:, None, :]
        task_area_group_num = task_area_group.clamp(0, 1).sum(2, dtype=torch.int32)
        mask |= (task_area_group_num > self.worker_area_group_limit[:, None])

        return mask

    def finished(self):
        return torch.all(self.task_demand_now <= 0, 1)


class Objective:

    def step_worker_end(self):
        area_num = self.worker_used_area.clamp(0, 1).sum(1)
        roadway_num = self.worker_used_roadway.clamp(0, 1).sum(1)
        item_num = self.worker_task_item.clamp(0, 1).sum(1)
        penalty = (self.worker_load_limit - self.worker_used_load) * 10
        return area_num * 100 + roadway_num * 10 + item_num + penalty
```
</details>

# Getting started

## Description

We are delighted to release 🤠GreedRL Community Edition, together with example training and testing scripts for the standard Capacitated VRP (CVRP), so you can download it and get started.

## Test environment

🤠GreedRL Community Edition has been tested on Ubuntu 18.04 with GCC compiler v7.5.0 and CUDA version 11.4, and a [Miniconda](https://docs.conda.io/en/latest/miniconda.html#system-requirements) distribution with Python 3.8. We recommend using a similar configuration to avoid any possible compilation issues.
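Before installing, you can quickly confirm that your interpreter matches the tested configuration with a short (unofficial) check; adapt the required version as needed:

```python
import sys

def python_matches(required=(3, 8)):
    """Return True if the running interpreter is at least the tested version (Python 3.8)."""
    return sys.version_info[:2] >= required

print("Python >= 3.8:", python_matches())
```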
## Installation

First, clone the repository.

```bash
$ git clone https://huggingface.co/Cainiao-AI/GreedRL
```

Then, create and activate a Python environment using conda, and install the required packages.

```bash
$ conda create -n python38 python==3.8
$ source activate python38
$ pip install -r requirements.txt --extra-index-url https://download.pytorch.org/whl/cu113
```

Finally, compile and add the resulting library `greedrl` to the `PYTHONPATH`.

```bash
$ python setup.py build
$ export PYTHONPATH={your_current_path}/build/lib.linux-x86_64-cpython-38/:$PYTHONPATH
```

## CVRP Training

1. Training data

We use generated data for the training phase: the customer and depot locations are sampled uniformly at random in the unit square [0,1] x [0,1]. For CVRP, we assume that the demand of each node is a discrete number in {1,...,9}, chosen uniformly at random, and each vehicle has a default capacity of 50.

2. Start training

```bash
$ cd examples/cvrp
$ python train.py --model_filename cvrp_100.pt --problem_size 100
```

## CVRP Testing

After training, you will get a trained model, such as `cvrp_100.pt`, that you can use for testing.

```bash
$ cd examples/cvrp
$ python solve.py --device cpu --model_name cvrp_100.pt --problem_size 100
```

# Support

We look forward to you downloading and using 🤠GreedRL. Please open a discussion if you encounter any problems or have ideas for building an even better experience. For commercial enquiries, please contact <a href="mailto:jiangwen.wjw@alibaba-inc.com">us</a>.

# Citation

```
@article{hu2022alibaba,
  title={Alibaba vehicle routing algorithms enable rapid pick and delivery},
  author={Hu, Haoyuan and Zhang, Ying and Wei, Jiangwen and Zhan, Yang and Zhang, Xinhui and Huang, Shaojian and Ma, Guangrui and Deng, Yuming and Jiang, Siwei},
  journal={INFORMS Journal on Applied Analytics},
  volume={52},
  number={1},
  pages={27--41},
  year={2022},
  publisher={INFORMS}
}
```
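The generated CVRP training data described above (locations uniform in the unit square, demands uniform in {1,...,9}, vehicle capacity 50) can be sketched with the standard library alone. This is an illustrative stand-in, not the repository's actual data pipeline:

```python
import random

def make_cvrp_instance(n_customers=100, capacity=50, seed=0):
    """Sample one CVRP instance: depot and customers uniform in [0,1] x [0,1],
    integer demands uniform in {1, ..., 9}, fixed vehicle capacity."""
    rng = random.Random(seed)
    depot = (rng.random(), rng.random())
    customers = [(rng.random(), rng.random()) for _ in range(n_customers)]
    demands = [rng.randint(1, 9) for _ in range(n_customers)]
    return {"depot": depot, "customers": customers, "demands": demands, "capacity": capacity}

inst = make_cvrp_instance()
print(len(inst["customers"]), inst["capacity"])  # 100 50
```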
blackeys/ppo-LunarLanderV2
blackeys
2023-05-04T09:59:58Z
5
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-05-04T09:00:38Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 244.64 +/- 22.66 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
botp/sd-vae-ft-mse-original
botp
2023-05-04T09:35:58Z
0
1
null
[ "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:mit", "region:us" ]
text-to-image
2023-05-04T09:35:58Z
---
license: mit
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
inference: false
duplicated_from: stabilityai/sd-vae-ft-mse-original
---

# Improved Autoencoders

## Utilizing

These weights are intended to be used with the original [CompVis Stable Diffusion codebase](https://github.com/CompVis/stable-diffusion). If you are looking for the model to use with the 🧨 diffusers library, [come here](https://huggingface.co/CompVis/stabilityai/sd-vae-ft-ema).

## Decoder Finetuning

We publish two kl-f8 autoencoder versions, finetuned from the original [kl-f8 autoencoder](https://github.com/CompVis/latent-diffusion#pretrained-autoencoding-models) on a 1:1 ratio of [LAION-Aesthetics](https://laion.ai/blog/laion-aesthetics/) and LAION-Humans, an unreleased subset containing only SFW images of humans. The intent was to fine-tune on the Stable Diffusion training set (the autoencoder was originally trained on OpenImages) but also to enrich the dataset with images of humans in order to improve the reconstruction of faces.

The first, _ft-EMA_, was resumed from the original checkpoint, trained for 313198 steps, and uses EMA weights. It uses the same loss configuration as the original checkpoint (L1 + LPIPS).
The second, _ft-MSE_, was resumed from _ft-EMA_, also uses EMA weights, and was trained for another 280k steps using a different loss with more emphasis on MSE reconstruction (MSE + 0.1 * LPIPS). It produces somewhat "smoother" outputs. The batch size for both versions was 192 (16 A100s, batch size 12 per GPU).

To keep compatibility with existing models, only the decoder part was finetuned; the checkpoints can be used as a drop-in replacement for the existing autoencoder.
_Original kl-f8 VAE vs f8-ft-EMA vs f8-ft-MSE_ ## Evaluation ### COCO 2017 (256x256, val, 5000 images) | Model | train steps | rFID | PSNR | SSIM | PSIM | Link | Comments |----------|---------|------|--------------|---------------|---------------|-----------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------| | | | | | | | | | | original | 246803 | 4.99 | 23.4 +/- 3.8 | 0.69 +/- 0.14 | 1.01 +/- 0.28 | https://ommer-lab.com/files/latent-diffusion/kl-f8.zip | as used in SD | | ft-EMA | 560001 | 4.42 | 23.8 +/- 3.9 | 0.69 +/- 0.13 | 0.96 +/- 0.27 | https://huggingface.co/stabilityai/sd-vae-ft-ema-original/resolve/main/vae-ft-ema-560000-ema-pruned.ckpt | slightly better overall, with EMA | | ft-MSE | 840001 | 4.70 | 24.5 +/- 3.7 | 0.71 +/- 0.13 | 0.92 +/- 0.27 | https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.ckpt | resumed with EMA from ft-EMA, emphasis on MSE (rec. 
loss = MSE + 0.1 * LPIPS), smoother outputs | ### LAION-Aesthetics 5+ (256x256, subset, 10000 images) | Model | train steps | rFID | PSNR | SSIM | PSIM | Link | Comments |----------|-----------|------|--------------|---------------|---------------|-----------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------| | | | | | | | | | | original | 246803 | 2.61 | 26.0 +/- 4.4 | 0.81 +/- 0.12 | 0.75 +/- 0.36 | https://ommer-lab.com/files/latent-diffusion/kl-f8.zip | as used in SD | | ft-EMA | 560001 | 1.77 | 26.7 +/- 4.8 | 0.82 +/- 0.12 | 0.67 +/- 0.34 | https://huggingface.co/stabilityai/sd-vae-ft-ema-original/resolve/main/vae-ft-ema-560000-ema-pruned.ckpt | slightly better overall, with EMA | | ft-MSE | 840001 | 1.88 | 27.3 +/- 4.7 | 0.83 +/- 0.11 | 0.65 +/- 0.34 | https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.ckpt | resumed with EMA from ft-EMA, emphasis on MSE (rec. 
loss = MSE + 0.1 * LPIPS), smoother outputs | ### Visual _Visualization of reconstructions on 256x256 images from the COCO2017 validation dataset._ <p align="center"> <br> <b> 256x256: ft-EMA (left), ft-MSE (middle), original (right)</b> </p> <p align="center"> <img src=https://huggingface.co/stabilityai/stable-diffusion-decoder-finetune/resolve/main/eval/ae-decoder-tuning-reconstructions/merged/00025_merged.png /> </p> <p align="center"> <img src=https://huggingface.co/stabilityai/stable-diffusion-decoder-finetune/resolve/main/eval/ae-decoder-tuning-reconstructions/merged/00011_merged.png /> </p> <p align="center"> <img src=https://huggingface.co/stabilityai/stable-diffusion-decoder-finetune/resolve/main/eval/ae-decoder-tuning-reconstructions/merged/00037_merged.png /> </p> <p align="center"> <img src=https://huggingface.co/stabilityai/stable-diffusion-decoder-finetune/resolve/main/eval/ae-decoder-tuning-reconstructions/merged/00043_merged.png /> </p> <p align="center"> <img src=https://huggingface.co/stabilityai/stable-diffusion-decoder-finetune/resolve/main/eval/ae-decoder-tuning-reconstructions/merged/00053_merged.png /> </p> <p align="center"> <img src=https://huggingface.co/stabilityai/stable-diffusion-decoder-finetune/resolve/main/eval/ae-decoder-tuning-reconstructions/merged/00029_merged.png /> </p>
botp/GhostMix
botp
2023-05-04T09:31:48Z
0
1
null
[ "region:us" ]
null
2023-05-04T09:31:48Z
--- duplicated_from: drnighthan/GhostMix ---
botp/ReVAnimated
botp
2023-05-04T09:23:56Z
0
0
null
[ "license:other", "region:us" ]
null
2023-05-04T09:23:55Z
---
license: other
duplicated_from: hanafuusen2001/ReVAnimated
---

# 聲明 Disclaimer

本資料夾中的模型不是我所製作,版權歸原作者所有(各模型版權詳見 http://www.civitai.com 所示)。我上傳至本資料夾僅爲方便在綫抽取資源,并非盈利。

The models in this folder are not made by me, and the copyright belongs to the original author (see http://www.civitai.com for details on the copyright of each model). I uploaded to this folder only for the convenience of extracting resources online, not for profit.

# 模型列表 List of Models

本資料夾中所有模型詳見下表。

All the models in this folder are detailed in the table below.

| 模型名稱 Model Name | Civitai 頁面鏈接 Civitai Page Link | Civitai 下載鏈接 Civitai Download Link |
|----------------------|--------------------|--------------------|
|revAnimated_v122.safetensors |https://civitai.com/models/7371?modelVersionId=46846 |https://civitai.com/api/download/models/46846 |
|revAnimated_v121-inpainting.safetensors |https://civitai.com/models/7371?modelVersionId=43978 |https://civitai.com/api/download/models/43978 |
|revAnimated_v121.safetensors |https://civitai.com/models/7371?modelVersionId=40248 |https://civitai.com/api/download/models/40248 |
|revAnimated_v11-inpainting.safetensors |https://civitai.com/models/7371?modelVersionId=22258 |https://civitai.com/api/download/models/22258 |
|revAnimated_v11.safetensors |https://civitai.com/models/7371?modelVersionId=19575 |https://civitai.com/api/download/models/19575 |
|revAnimated_v10-inpainting.safetensors |https://civitai.com/models/7371?modelVersionId=11386 |https://civitai.com/api/download/models/11386 |
|revAnimated_v10.safetensors |https://civitai.com/models/7371?modelVersionId=8665 |https://civitai.com/api/download/models/8665 |
s3nh/gpt-j-6b-3500steps-polish
s3nh
2023-05-04T09:21:04Z
0
1
null
[ "pytorch", "pl", "dataset:databricks/databricks-dolly-15k", "dataset:s3nh/alpaca-dolly-instruction-only-polish", "license:openrail", "region:us" ]
null
2023-05-04T07:24:31Z
---
license: openrail
datasets:
- databricks/databricks-dolly-15k
- s3nh/alpaca-dolly-instruction-only-polish
language:
- pl
---

### Introduction

This repository contains EleutherAI/gpt-j-6B fine-tuned for Polish on a translated alpaca-dolly dataset. The main task is to provide accurate answers to the instructions asked. Below you can find instructions on how to run inference with this model. This repository does not contain a tokenizer object at the moment (#TODO).

### Evaluation part

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

MODEL_NAME: str = 's3nh/gpt-j-6b-3500steps-polish'

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME).cuda()

# Resize the model embeddings to the tokenizer size
n_tokens: int = len(tokenizer)
model.resize_token_embeddings(n_tokens)


def _generate_prompt(instruction, input=None):
    # NOTE: only the variant with `input` is shown in this card
    if input:
        return f"""Poniżej znajduje się instrukcja opisująca zadanie, połączona z danymi wejściowymi, które zapewniają dalszy konktekst. Napisz odpowiedź, która odpowiednio odpowie na pytanie.

### Instruction:
{instruction}

### Input:
{input}

### Response:"""


manual_instruction: str = "Napisz mi proszę jakie są rodzaje telefonów komórkowych"
manual_input: str = "Telefony komórkowe, w przeciwieństwie do np. satelitarnych, charakteryzuje to, że działają w obrębie naziemnych fal radiowych w technologii GSM (i w różnych jej wariantach: 3G, 4G czy niebawem 5G). Zasadniczo można jednak wyróżnić wiele ich rodzajów i podzielić je na różne kryteria. I tak, ze względu na rodzaj obudowy, można mówić o telefonach jednobryłowych, rozsuwanych, obrotowych czy też z klapką. Obecnie jednak najbardziej popularne i – ze względu na posiadane parametry – najlepsze telefony komórkowe to smartfony dotykowe."

print(f"Evaluation for {manual_instruction} \n\n\n {manual_input}\n\n")
# `evaluate` is assumed to be defined elsewhere (e.g. a generate-and-decode helper);
# it is not included in this card.
evaluate(instruction=manual_instruction, input=manual_input)
```
Pietro97/ppo-Huggy
Pietro97
2023-05-04T09:15:03Z
1
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-05-04T09:14:55Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Step 1: Find your model_id: Pietro97/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
botp/Realistic_Vision_V2.0
botp
2023-05-04T09:14:37Z
4
0
diffusers
[ "diffusers", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-05-04T09:14:36Z
--- license: creativeml-openrail-m duplicated_from: SG161222/Realistic_Vision_V2.0 --- <b>Please read this!</b><br> For version 2.0 it is recommended to use with VAE (to improve generation quality and get rid of blue artifacts): https://huggingface.co/stabilityai/sd-vae-ft-mse-original<br> This model is available on <a href="https://www.mage.space/">Mage.Space</a>, <a href="https://sinkin.ai/">Sinkin.ai</a>, <a href="https://getimg.ai/">GetImg.ai</a> and (<a href="https://randomseed.co/">RandomSeed.co</a> - NSFW content) <hr/> <b>I use this template to get good generation results: Prompt:</b> RAW photo, *subject*, (high detailed skin:1.2), 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3 <b>Example:</b> RAW photo, a close up portrait photo of 26 y.o woman in wastelander clothes, long haircut, pale skin, slim body, background is city ruins, (high detailed skin:1.2), 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3 <b>Negative Prompt:</b> (deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime:1.4), text, close up, cropped, out of frame, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck<br> <b>OR</b><br> (deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime, mutated hands and fingers:1.4), (deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, disconnected limbs, mutation, mutated, ugly, disgusting, amputation <b>Euler A or DPM++ 2M Karras with 25 steps<br> CFG Scale 3,5 - 7<br> Hires. 
fix with Latent upscaler<br> 0 Hires steps and Denoising strength 0.25-0.45<br> Upscale by 1.1-2.0</b>
usix79/a2c-PandaReachDense-v2
usix79
2023-05-04T09:07:43Z
0
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-05-04T09:05:05Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v2 type: PandaReachDense-v2 metrics: - type: mean_reward value: -1.70 +/- 0.62 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v2** This is a trained model of a **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
botp/embeddings
botp
2023-05-04T09:00:01Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-04T08:37:38Z
--- license: creativeml-openrail-m duplicated_from: nolanaatama/embeddings --- DISCLAIMER! This Is A Preservation Repository! Cloned since __nolanaatama/embeddings__
botp/zzeipher-fix6
botp
2023-05-04T08:59:19Z
0
1
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-04-27T09:46:43Z
---
license: creativeml-openrail-m
duplicated_from: m4gnett/zeipher-f222
---

DISCLAIMER! This Is A Preservation Repository! Cloned since __m4gnett/zeipher-f222__

This repository is for backing up Zeipher F222. I downloaded the model last month via torrent.
brathief/Alice_extend_brathief_e500
brathief
2023-05-04T08:43:17Z
7
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-04-22T13:39:09Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA text2image fine-tuning - brathief/Alice_extend_brathief_e500 These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the None dataset. You can find some example images in the following. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png)
pkufool/icefall_asr_aishell_pruned_transducer_stateless7_bbpe
pkufool
2023-05-04T08:39:07Z
0
0
null
[ "tensorboard", "license:apache-2.0", "region:us" ]
null
2023-05-04T07:25:32Z
--- license: apache-2.0 --- The results: |Vocab size | Greedy search(dev & test) | Modified beam search(dev & test) | Fast beam search (dev & test) | Fast beam search LG (dev & test) | comments| |-- | -- | -- | -- | -- | --| |500 | 4.31 & 4.59 | 4.25 & 4.54 | 4.27 & 4.55 | 4.07 & 4.38 | --epoch 48 --avg 29| The training command: ```bash export CUDA_VISIBLE_DEVICES="4,5,6,7" ./pruned_transducer_stateless7_bbpe/train.py \ --world-size 4 \ --num-epochs 50 \ --start-epoch 1 \ --use-fp16 1 \ --max-duration 800 \ --bpe-model data/lang_bbpe_500/bbpe.model \ --exp-dir pruned_transducer_stateless7_bbpe/exp \ --lr-epochs 6 \ --master-port 12535 ``` The decoding command: ```bash for m in greedy_search modified_beam_search fast_beam_search fast_beam_search_LG; do ./pruned_transducer_stateless7_bbpe/decode.py \ --epoch 48 \ --avg 29 \ --exp-dir ./pruned_transducer_stateless7_bbpe/exp \ --max-sym-per-frame 1 \ --ngram-lm-scale 0.25 \ --ilme-scale 0.2 \ --bpe-model data/lang_bbpe_500/bbpe.model \ --max-duration 2000 \ --decoding-method $m done ```
civitary/msbrew
civitary
2023-05-04T08:38:50Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-04T08:32:55Z
--- license: creativeml-openrail-m ---
GoldfieldGeek/ppo-LL2-bad
GoldfieldGeek
2023-05-04T08:37:19Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-05-04T07:51:28Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 242.37 +/- 17.44 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
yemiancheng/like-model
yemiancheng
2023-05-04T08:27:52Z
0
0
null
[ "region:us" ]
null
2023-05-04T05:15:02Z
# readme

Saving some models I like. I will collect them here for easy use (downloading).

## why

- [x] Sometimes I want to use a model but forget where to download it.

## life guarantee statement

If there is any infringement, please notify me and I will delete it promptly.

my email: `ymc-github@gmail.com` or `yemiancheng1993@163.com`
Aleksandar/electra-srb-ner
Aleksandar
2023-05-04T08:14:22Z
117
0
transformers
[ "transformers", "pytorch", "safetensors", "electra", "token-classification", "generated_from_trainer", "dataset:wikiann", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:04Z
--- tags: - generated_from_trainer datasets: - wikiann metrics: - precision - recall - f1 - accuracy model_index: - name: electra-srb-ner results: - task: name: Token Classification type: token-classification dataset: name: wikiann type: wikiann args: sr metric: name: Accuracy type: accuracy value: 0.9568394937134688 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # electra-srb-ner This model was trained from scratch on the wikiann dataset. It achieves the following results on the evaluation set: - Loss: 0.3406 - Precision: 0.8934 - Recall: 0.9087 - F1: 0.9010 - Accuracy: 0.9568 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.3686 | 1.0 | 625 | 0.2108 | 0.8326 | 0.8494 | 0.8409 | 0.9335 | | 0.1886 | 2.0 | 1250 | 0.1784 | 0.8737 | 0.8713 | 0.8725 | 0.9456 | | 0.1323 | 3.0 | 1875 | 0.1805 | 0.8654 | 0.8870 | 0.8760 | 0.9468 | | 0.0675 | 4.0 | 2500 | 0.2018 | 0.8736 | 0.8880 | 0.8807 | 0.9502 | | 0.0425 | 5.0 | 3125 | 0.2162 | 0.8818 | 0.8945 | 0.8881 | 0.9512 | | 0.0343 | 6.0 | 3750 | 0.2492 | 0.8790 | 0.8928 | 0.8859 | 0.9513 | | 0.0253 | 7.0 | 4375 | 0.2562 | 0.8821 | 0.9006 | 0.8912 | 0.9525 | | 0.0142 | 8.0 | 5000 | 0.2788 | 0.8807 | 0.9013 | 0.8909 | 0.9524 | | 0.0114 | 9.0 | 5625 | 0.2793 | 0.8861 | 0.9002 | 0.8931 | 0.9534 | | 
0.0095 | 10.0 | 6250 | 0.2967 | 0.8887 | 0.9034 | 0.8960 | 0.9550 | | 0.008 | 11.0 | 6875 | 0.2993 | 0.8899 | 0.9067 | 0.8982 | 0.9556 | | 0.0048 | 12.0 | 7500 | 0.3215 | 0.8887 | 0.9038 | 0.8962 | 0.9545 | | 0.0034 | 13.0 | 8125 | 0.3242 | 0.8897 | 0.9068 | 0.8982 | 0.9554 | | 0.003 | 14.0 | 8750 | 0.3311 | 0.8884 | 0.9085 | 0.8983 | 0.9559 | | 0.0025 | 15.0 | 9375 | 0.3383 | 0.8943 | 0.9062 | 0.9002 | 0.9562 | | 0.0011 | 16.0 | 10000 | 0.3346 | 0.8941 | 0.9112 | 0.9026 | 0.9574 | | 0.0015 | 17.0 | 10625 | 0.3362 | 0.8944 | 0.9081 | 0.9012 | 0.9567 | | 0.001 | 18.0 | 11250 | 0.3464 | 0.8877 | 0.9100 | 0.8987 | 0.9559 | | 0.0012 | 19.0 | 11875 | 0.3415 | 0.8944 | 0.9089 | 0.9016 | 0.9568 | | 0.0005 | 20.0 | 12500 | 0.3406 | 0.8934 | 0.9087 | 0.9010 | 0.9568 | ### Framework versions - Transformers 4.9.2 - Pytorch 1.9.0 - Datasets 1.11.0 - Tokenizers 0.10.1
usix79/a2c-AntBulletEnv-v0
usix79
2023-05-04T08:08:30Z
0
0
stable-baselines3
[ "stable-baselines3", "AntBulletEnv-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-05-04T08:07:27Z
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 metrics: - type: mean_reward value: 962.48 +/- 180.68 name: mean_reward verified: false --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of an **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename inside the repo is an assumption; verify it against the repo's file list): ```python from huggingface_sb3 import load_from_hub from stable_baselines3 import A2C # filename is assumed; check the repository's files checkpoint = load_from_hub(repo_id="usix79/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip") model = A2C.load(checkpoint) ```
nozmenoz/bella
nozmenoz
2023-05-04T08:06:36Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-04-29T07:37:29Z
--- license: creativeml-openrail-m ---
zohaib99k/Bert_Arabic-SQuADv2-QA
zohaib99k
2023-05-04T07:42:02Z
115
1
transformers
[ "transformers", "pytorch", "electra", "question-answering", "ar", "dataset:ZeyadAhmed/Arabic-SQuADv2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-05-04T07:37:13Z
--- datasets: - ZeyadAhmed/Arabic-SQuADv2.0 language: - ar metrics: - name: exact_match type: exact_match value: 65.12 - name: F1 type: f1 value: 71.49 --- # AraElectra for Question Answering on Arabic-SQuADv2 This is the [AraElectra](https://huggingface.co/aubmindlab/araelectra-base-discriminator) model, fine-tuned using the [Arabic-SQuADv2.0](https://huggingface.co/datasets/ZeyadAhmed/Arabic-SQuADv2.0) dataset. It has been trained on question-answer pairs, including unanswerable questions, for the task of Question Answering, with the help of an [AraElectra Classifier](https://huggingface.co/ZeyadAhmed/AraElectra-Arabic-SQuADv2-CLS) to predict unanswerable questions. ## Overview **Language model:** AraElectra <br> **Language:** Arabic <br> **Downstream-task:** Extractive QA **Training data:** Arabic-SQuADv2.0 **Eval data:** Arabic-SQuADv2.0 <br> **Test data:** Arabic-SQuADv2.0 <br> **Code:** [See More Info on Github](https://github.com/zeyadahmed10/Arabic-MRC) **Infrastructure**: 1x Tesla K80 ## Hyperparameters ``` batch_size = 8 n_epochs = 4 base_LM_model = "AraElectra" learning_rate = 3e-5 optimizer = AdamW padding = dynamic ``` ## Online Demo on Arabic Wikipedia and User Provided Contexts See the model in action hosted on Streamlit [![Open in Streamlit](https://static.streamlit.io/badges/streamlit_badge_black_white.svg)](https://share.streamlit.io/wissamantoun/arabic-wikipedia-qa-streamlit/main) ## Usage For best results, use the AraBert [preprocessor](https://github.com/aub-mind/arabert/blob/master/preprocess.py) by aub-mind ```python from transformers import ElectraForQuestionAnswering, ElectraForSequenceClassification, AutoTokenizer, pipeline from preprocess import ArabertPreprocessor prep_object = ArabertPreprocessor("araelectra-base-discriminator") question = prep_object('ما هي جامعة الدول العربية ؟') context = prep_object(''' جامعة الدول العربية هي منظمة إقليمية تضم دولاً عربية في آسيا وأفريقيا. 
ينص ميثاقها على التنسيق بين الدول الأعضاء في الشؤون الاقتصادية، ومن ضمنها العلاقات التجارية الاتصالات، العلاقات الثقافية، الجنسيات ووثائق وأذونات السفر والعلاقات الاجتماعية والصحة. المقر الدائم لجامعة الدول العربية يقع في القاهرة، عاصمة مصر (تونس من 1979 إلى 1990). ''') # a) Get predictions qa_modelname = 'ZeyadAhmed/AraElectra-Arabic-SQuADv2-QA' cls_modelname = 'ZeyadAhmed/AraElectra-Arabic-SQuADv2-CLS' qa_pipe = pipeline('question-answering', model=qa_modelname, tokenizer=qa_modelname) cls_pipe = pipeline('text-classification', model=cls_modelname, tokenizer=cls_modelname) QA_input = { 'question': question, 'context': context } CLS_input = { 'text': question, 'text_pair': context } qa_res = qa_pipe(QA_input) cls_res = cls_pipe(CLS_input) threshold = 0.5 # hyperparameter, can be tweaked ## note: the classifier's label0 is the probability the question is answerable, label1 the probability it is unanswerable ## if the label1 probability > threshold, treat the output of qa_res as an empty string; otherwise take qa_res # b) Load model & tokenizer qa_model = ElectraForQuestionAnswering.from_pretrained(qa_modelname) cls_model = ElectraForSequenceClassification.from_pretrained(cls_modelname) tokenizer = AutoTokenizer.from_pretrained(qa_modelname) ``` ## Performance Evaluated on the Arabic-SQuAD 2.0 test set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/), with minor preprocessing changes to fit the Arabic language: [the modified eval script](https://github.com/zeyadahmed10/Arabic-MRC/blob/main/evaluatev2.py). ``` "exact": 65.11555277951281, "f1": 71.49042547237256, "total": 9606, "HasAns_exact": 56.14535768645358, "HasAns_f1": 67.79623803036668, "HasAns_total": 5256, "NoAns_exact": 75.95402298850574, "NoAns_f1": 75.95402298850574, "NoAns_total": 4350 ```
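The Arabic QA card above combines the extractive QA pipeline with an answerability classifier through a probability threshold. That decision step is pure logic and can be sketched separately (the function name and default threshold are assumptions for illustration, not from the repo):

```python
def combine_qa_and_cls(qa_answer: str, no_answer_prob: float, threshold: float = 0.5) -> str:
    """Return the extracted answer unless the classifier's 'unanswerable'
    probability (label1) exceeds the threshold, as the card describes."""
    if no_answer_prob > threshold:
        return ""  # question judged unanswerable: report an empty answer
    return qa_answer

print(combine_qa_and_cls("القاهرة", no_answer_prob=0.12))  # answer kept
print(combine_qa_and_cls("القاهرة", no_answer_prob=0.91))  # empty string
```

In practice `no_answer_prob` would come from the `cls_pipe` score for label1 and `qa_answer` from the `qa_pipe` result in the card's snippet.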
StevenLimcorn/unsup-simcse-roberta-large-semeval2015-restaurants
StevenLimcorn
2023-05-04T07:31:28Z
106
0
transformers
[ "transformers", "pytorch", "tf", "jax", "roberta", "feature-extraction", "generated_from_keras_callback", "endpoints_compatible", "region:us" ]
feature-extraction
2023-05-03T13:10:10Z
--- tags: - generated_from_keras_callback model-index: - name: unsup-simcse-roberta-large-semeval2015-restaurants results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # unsup-simcse-roberta-large-semeval2015-restaurants This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.28.1 - TensorFlow 2.12.0 - Tokenizers 0.13.3
redstonehero/rembremix_v10
redstonehero
2023-05-04T07:19:44Z
29
0
diffusers
[ "diffusers", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-05-04T06:56:24Z
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image ---
redstonehero/breakdomain_2000
redstonehero
2023-05-04T07:18:40Z
50
1
diffusers
[ "diffusers", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-05-04T06:55:31Z
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image ---
Theju/switch_low_2
Theju
2023-05-04T07:14:20Z
107
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-05-04T07:13:27Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: switch_low_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # switch_low_2 This model is a fine-tuned version of [facebook/wav2vec2-large-960h-lv60-self](https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 25 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
Theju/switch_medium_2
Theju
2023-05-04T07:10:38Z
105
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-05-04T07:09:11Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: switch_medium_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # switch_medium_2 This model is a fine-tuned version of [facebook/wav2vec2-large-960h-lv60-self](https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 25 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
StevenLimcorn/unsup-simcse-roberta-large-semeval2015-laptops
StevenLimcorn
2023-05-04T07:06:14Z
107
0
transformers
[ "transformers", "pytorch", "tf", "jax", "roberta", "feature-extraction", "generated_from_keras_callback", "endpoints_compatible", "region:us" ]
feature-extraction
2023-05-03T13:00:59Z
--- tags: - generated_from_keras_callback model-index: - name: unsup-simcse-roberta-large-semeval2015-laptops results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # unsup-simcse-roberta-large-semeval2015-laptops This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.28.1 - TensorFlow 2.12.0 - Tokenizers 0.13.3
Deekay213/floyd
Deekay213
2023-05-04T06:59:36Z
0
0
null
[ "license:deepfloyd-if-license", "region:us" ]
null
2023-05-04T06:59:36Z
--- license: deepfloyd-if-license ---
soumi-maiti/libri23mix_eend_ss
soumi-maiti
2023-05-04T06:49:28Z
4
0
espnet
[ "espnet", "audio", "diarization", "en", "dataset:librimix", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2023-05-04T06:34:06Z
--- tags: - espnet - audio - diarization language: en datasets: - librimix license: cc-by-4.0 --- ## ESPnet2 DIAR model ### `soumi-maiti/libri23mix_eend_ss` This model was trained by soumimaiti using librimix recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html) if you haven't done that already. ```bash cd espnet git checkout d837c97c88f13ffe655a30bcff93d814f212b225 pip install -e . cd egs2/librimix/enh_diar23 ./run.sh --skip_data_prep false --skip_train true --download_model soumi-maiti/libri23mix_eend_ss ``` ## DIAR config <details><summary>expand</summary> ``` config: conf/tuning/train_diar_enh_convtasnet_concat_feats_adapt.yaml print_config: false log_level: INFO dry_run: false iterator_type: chunk output_dir: exp/diar_enh_train_diar_enh_convtasnet_concat_feats_adapt ngpu: 1 seed: 0 num_workers: 4 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 50 patience: 4 val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - loss_enh - min keep_nbest_models: 1 nbest_averaging_interval: 0 grad_clip: 5.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 16 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_matplotlib: true use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: - ../enh_diar1/exp/diar_enh_train_diar_enh_convtasnet_concat_feats_raw/valid.loss_enh.best.pth 
ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 1 valid_batch_size: null batch_bins: 1000000 valid_batch_bins: null train_shape_file: - exp/diar_enh_stats_8k/train/speech_shape - exp/diar_enh_stats_8k/train/text_shape - exp/diar_enh_stats_8k/train/speech_ref1_shape - exp/diar_enh_stats_8k/train/speech_ref2_shape - exp/diar_enh_stats_8k/train/speech_ref3_shape - exp/diar_enh_stats_8k/train/noise_ref1_shape valid_shape_file: - exp/diar_enh_stats_8k/valid/speech_shape - exp/diar_enh_stats_8k/valid/text_shape - exp/diar_enh_stats_8k/valid/speech_ref1_shape - exp/diar_enh_stats_8k/valid/speech_ref2_shape - exp/diar_enh_stats_8k/valid/speech_ref3_shape - exp/diar_enh_stats_8k/valid/noise_ref1_shape batch_type: folded valid_batch_type: null fold_length: - 800 - 80000 - 80000 - 80000 - 80000 - 80000 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 24000 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/train/wav.scp - speech - sound - - dump/raw/train/espnet_rttm - text - rttm - - dump/raw/train/spk1.scp - speech_ref1 - sound - - dump/raw/train/spk2.scp - speech_ref2 - sound - - dump/raw/train/spk3.scp - speech_ref3 - sound - - dump/raw/train/noise1.scp - noise_ref1 - sound valid_data_path_and_name_and_type: - - dump/raw/dev/wav.scp - speech - sound - - dump/raw/dev/espnet_rttm - text - rttm - - dump/raw/dev/spk1.scp - speech_ref1 - sound - - dump/raw/dev/spk2.scp - speech_ref2 - sound - - dump/raw/dev/spk3.scp - speech_ref3 - sound - - dump/raw/dev/noise1.scp - noise_ref1 - sound allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 0.001 eps: 1.0e-07 weight_decay: 0 scheduler: reducelronplateau scheduler_conf: mode: min factor: 0.5 patience: 1 token_list: null src_token_list: null init: xavier_uniform input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true 
ignore_nan_grad: null zero_infinity: true enh_criterions: - name: si_snr conf: eps: 1.0e-07 wrapper: pit wrapper_conf: weight: 1.0 independent_perm: true flexible_numspk: true diar_num_spk: 3 diar_input_size: 128 enh_model_conf: loss_type: si_snr asr_model_conf: ctc_weight: 0.5 interctc_weight: 0.0 ignore_id: -1 lsm_weight: 0.0 length_normalized_loss: false report_cer: true report_wer: true sym_space: <space> sym_blank: <blank> extract_feats_in_collect_stats: true st_model_conf: stft_consistency: false loss_type: mask_mse mask_type: null diar_model_conf: diar_weight: 0.2 attractor_weight: 0.2 subtask_series: - enh - diar model_conf: calc_enh_loss: true bypass_enh_prob: 0 use_preprocessor: true token_type: bpe bpemodel: null src_token_type: bpe src_bpemodel: null non_linguistic_symbols: null cleaner: null g2p: null enh_encoder: conv enh_encoder_conf: channel: 512 kernel_size: 16 stride: 8 enh_separator: tcn_nomask enh_separator_conf: layer: 8 stack: 3 bottleneck_dim: 128 hidden_dim: 512 kernel: 3 causal: false norm_type: gLN enh_decoder: conv enh_decoder_conf: channel: 512 kernel_size: 16 stride: 8 enh_mask_module: multi_mask enh_mask_module_conf: max_num_spk: 3 mask_nonlinear: relu bottleneck_dim: 128 frontend: default frontend_conf: {} specaug: null specaug_conf: {} normalize: utterance_mvn normalize_conf: {} asr_preencoder: null asr_preencoder_conf: {} asr_encoder: rnn asr_encoder_conf: {} asr_postencoder: null asr_postencoder_conf: {} asr_decoder: rnn asr_decoder_conf: {} st_preencoder: null st_preencoder_conf: {} st_encoder: rnn st_encoder_conf: {} st_postencoder: null st_postencoder_conf: {} st_decoder: rnn st_decoder_conf: {} st_extra_asr_decoder: rnn st_extra_asr_decoder_conf: {} st_extra_mt_decoder: rnn st_extra_mt_decoder_conf: {} diar_frontend: default diar_frontend_conf: hop_length: 64 fs: 8000 diar_specaug: null diar_specaug_conf: {} diar_normalize: utterance_mvn diar_normalize_conf: {} diar_encoder: transformer diar_encoder_conf: input_layer: conv2d8 
num_blocks: 4 linear_units: 512 dropout_rate: 0.1 output_size: 256 attention_heads: 4 attention_dropout_rate: 0.1 diar_decoder: linear diar_decoder_conf: {} label_aggregator: label_aggregator label_aggregator_conf: win_length: 256 hop_length: 64 diar_attractor: rnn diar_attractor_conf: unit: 256 layer: 1 dropout: 0.0 attractor_grad: true required: - output_dir version: '202205' distributed: false ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
Ar4ikov/wav2vec2_bert_fusion_iemocap_3
Ar4ikov
2023-05-04T06:37:36Z
52
0
transformers
[ "transformers", "pytorch", "tensorboard", "feature-extraction", "generated_from_trainer", "custom_code", "region:us" ]
feature-extraction
2023-05-04T06:21:09Z
--- tags: - generated_from_trainer model-index: - name: wav2vec2_bert_fusion_iemocap_3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2_bert_fusion_iemocap_3 This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu117 - Datasets 2.11.0 - Tokenizers 0.13.2
jeremyvictor/mt5-base-gecid23-e3
jeremyvictor
2023-05-04T06:21:31Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "mt5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-05-04T04:05:39Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: mt5-base-gecid23-e3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-base-gecid23-e3 This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2913 - Rouge1: 64.5987 - Rouge2: 58.284 - Rougel: 64.5263 - Rougelsum: 64.5192 - Gen Len: 18.7512 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adafactor - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 1.5553 | 0.25 | 221 | 0.5815 | 58.7873 | 48.3787 | 58.6622 | 58.6428 | 18.7486 | | 0.6944 | 0.5 | 442 | 0.5010 | 60.225 | 50.8407 | 60.1109 | 60.0966 | 18.7418 | | 0.5891 | 0.75 | 663 | 0.4477 | 61.4891 | 53.2811 | 61.4099 | 61.4089 | 18.7588 | | 0.5145 | 1.0 | 884 | 0.3926 | 62.3704 | 54.3562 | 62.255 | 62.252 | 18.7520 | | 0.3682 | 1.25 | 1105 | 0.3805 | 62.4976 | 54.8233 | 62.4265 | 62.4327 | 18.7622 | | 0.3332 | 1.5 | 1326 | 0.3471 | 63.2736 | 56.0263 | 63.1982 | 63.1901 | 18.7495 | | 0.3097 | 1.75 | 1547 | 0.3173 | 63.5672 | 56.5358 | 63.4813 | 63.4756 | 18.7541 | | 0.2958 | 2.0 | 1768 | 0.3219 | 63.8092 | 57.1715 | 63.7764 | 63.7692 | 18.7512 | | 0.1901 | 2.25 | 1989 | 0.3053 | 64.1292 | 57.5296 | 64.052 | 64.0478 | 18.7533 | | 
0.1861 | 2.5 | 2210 | 0.3018 | 64.4658 | 58.0416 | 64.3975 | 64.3918 | 18.7537 | | 0.1696 | 2.75 | 2431 | 0.2928 | 64.5337 | 58.1328 | 64.4735 | 64.4619 | 18.7507 | | 0.1691 | 3.0 | 2652 | 0.2913 | 64.5987 | 58.284 | 64.5263 | 64.5192 | 18.7512 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
adiga20/git-base-pokemon
adiga20
2023-05-04T05:44:11Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-04T05:44:11Z
--- license: creativeml-openrail-m ---
StevenLimcorn/unsup-promcse-bert-base-uncased-semeval2015-restaurants
StevenLimcorn
2023-05-04T05:42:15Z
98
0
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "feature-extraction", "generated_from_keras_callback", "endpoints_compatible", "region:us" ]
feature-extraction
2023-05-03T17:04:57Z
--- tags: - generated_from_keras_callback model-index: - name: semeval-unsup-promcse-bert-base-uncased-semeval2015-restaurants results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # semeval-unsup-promcse-bert-base-uncased-semeval2015-restaurants This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.28.1 - TensorFlow 2.12.0 - Tokenizers 0.13.3
tawfiq/text_sumurization
tawfiq
2023-05-04T05:41:22Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-04T05:41:22Z
--- license: creativeml-openrail-m ---
StevenLimcorn/unsup-promcse-bert-base-uncased-semeval2015-laptops
StevenLimcorn
2023-05-04T05:41:13Z
94
0
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "feature-extraction", "generated_from_keras_callback", "endpoints_compatible", "region:us" ]
feature-extraction
2023-05-03T17:00:54Z
--- tags: - generated_from_keras_callback model-index: - name: semeval-unsup-promcse-bert-base-uncased-semeval2015-laptops results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # semeval-unsup-promcse-bert-base-uncased-semeval2015-laptops This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.28.1 - TensorFlow 2.12.0 - Tokenizers 0.13.3
StevenLimcorn/unsup-promcse-bert-base-uncased-facebook-election-ads
StevenLimcorn
2023-05-04T05:40:44Z
97
0
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "feature-extraction", "generated_from_keras_callback", "endpoints_compatible", "region:us" ]
feature-extraction
2023-05-03T17:03:37Z
--- tags: - generated_from_keras_callback model-index: - name: semeval-unsup-promcse-bert-base-uncased-facebook-election-ads results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # semeval-unsup-promcse-bert-base-uncased-facebook-election-ads This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.28.1 - TensorFlow 2.12.0 - Tokenizers 0.13.3
StevenLimcorn/unsup-promcse-bert-base-uncased-semeval2016-restaurants
StevenLimcorn
2023-05-04T05:34:03Z
92
0
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "feature-extraction", "generated_from_keras_callback", "endpoints_compatible", "region:us" ]
feature-extraction
2023-05-03T17:06:28Z
--- tags: - generated_from_keras_callback model-index: - name: semeval-unsup-promcse-bert-base-uncased-semeval2016-restaurants results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # semeval-unsup-promcse-bert-base-uncased-semeval2016-restaurants This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.28.1 - TensorFlow 2.12.0 - Tokenizers 0.13.3
StevenLimcorn/unsup-promcse-bert-base-uncased-semeval2014-restaurants
StevenLimcorn
2023-05-04T05:32:50Z
88
0
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "feature-extraction", "generated_from_keras_callback", "endpoints_compatible", "region:us" ]
feature-extraction
2023-05-03T17:02:15Z
--- tags: - generated_from_keras_callback model-index: - name: semeval-unsup-promcse-bert-base-uncased-semeval2014-restaurants results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # semeval-unsup-promcse-bert-base-uncased-semeval2014-restaurants This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.28.1 - TensorFlow 2.12.0 - Tokenizers 0.13.3
StevenLimcorn/unsup-promcse-bert-base-uncased-semeval2014-laptops
StevenLimcorn
2023-05-04T05:32:00Z
104
0
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "feature-extraction", "generated_from_keras_callback", "endpoints_compatible", "region:us" ]
feature-extraction
2023-05-03T17:07:54Z
--- tags: - generated_from_keras_callback model-index: - name: semeval-unsup-promcse-bert-base-uncased-semeval2014-laptops results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # semeval-unsup-promcse-bert-base-uncased-semeval2014-laptops This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.28.1 - TensorFlow 2.12.0 - Tokenizers 0.13.3
imania/amir_take_home_result-2023_05_03-22_33_43
imania
2023-05-04T05:03:42Z
179
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-04T04:52:50Z
--- language: - en library_name: transformers pipeline_tag: text-classification ---
Khh143/Kinkalow
Khh143
2023-05-04T05:00:31Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-04T04:20:24Z
--- license: creativeml-openrail-m ---
P1NHE4D/whisper-medium-nn-v3
P1NHE4D
2023-05-04T04:57:41Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "generated_from_trainer", "nn", "dataset:norwegian-parliament", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-05-03T12:25:27Z
--- language: - nn license: apache-2.0 tags: - generated_from_trainer datasets: - norwegian-parliament metrics: - wer model-index: - name: whisper-medium-nn-v3 results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Stortingskorpuset type: norwegian-parliament config: default split: validation args: default metrics: - name: Wer type: wer value: 11.337582785573966 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-medium-nn-v3 This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the Stortingskorpuset dataset. It achieves the following results on the evaluation set: - Loss: 0.2116 - Wer: 11.3376 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - training_steps: 8000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.4413 | 0.25 | 2000 | 0.4447 | 26.7707 | | 0.1945 | 1.1 | 4000 | 0.3042 | 17.8344 | | 0.1013 | 1.35 | 6000 | 0.2421 | 14.2138 | | 0.0308 | 2.2 | 8000 | 0.2116 | 11.3376 | ### Framework versions - Transformers 4.27.4 - Pytorch 2.0.0+cu117 - Datasets 2.11.0 - Tokenizers 0.13.2
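The `whisper-medium-nn-v3` card above reports word error rate (WER). The metric itself is simply word-level edit distance divided by the reference length; a minimal sketch of the standard definition (not this repo's exact evaluation code):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution / match
    return dp[len(ref)][len(hyp)] / len(ref)

# one substitution + one deletion over four reference words
print(wer("god morgon alle saman", "god morgen alle"))  # 0.5
```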
akdeniz27/deberta-v2-xlarge-cuad
akdeniz27
2023-05-04T04:52:54Z
121
1
transformers
[ "transformers", "pytorch", "safetensors", "deberta-v2", "question-answering", "en", "dataset:cuad", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- language: en datasets: - cuad --- # DeBERTa v2 XLarge Model fine-tuned with CUAD dataset This model is the fine-tuned version of "DeBERTa v2 XLarge" using the CUAD dataset: https://huggingface.co/datasets/cuad Link for the model checkpoint: https://github.com/TheAtticusProject/cuad For using the model with CUAD, see: https://github.com/marshmellow77/cuad-demo and https://huggingface.co/spaces/akdeniz27/contract-understanding-atticus-dataset-demo
chastelove/distilbert-base-uncased_emotion_ft_0504
chastelove
2023-05-04T04:44:17Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-04T04:22:35Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 - precision model-index: - name: distilbert-base-uncased_emotion_ft_0504 results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.935 - name: F1 type: f1 value: 0.9353661273711807 - name: Precision type: precision value: 0.9062644261189533 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased_emotion_ft_0504 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.1552 - Accuracy: 0.935 - F1: 0.9354 - Precision: 0.9063 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:| | 0.7741 | 1.0 | 250 | 0.2686 | 0.909 | 0.9070 | 0.8911 | | 0.2073 | 2.0 | 500 | 0.1767 | 0.9315 | 0.9319 | 0.9013 | | 0.1397 | 3.0 | 750 | 0.1581 | 0.935 | 0.9353 | 0.9081 | | 0.1123 | 4.0 | 1000 | 0.1552 | 0.935 | 0.9354 | 0.9063 | ### Framework versions - Transformers 4.28.1 - Pytorch 1.13.1 - Datasets 2.12.0 - Tokenizers 0.11.0
douglasfaisal/granularity-legal-reranker-cross-encoder-indobert-base-p2
douglasfaisal
2023-05-04T04:42:49Z
114
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "legal", "id", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-04T04:30:07Z
--- license: mit language: - id tags: - legal ---
shawt100/shawtsanders
shawt100
2023-05-04T04:14:50Z
36
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "dataset:OpenAssistant/oasst1", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-05-04T03:46:42Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion datasets: - OpenAssistant/oasst1 metrics: - character library_name: diffusers pipeline_tag: text-to-image --- ### shawtsanders Dreambooth model trained by shawt100 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
joseph-t/purrfect-ai-test
joseph-t
2023-05-04T03:44:02Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-04T03:44:02Z
--- license: creativeml-openrail-m ---
muwenxin/autotrain-xgwbishe1-55280129011
muwenxin
2023-05-04T03:36:20Z
110
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "autotrain", "summarization", "en", "dataset:muwenxin/autotrain-data-xgwbishe1", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2023-05-04T03:33:39Z
--- tags: - autotrain - summarization language: - en widget: - text: "I love AutoTrain 🤗" datasets: - muwenxin/autotrain-data-xgwbishe1 co2_eq_emissions: emissions: 1.082559894922486 --- # Model Trained Using AutoTrain - Problem type: Summarization - Model ID: 55280129011 - CO2 Emissions (in grams): 1.0826 ## Validation Metrics - Loss: 3.334 - Rouge1: 15.894 - Rouge2: 3.281 - RougeL: 11.775 - RougeLsum: 13.844 - Gen Len: 20.000 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/muwenxin/autotrain-xgwbishe1-55280129011 ```
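For Python users, the same call can be assembled with the standard library; the sketch below only builds the request (nothing is sent), and assumes the standard Inference API URL pattern `api-inference.huggingface.co/models/<repo>` with a placeholder API key:

```python
import json

# Assumed standard Hugging Face Inference API path (note the /models/ segment).
API_URL = "https://api-inference.huggingface.co/models/muwenxin/autotrain-xgwbishe1-55280129011"

def build_request(text: str, api_key: str):
    """Assemble the same request as the cURL example above; nothing is sent here."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    payload = json.dumps({"inputs": text})
    return API_URL, headers, payload

# To actually send it you could do, e.g.:
# import requests
# url, headers, payload = build_request("I love AutoTrain", "YOUR_HUGGINGFACE_API_KEY")
# print(requests.post(url, headers=headers, data=payload).json())
```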
4bit/oasst-llama13b-4bit-128g
4bit
2023-05-04T03:10:55Z
6
1
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-05-04T02:57:06Z
https://wandb.ai/open-assistant/supervised-finetuning/runs/lguuq2c1 Quantized from https://huggingface.co/dvruette/oasst-llama-13b-2-epochs GGML Version: https://huggingface.co/Black-Engineer/oasst-llama13b-ggml-q4
4bit/koala-13B-GPTQ-4bit-128g
4bit
2023-05-04T02:54:46Z
7
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "koala", "ShareGPT", "gptq", "dataset:RyokoAI/ShareGPT52K", "dataset:Hello-SimpleAI/HC3", "license:other", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-05-04T02:48:14Z
--- license: other library_name: transformers pipeline_tag: text-generation datasets: - RyokoAI/ShareGPT52K - Hello-SimpleAI/HC3 tags: - koala - ShareGPT - llama - gptq inference: false --- # Koala: A Dialogue Model for Academic Research This repo contains the weights of the Koala 13B model produced at Berkeley. It is the result of combining the diffs from https://huggingface.co/young-geng/koala with the original Llama 13B model. This version has then been quantized to 4-bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa). ## My Koala repos I have the following Koala model repositories available: **13B models:** * [Unquantized 13B model in HF format](https://huggingface.co/TheBloke/koala-13B-HF) * [GPTQ quantized 4bit 13B model in `pt` and `safetensors` formats](https://huggingface.co/TheBloke/koala-13B-GPTQ-4bit-128g) * [GPTQ quantized 4bit 13B model in GGML format for `llama.cpp`](https://huggingface.co/TheBloke/koala-13B-GPTQ-4bit-128g-GGML) **7B models:** * [Unquantized 7B model in HF format](https://huggingface.co/TheBloke/koala-7B-HF) * [Unquantized 7B model in GGML format for llama.cpp](https://huggingface.co/TheBloke/koala-7b-ggml-unquantized) * [GPTQ quantized 4bit 7B model in `pt` and `safetensors` formats](https://huggingface.co/TheBloke/koala-7B-GPTQ-4bit-128g) * [GPTQ quantized 4bit 7B model in GGML format for `llama.cpp`](https://huggingface.co/TheBloke/koala-7B-GPTQ-4bit-128g-GGML) ## Provided files Three model files are provided. You don't need all three - choose the one that suits your needs best! Details of the files provided: * `koala-13B-4bit-128g.pt` * pt format file, created with the latest [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa) code. 
* Command to create: * `python3 llama.py koala-13B-HF c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save koala-13B-4bit-128g.pt` * `koala-13B-4bit-128g.safetensors` * newer `safetensors` format, with improved file security, created with the latest [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa) code. * Command to create: * `python3 llama.py koala-13B-HF c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save_safetensors koala-13B-4bit-128g.safetensors` * `koala-13B-4bit-128g.no-act-order.ooba.pt` * `pt` format file, created with [oobabooga's older CUDA fork of GPTQ-for-LLaMa](https://github.com/oobabooga/GPTQ-for-LLaMa). * This file is included primarily for Windows users, as it can be used without needing to compile the latest GPTQ-for-LLaMa code. * It should hopefully therefore work with one-click-installers on Windows, which include the older GPTQ-for-LLaMa code. * The older GPTQ code does not support all the latest features, so the quality may be fractionally lower. * Command to create: * `python3 llama.py koala-13B-HF c4 --wbits 4 --true-sequential --groupsize 128 --save koala-13B-4bit-128g.no-act-order.ooba.pt` ## How to run in `text-generation-webui` File `koala-13B-4bit-128g.no-act-order.ooba.pt` can be loaded the same as any other GPTQ file, without requiring any updates to [oobabooga's text-generation-webui](https://github.com/oobabooga/text-generation-webui). The other two model files were created with the latest GPTQ code, and require that the latest GPTQ-for-LLaMa is used inside the UI.
Here are the commands I used to clone the Triton branch of GPTQ-for-LLaMa, clone text-generation-webui, and install GPTQ into the UI: ``` git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa git clone https://github.com/oobabooga/text-generation-webui mkdir -p text-generation-webui/repositories ln -s "$PWD/GPTQ-for-LLaMa" text-generation-webui/repositories/GPTQ-for-LLaMa ``` Then install this model into `text-generation-webui/models` and launch the UI as follows: ``` cd text-generation-webui python server.py --model koala-13B-GPTQ-4bit-128g --wbits 4 --groupsize 128 --model_type Llama # add any other command line args you want ``` The above commands assume you have installed all dependencies for GPTQ-for-LLaMa and text-generation-webui. Please see their respective repositories for further information. If you are on Windows, or cannot use the Triton branch of GPTQ for any other reason, you can instead use the CUDA branch: ``` git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa -b cuda cd GPTQ-for-LLaMa python setup_cuda.py install ``` Then link that into `text-generation-webui/repositories` as described above. Or just use `koala-13B-4bit-128g.no-act-order.ooba.pt` as mentioned above.
## How the Koala delta weights were merged The Koala delta weights were originally merged using the following commands, producing [koala-13B-HF](https://huggingface.co/TheBloke/koala-13B-HF): ``` git clone https://github.com/young-geng/EasyLM git clone https://huggingface.co/TheBloke/llama-13b mkdir koala_diffs && cd koala_diffs && wget https://huggingface.co/young-geng/koala/resolve/main/koala_13b_diff_v2 cd EasyLM PYTHONPATH="${PWD}:$PYTHONPATH" python \ -m EasyLM.models.llama.convert_torch_to_easylm \ --checkpoint_dir=/content/llama-13b \ --output_file=/content/llama-13b-LM \ --streaming=True PYTHONPATH="${PWD}:$PYTHONPATH" python \ -m EasyLM.scripts.diff_checkpoint --recover_diff=True \ --load_base_checkpoint='params::/content/llama-13b-LM' \ --load_target_checkpoint='params::/content/koala_diffs/koala_13b_diff_v2' \ --output_file=/content/koala_13b.diff.weights \ --streaming=True PYTHONPATH="${PWD}:$PYTHONPATH" python \ -m EasyLM.models.llama.convert_easylm_to_hf --model_size=13b \ --output_dir=/content/koala-13B-HF \ --load_checkpoint='params::/content/koala_13b.diff.weights' \ --tokenizer_path=/content/llama-13b/tokenizer.model ``` ## Further info Check out the following links to learn more about the Berkeley Koala model. * [Blog post](https://bair.berkeley.edu/blog/2023/04/03/koala/) * [Online demo](https://koala.lmsys.org/) * [EasyLM: training and serving framework on GitHub](https://github.com/young-geng/EasyLM) * [Documentation for running Koala locally](https://github.com/young-geng/EasyLM/blob/main/docs/koala.md) ## License The model weights are intended for academic research only, subject to the [model License of LLaMA](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md), [Terms of Use of the data generated by OpenAI](https://openai.com/policies/terms-of-use), and [Privacy Practices of ShareGPT](https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb).
Any other usage of the model weights, including but not limited to commercial usage, is strictly prohibited.
Smoden/pinocchio_diff_lora_1500
Smoden
2023-05-04T02:38:17Z
4
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-05-04T00:47:15Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA text2image fine-tuning - Smoden/pinocchio_diff_lora_1500 These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the None dataset. You can find some example images in the following. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png)
platzi/platzi-distilroberta-base-mrpc-glue-cristian-durango
platzi
2023-05-04T01:52:35Z
107
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-04T01:33:56Z
--- license: apache-2.0 tags: - text-classification - generated_from_trainer datasets: - glue metrics: - accuracy - f1 model-index: - name: platzi-distilroberta-base-mrpc-glue-cristian-durango results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: mrpc split: validation args: mrpc metrics: - name: Accuracy type: accuracy value: 0.8259803921568627 - name: F1 type: f1 value: 0.8794567062818336 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # platzi-distilroberta-base-mrpc-glue-cristian-durango This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the glue and the mrpc datasets. It achieves the following results on the evaluation set: - Loss: 0.4245 - Accuracy: 0.8260 - F1: 0.8795 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.5318 | 1.09 | 500 | 0.4245 | 0.8260 | 0.8795 | | 0.3704 | 2.18 | 1000 | 0.6045 | 0.8309 | 0.8739 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
saiyoung/cobauli
saiyoung
2023-05-04T01:48:32Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-03T17:56:37Z
--- license: creativeml-openrail-m ---
LottePeisch/RevAnimated-Diffusers
LottePeisch
2023-05-04T01:42:13Z
133
3
diffusers
[ "diffusers", "safetensors", "text-to-image", "en", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-04-30T09:56:58Z
--- license: creativeml-openrail-m language: - en library_name: diffusers pipeline_tag: text-to-image --- This is a Diffusers version of Rev Animated version 1.2.2, specifically for InvokeAI users. The original checkpoint with all the info on the model can be found here: https://civitai.com/models/7371/rev-animated The differences include: - Changed the scheduler to EulerAncestralDiscreteScheduler, which works very well with Euler_A. - Updated the text_encoder config.json file, setting num_hidden_layers to 11 instead of the default 12. This is the equivalent of 'Clip Skip 2' in Auto1111 as I understand it and have tested it. Please let me know on the InvokeAI Discord if you encounter issues. - Don't expect it to build the exact same image from the exact same seed as you would in Auto. Invoke and Auto are very different from one another, and diffusers are even more different. You should, however, get some awesome images. - I'm sharing this because diffusers are amazing and I think more people should use them. ;) - Comes with the default vae used during the conversion into diffusers format. The original author recommends a few different vaes at the link above; I wanted you to be able to mix and match. The examples below were made without a vae. Here are a few example images: ![Example Image](https://huggingface.co/LottePeisch/RevAnimated-Diffusers/resolve/main/000093.e6ebf3a2.1984434156.postprocessed.png) ![Example Image](https://huggingface.co/LottePeisch/RevAnimated-Diffusers/resolve/main/000696.ebc1735c.unknown_seed.postprocessed.png) ![Example Image](https://huggingface.co/LottePeisch/RevAnimated-Diffusers/resolve/main/000755.533d4f0a.unknown_seed.postprocessed.png)
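The num_hidden_layers tweak described above amounts to editing one field in the text encoder's config.json (in a diffusers folder layout that file sits under `text_encoder/config.json` — an assumption about this repo's layout). A sketch of that edit, demonstrated on a throwaway config rather than a real model folder:

```python
import json
import tempfile
from pathlib import Path

def apply_clip_skip_2(text_encoder_dir: Path) -> None:
    """Rewrite config.json so the CLIP text encoder stops one hidden layer early
    (the 'Clip Skip 2' equivalent described in the card above)."""
    cfg_path = text_encoder_dir / "config.json"
    cfg = json.loads(cfg_path.read_text())
    cfg["num_hidden_layers"] = cfg.get("num_hidden_layers", 12) - 1  # 12 -> 11
    cfg_path.write_text(json.dumps(cfg, indent=2))

# Demo on a temporary folder mimicking the stock 12-layer CLIP text encoder:
with tempfile.TemporaryDirectory() as d:
    enc = Path(d)
    (enc / "config.json").write_text(json.dumps({"num_hidden_layers": 12}))
    apply_clip_skip_2(enc)
    print(json.loads((enc / "config.json").read_text())["num_hidden_layers"])  # 11
```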
juan-barsce/my_awesome_eli5_clm-model
juan-barsce
2023-05-04T01:31:51Z
63
0
transformers
[ "transformers", "tf", "gpt2", "text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-05-04T01:14:01Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: juan-barsce/my_awesome_eli5_clm-model results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # juan-barsce/my_awesome_eli5_clm-model This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 3.7254 - Validation Loss: 3.7653 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 3.9035 | 3.7936 | 0 | | 3.7854 | 3.7763 | 1 | | 3.7254 | 3.7653 | 2 | ### Framework versions - Transformers 4.28.1 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
ToddGoldfarb/Cadet-Medium
ToddGoldfarb
2023-05-04T01:31:07Z
47
2
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "conversational", "en", "dataset:allenai/soda", "license:openrail", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-05-03T02:36:53Z
--- license: openrail datasets: - allenai/soda language: - en pipeline_tag: conversational --- # What is Cadet-Medium? Inspired by Allen AI's **Cosmo-XL**, **Cadet-Medium** is a somewhat small conversational model trained off of the **SODA** dataset. **Cadet-Medium** is intended for inference at the edge (on something as small as a 2GB RAM Raspberry Pi). **Cadet-Medium** is trained off of the **t5-base** pretrained model from Google. If you have any questions, or any comments on improvements, please contact me at: **tcgoldfarb@gmail.com** # Google Colab Link Here is the link to the Google Colab file, where I walk through the process of training the model and using the SODA public dataset from AI2. https://colab.research.google.com/drive/1uekZ0gO3GqjPwno16tV1A4Gitrl7p3ur?usp=sharing # Get Started With Cadet-Medium Use the code snippet below to get started with Cadet-Medium! ``` import torch from transformers import AutoTokenizer, AutoModelForSeq2SeqLM import colorful as cf cf.use_true_colors() cf.use_style('monokai') class CadetMedAgent: def __init__(self): print(cf.bold | cf.purple("Waking up Cadet-Medium...")) self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu") self.tokenizer = AutoTokenizer.from_pretrained("t5-base", model_max_length=512) self.model = AutoModelForSeq2SeqLM.from_pretrained("ToddGoldfarb/Cadet-Medium", low_cpu_mem_usage=True).to(self.device) self.conversation_history = "" def observe(self, observation): self.conversation_history = self.conversation_history + observation # The number 400 below is just a truncation safety net. It leaves room for 112 input tokens. 
if len(self.conversation_history) > 400: self.conversation_history = self.conversation_history[112:] def set_input(self, situation_narrative="", role_instruction=""): input_text = "dialog: " if situation_narrative != "": input_text = input_text + situation_narrative if role_instruction != "": input_text = input_text + " <SEP> " + role_instruction input_text = input_text + " <TURN> " + self.conversation_history # Uncomment the line below to see what is fed to the model. # print(input_text) return input_text def generate(self, situation_narrative, role_instruction, user_response): user_response = user_response + " <TURN> " self.observe(user_response) input_text = self.set_input(situation_narrative, role_instruction) inputs = self.tokenizer([input_text], return_tensors="pt").to(self.device) # I encourage you to change the hyperparameters of the model! Start by trying to modify the temperature. outputs = self.model.generate(inputs["input_ids"], max_new_tokens=512, temperature=1, top_p=.95, do_sample=True) cadet_response = self.tokenizer.decode(outputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=False) added_turn = cadet_response + " <TURN> " self.observe(added_turn) return cadet_response def reset_history(self): self.conversation_history = "" def run(self): def get_valid_input(prompt, default): while True: user_input = input(prompt) if user_input in ["Y", "N", "y", "n"]: return user_input if user_input == "": return default while True: continue_chat = "" # MODIFY THESE STRINGS TO YOUR LIKING :) situation_narrative = "Imagine you are Cadet-Medium talking to ???." role_instruction = "You are Cadet-Medium, and you are talking to ???." self.chat(situation_narrative, role_instruction) continue_chat = get_valid_input(cf.purple("Start a new conversation with new setup? [Y/N]:"), "Y") if continue_chat in ["N", "n"]: break print(cf.blue("CM: See you!")) def chat(self, situation_narrative, role_instruction): print(cf.green( "Cadet-Medium is running!
Input [RESET] to reset the conversation history and [END] to end the conversation.")) while True: user_input = input("You: ") if user_input == "[RESET]": self.reset_history() print(cf.green("[Conversation history cleared. Chat with Cadet-Medium!]")) continue if user_input == "[END]": break response = self.generate(situation_narrative, role_instruction, user_input) print(cf.blue("CM: " + response)) def main(): print(cf.bold | cf.blue("LOADING MODEL")) CadetMed = CadetMedAgent() CadetMed.run() if __name__ == '__main__': main() ``` # Citations and Special Thanks Special thanks to Hyunwoo Kim for discussing with me the best way to use the SODA dataset. If you haven't looked into their work with SODA, Prosocial-Dialog, or COSMO, I recommend you do so! As well, read the paper on SODA! The article is listed below. ``` @article{kim2022soda, title={SODA: Million-scale Dialogue Distillation with Social Commonsense Contextualization}, author={Hyunwoo Kim and Jack Hessel and Liwei Jiang and Peter West and Ximing Lu and Youngjae Yu and Pei Zhou and Ronan Le Bras and Malihe Alikhani and Gunhee Kim and Maarten Sap and Yejin Choi}, journal={ArXiv}, year={2022}, volume={abs/2212.10465} } ```
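As a standalone sketch of the character-window "truncation safety net" used in `CadetMedAgent.observe` above — the 400 and 112 constants come straight from that class, and treating character counts as a proxy for token counts is the original code's simplification:

```python
def observe(history: str, observation: str, max_chars: int = 400, drop_chars: int = 112) -> str:
    """Mirror of CadetMedAgent.observe: append a turn, then drop the oldest
    characters once the rolling window exceeds max_chars."""
    history = history + observation
    if len(history) > max_chars:
        history = history[drop_chars:]
    return history

# 395 + 13 = 408 characters exceeds the 400-char window, so the oldest 112 are dropped:
print(len(observe("x" * 395, " <TURN> hello")))  # 296
```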
rcugarte/genfonts
rcugarte
2023-05-04T01:28:39Z
0
0
null
[ "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dataset:rcugarte/genfonts_data", "region:us" ]
text-to-image
2023-05-04T01:19:53Z
--- datasets: - rcugarte/genfonts_data tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image ---
junelee/wizard-vicuna-13b
junelee
2023-05-04T01:23:39Z
2,682
77
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-05-03T20:46:24Z
https://github.com/melodysdreamj/WizardVicunaLM
ZyXin/ppo-Pyramids_Training
ZyXin
2023-05-04T01:14:39Z
3
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2023-05-04T01:14:34Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial for training your first agent with ML-Agents and publishing it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Find your model_id: ZyXin/ppo-Pyramids_Training 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
msr10/en_esg_ner
msr10
2023-05-04T01:07:29Z
3
0
spacy
[ "spacy", "token-classification", "en", "model-index", "region:us" ]
token-classification
2023-05-04T01:06:49Z
--- tags: - spacy - token-classification language: - en model-index: - name: en_esg_ner results: - task: name: NER type: token-classification metrics: - name: NER Precision type: precision value: 0.9211469534 - name: NER Recall type: recall value: 0.9191702432 - name: NER F Score type: f_score value: 0.9201575367 --- | Feature | Description | | --- | --- | | **Name** | `en_esg_ner` | | **Version** | `0.0.0` | | **spaCy** | `>=3.5.2,<3.6.0` | | **Default Pipeline** | `transformer`, `ner` | | **Components** | `transformer`, `ner` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | n/a | | **License** | n/a | | **Author** | [n/a]() | ### Label Scheme <details> <summary>View label scheme (3 labels for 1 components)</summary> | Component | Labels | | --- | --- | | **`ner`** | `Environmental`, `Governance`, `Social` | </details> ### Accuracy | Type | Score | | --- | --- | | `ENTS_F` | 92.02 | | `ENTS_P` | 92.11 | | `ENTS_R` | 91.92 | | `TRANSFORMER_LOSS` | 14719.85 | | `NER_LOSS` | 10789.72 |
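The NER F score in the metrics above is the harmonic mean of the reported precision and recall, which is easy to sanity-check:

```python
precision = 0.9211469534  # NER Precision from the metrics block above
recall = 0.9191702432     # NER Recall from the metrics block above

# F1 is the harmonic mean of precision and recall:
f_score = 2 * precision * recall / (precision + recall)
print(round(f_score, 4))  # 0.9202, consistent with the reported NER F Score
```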
1008611sS/111
1008611sS
2023-05-04T01:06:04Z
0
0
null
[ "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2023-05-04T01:06:04Z
--- license: bigscience-bloom-rail-1.0 ---
DurangoFon/vit_model
DurangoFon
2023-05-04T00:55:55Z
216
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "dataset:beans", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-05-04T00:07:22Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - beans metrics: - accuracy model-index: - name: vit_model results: - task: name: Image Classification type: image-classification dataset: name: beans type: beans config: default split: validation args: default metrics: - name: Accuracy type: accuracy value: 0.9924812030075187 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit_model This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.0189 - Accuracy: 0.9925 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1345 | 3.85 | 500 | 0.0189 | 0.9925 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
Multi-Domain-Expert-Learning/expert-pubmed_central
Multi-Domain-Expert-Learning
2023-05-04T00:43:15Z
150
0
transformers
[ "transformers", "pytorch", "gpt_neox", "text-generation", "generated_from_trainer", "dataset:Multi-Domain-Expert-Layers/pubmed_central", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-05-03T19:14:52Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - Multi-Domain-Expert-Layers/pubmed_central metrics: - accuracy model-index: - name: layer_9,10,11,12,13 results: - task: type: text-generation name: Causal Language Modeling dataset: name: Multi-Domain-Expert-Layers/pubmed_central type: Multi-Domain-Expert-Layers/pubmed_central split: None metrics: - type: accuracy value: 0.5767534246575342 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # layer_9,10,11,12,13 This model is a fine-tuned version of [EleutherAI/pythia-1b-deduped](https://huggingface.co/EleutherAI/pythia-1b-deduped) on the Multi-Domain-Expert-Layers/pubmed_central dataset. It achieves the following results on the evaluation set: - Loss: 2.0227 - Accuracy: 0.5768 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.0567 | 0.0 | 200 | 2.0533 | 0.5717 | | 2.041 | 0.01 | 400 | 2.0438 | 0.5733 | | 2.0496 | 0.01 | 600 | 2.0361 | 0.5749 | | 2.0194 | 0.02 | 800 | 2.0276 | 0.5761 | | 2.0338 | 0.02 | 1000 | 2.0227 | 0.5768 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu117 - Datasets 2.11.0 - Tokenizers 0.13.3 ## Wandb Report 
https://wandb.ai/ontocord/pythia-1b-deduped-layer-test-pubmed_central/runs/yy3pwx0o
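The batch-size hyperparameters above are internally consistent: the reported total_train_batch_size is the product of the per-device batch size, the device count, and the gradient accumulation steps. A quick sketch (the helper name is illustrative, not part of the training script):

```python
def effective_batch_size(per_device_batch: int, num_devices: int, grad_accum_steps: int) -> int:
    """Number of samples contributing to one optimizer step."""
    return per_device_batch * num_devices * grad_accum_steps

# Values from the hyperparameter list above: train_batch_size=1, 8 GPUs, 8 accumulation steps.
print(effective_batch_size(1, 8, 8))  # 64, matching total_train_batch_size
```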
ZyXin/ppo-SnowballTarget
ZyXin
2023-05-04T00:32:00Z
4
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2023-05-04T00:31:54Z
--- library_name: ml-agents tags: - SnowballTarget - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget 2. Find your model_id: ZyXin/ppo-SnowballTarget 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
kreepy/poca-SoccerTwos
kreepy
2023-05-04T00:18:40Z
29
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
reinforcement-learning
2023-05-03T20:55:50Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos library_name: ml-agents --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Write your model_id: kreepy/poca-SoccerTwos 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
EinsZwo/en-to-de_coref_words_moreEpochs
EinsZwo
2023-05-03T23:21:37Z
62
0
transformers
[ "transformers", "tf", "marian", "text2text-generation", "generated_from_keras_callback", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-05-03T22:13:31Z
--- license: cc-by-4.0 tags: - generated_from_keras_callback model-index: - name: EinsZwo/en-to-de_coref_words_moreEpochs results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # EinsZwo/en-to-de_coref_words_moreEpochs This model is a fine-tuned version of [EinsZwo/en-to-de_coref_words](https://huggingface.co/EinsZwo/en-to-de_coref_words) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.7690 - Validation Loss: 1.4251 - Epoch: 6 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 40649, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 1.2116 | 1.3502 | 0 | | 1.0925 | 1.3576 | 1 | | 0.9982 | 1.3748 | 2 | | 0.9210 | 1.3966 | 3 | | 0.8575 | 1.4064 | 4 | | 0.8064 | 1.4174 | 5 | | 0.7690 | 1.4251 | 6 | ### Framework versions - Transformers 4.28.1 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
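The `PolynomialDecay` schedule above uses `power: 1.0` and `end_learning_rate: 0.0`, which reduces to a plain linear decay over `decay_steps`. A rough stand-alone sketch of the resulting learning rate (the function name is illustrative; this mirrors the Keras formula, not the exact implementation):

```python
def polynomial_decay(step, initial_lr=5e-05, end_lr=0.0, decay_steps=40649, power=1.0):
    """Learning rate at a given step under a (non-cycling) polynomial decay schedule."""
    # Clamp past the end of the schedule, as cycle=False implies.
    step = min(step, decay_steps)
    return (initial_lr - end_lr) * (1 - step / decay_steps) ** power + end_lr

print(polynomial_decay(0))      # 5e-05 at the start of training
print(polynomial_decay(40649))  # 0.0 once all decay steps have elapsed
```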
sqllama/lora-spider-dono
sqllama
2023-05-03T22:36:46Z
0
0
null
[ "region:us" ]
null
2023-04-30T01:00:50Z
## Setup Notes For this model, a VM with 2 T4 GPUs was used. Note 1. Output directory was initially lora-alpaca and then contents were moved to new folder when initializing git repository. ## Log (sqltest) chrisdono@deep-learning-duo-t4-3:~/alpaca-lora$ WORLD_SIZE=2 CUDA_VISIBLE_DEVICES=0,1 torchrun --nproc_per_node=2 --master_port=1234 finetune.py --base_model 'decapoda-research/llama-7b-hf' --data_path 'spider' --output_dir './lora-alpaca' --num_epochs 10 --batch_size 32 --micro_batch_size 16 --learning_rate '9e-5' --add_eos_token Adding last loss values not included in trainer json file from last checkpoint. {'loss': 0.241, 'learning_rate': 1.0040816326530613e-05, 'epoch': 8.98} {'loss': 0.2343, 'learning_rate': 9.42857142857143e-06, 'epoch': 9.04} {'loss': 0.2376, 'learning_rate': 8.816326530612245e-06, 'epoch': 9.11} {'loss': 0.2355, 'learning_rate': 8.204081632653062e-06, 'epoch': 9.17} {'loss': 0.229, 'learning_rate': 7.591836734693877e-06, 'epoch': 9.24} {'loss': 0.2325, 'learning_rate': 6.979591836734694e-06, 'epoch': 9.3} {'loss': 0.24, 'learning_rate': 6.367346938775511e-06, 'epoch': 9.36} {'loss': 0.2438, 'learning_rate': 5.755102040816327e-06, 'epoch': 9.43} {'loss': 0.2391, 'learning_rate': 5.142857142857143e-06, 'epoch': 9.49} {'loss': 0.2351, 'learning_rate': 4.530612244897959e-06, 'epoch': 9.55} {'loss': 0.2289, 'learning_rate': 3.9183673469387755e-06, 'epoch': 9.62} {'loss': 0.2294, 'learning_rate': 3.3061224489795924e-06, 'epoch': 9.68} {'loss': 0.2344, 'learning_rate': 2.693877551020408e-06, 'epoch': 9.75} {'loss': 0.2358, 'learning_rate': 2.0816326530612247e-06, 'epoch': 9.81} {'loss': 0.2365, 'learning_rate': 1.469387755102041e-06, 'epoch': 9.87} {'loss': 0.2309, 'learning_rate': 8.571428571428572e-07, 'epoch': 9.94} {'loss': 0.2438, 'learning_rate': 2.4489795918367347e-07, 'epoch': 10.0} 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 
1570/1570 [4:45:44<00:00, 10.92s/it] {'train_runtime': 17144.6766, 'train_samples_per_second': 2.916, 'train_steps_per_second': 0.092, 'train_loss': 0.41175747267000234, 'epoch': 10.0}
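The logged throughput can be cross-checked from the runtime and step count in the final log line; a small sanity-check sketch (variable names are illustrative):

```python
train_runtime_s = 17144.6766  # 'train_runtime' from the log above
total_steps = 1570            # total optimizer steps over 10 epochs

steps_per_second = total_steps / train_runtime_s
print(round(steps_per_second, 3))  # 0.092, matching 'train_steps_per_second'
```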
andli28/rl_course_vizdoom_health_gathering_supreme
andli28
2023-05-03T22:03:33Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-05-03T21:15:26Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 11.46 +/- 4.74 name: mean_reward verified: false --- An **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r andli28/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details. ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
jajsmith/dsn_afrispeech
jajsmith
2023-05-03T21:54:22Z
75
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "en", "dataset:tobiolatunji/afrispeech-200", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-05-03T17:17:19Z
--- language: - en license: apache-2.0 tags: - hf-asr-leaderboard - generated_from_trainer datasets: - tobiolatunji/afrispeech-200 model-index: - name: Whisper Small En - Owos results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small En - Owos This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the AfriSpeech_j dataset. It achieves the following results on the evaluation set: - eval_loss: 0.6865 - eval_wer: 29.3845 - eval_runtime: 1774.5798 - eval_samples_per_second: 1.691 - eval_steps_per_second: 0.211 - epoch: 0.06 - step: 250 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.27.0 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
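The `eval_wer` above is a word error rate expressed in percent. A minimal stand-alone sketch of how WER is computed: word-level edit distance divided by the reference length. This illustrates the idea behind the metric, not the exact evaluation script used here:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + insertions + deletions) / reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(ref)][len(hyp)] / len(ref)

# One substituted word out of four reference words -> 25% WER.
print(100 * wer("the patient was discharged", "the patient was recharged"))  # 25.0
```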
BrianPistar/focplanet
BrianPistar
2023-05-03T21:40:55Z
5
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-05-03T21:34:27Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### focplanet Dreambooth model trained by BrianPistar with [buildspace's DreamBooth](https://colab.research.google.com/github/buildspace/diffusers/blob/main/examples/dreambooth/DreamBooth_Stable_Diffusion.ipynb) notebook Build your own using the [AI Avatar project](https://buildspace.so/builds/ai-avatar)! To get started head over to the [project dashboard](https://buildspace.so/p/build-ai-avatars). Sample pictures of this concept:
vldnechai/poca-SoccerTwos
vldnechai
2023-05-03T21:39:07Z
36
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
reinforcement-learning
2023-05-03T21:37:51Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos library_name: ml-agents --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Write your model_id: vldnechai/poca-SoccerTwos 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
jploski/llama-7b-hf
jploski
2023-05-03T21:32:32Z
6
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-05-03T21:23:14Z
--- license: other --- Note: this is yahma/llama-7b-hf with checkpoint shards split into smaller files in order to enable loading in restricted memory environments like free Google Colab. The remaining description below is copied from yahma/llama-7b-hf. LLaMA-7B converted to work with git head Transformers/HuggingFace on April 8, 2023. This version should resolve the EOS token issues. This is under a special license, please see the LICENSE file for details. This contains the weights for the LLaMA-7b model. This model is under a non-commercial license (see the LICENSE file). You should only use this repository if you have been granted access to the model by filling out [this form](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform?usp=send_form) but either lost your copy of the weights or got some trouble converting them to the Transformers format. # LLaMA Model Card ## Model details **Organization developing the model** The FAIR team of Meta AI. **Model date** LLaMA was trained between December 2022 and February 2023. **Model version** This is version 1 of the model. **Model type** LLaMA is an auto-regressive language model, based on the transformer architecture. The model comes in different sizes: 7B, 13B, 33B and 65B parameters. **Paper or resources for more information** More information can be found in the paper “LLaMA, Open and Efficient Foundation Language Models”, available at https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/. **Citations details** https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/ **License** Non-commercial bespoke license **Where to send questions or comments about the model** Questions and comments about LLaMA can be sent via the [GitHub repository](https://github.com/facebookresearch/llama) of the project, by opening an issue.
## Intended use **Primary intended uses** The primary use of LLaMA is research on large language models, including: exploring potential applications such as question answering, natural language understanding or reading comprehension, understanding capabilities and limitations of current language models, and developing techniques to improve those, evaluating and mitigating biases, risks, toxic and harmful content generations, hallucinations. **Primary intended users** The primary intended users of the model are researchers in natural language processing, machine learning and artificial intelligence. **Out-of-scope use cases** LLaMA is a base, or foundational, model. As such, it should not be used on downstream applications without further risk evaluation and mitigation. In particular, our model has not been trained with human feedback, and can thus generate toxic or offensive content, incorrect information or generally unhelpful answers. ## Factors **Relevant factors** One of the most relevant factors for which model performance may vary is which language is used. Although we included 20 languages in the training data, most of our dataset is made of English text, and we thus expect the model to perform better for English than other languages. Relatedly, it has been shown in previous studies that performance might vary for different dialects, and we expect that it will be the case for our model. **Evaluation factors** As our model is trained on data from the Web, we expect that it reflects biases from this source. We thus evaluated on RAI datasets to measure biases exhibited by the model for gender, religion, race, sexual orientation, age, nationality, disability, physical appearance and socio-economic status. We also measure the toxicity of model generations, depending on the toxicity of the context used to prompt the model. 
## Metrics **Model performance measures** We use the following measure to evaluate the model: - Accuracy for common sense reasoning, reading comprehension, natural language understanding (MMLU), BIG-bench hard, WinoGender and CrowS-Pairs, - Exact match for question answering, - The toxicity score from Perspective API on RealToxicityPrompts. **Decision thresholds** Not applicable. **Approaches to uncertainty and variability** Due to the high computational requirements of training LLMs, we trained only one model of each size, and thus could not evaluate variability of pre-training. ## Evaluation datasets The model was evaluated on the following benchmarks: BoolQ, PIQA, SIQA, HellaSwag, WinoGrande, ARC, OpenBookQA, NaturalQuestions, TriviaQA, RACE, MMLU, BIG-bench hard, GSM8k, RealToxicityPrompts, WinoGender, CrowS-Pairs. ## Training dataset The model was trained using the following source of data: CCNet [67%], C4 [15%], GitHub [4.5%], Wikipedia [4.5%], Books [4.5%], ArXiv [2.5%], Stack Exchange[2%]. The Wikipedia and Books domains include data in the following languages: bg, ca, cs, da, de, en, es, fr, hr, hu, it, nl, pl, pt, ro, ru, sl, sr, sv, uk. See the paper for more details about the training set and corresponding preprocessing. 
## Quantitative analysis Hyperparameters for the model architecture <table> <thead> <tr> <th >LLaMA</th> <th colspan=6>Model hyper parameters </th> </tr> <tr> <th>Number of parameters</th><th>dimension</th><th>n heads</th><th>n layers</th><th>Learn rate</th><th>Batch size</th><th>n tokens</th> </tr> </thead> <tbody> <tr> <th>7B</th> <th>4096</th> <th>32</th> <th>32</th> <th>3.0E-04</th><th>4M</th><th>1T </tr> <tr> <th>13B</th><th>5120</th><th>40</th><th>40</th><th>3.0E-04</th><th>4M</th><th>1T </tr> <tr> <th>33B</th><th>6656</th><th>52</th><th>60</th><th>1.5.E-04</th><th>4M</th><th>1.4T </tr> <tr> <th>65B</th><th>8192</th><th>64</th><th>80</th><th>1.5.E-04</th><th>4M</th><th>1.4T </tr> </tbody> </table> *Table 1 - Summary of LLama Model Hyperparameters* We present our results on eight standard common sense reasoning benchmarks in the table below. <table> <thead> <tr> <th>LLaMA</th> <th colspan=9>Reasoning tasks </th> </tr> <tr> <th>Number of parameters</th> <th>BoolQ</th><th>PIQA</th><th>SIQA</th><th>HellaSwag</th><th>WinoGrande</th><th>ARC-e</th><th>ARC-c</th><th>OBQA</th><th>COPA</th> </tr> </thead> <tbody> <tr> <th>7B</th><th>76.5</th><th>79.8</th><th>48.9</th><th>76.1</th><th>70.1</th><th>76.7</th><th>47.6</th><th>57.2</th><th>93 </th> <tr><th>13B</th><th>78.1</th><th>80.1</th><th>50.4</th><th>79.2</th><th>73</th><th>78.1</th><th>52.7</th><th>56.4</th><th>94 </th> <tr><th>33B</th><th>83.1</th><th>82.3</th><th>50.4</th><th>82.8</th><th>76</th><th>81.4</th><th>57.8</th><th>58.6</th><th>92 </th> <tr><th>65B</th><th>85.3</th><th>82.8</th><th>52.3</th><th>84.2</th><th>77</th><th>81.5</th><th>56</th><th>60.2</th><th>94</th></tr> </tbody> </table> *Table 2 - Summary of LLama Model Performance on Reasoning tasks* We present our results on bias in the table below. Note that lower value is better indicating lower bias. 
| No | Category | FAIR LLM | | --- | -------------------- | -------- | | 1 | Gender | 70.6 | | 2 | Religion | 79 | | 3 | Race/Color | 57 | | 4 | Sexual orientation | 81 | | 5 | Age | 70.1 | | 6 | Nationality | 64.2 | | 7 | Disability | 66.7 | | 8 | Physical appearance | 77.8 | | 9 | Socioeconomic status | 71.5 | | | LLaMA Average | 66.6 | *Table 3 - Summary bias of our model output* ## Ethical considerations **Data** The data used to train the model is collected from various sources, mostly from the Web. As such, it contains offensive, harmful and biased content. We thus expect the model to exhibit such biases from the training data. **Human life** The model is not intended to inform decisions about matters central to human life, and should not be used in such a way. **Mitigations** We filtered the data from the Web based on its proximity to Wikipedia text and references. For this, we used a Kneser-Ney language model and a fastText linear classifier. **Risks and harms** Risks and harms of large language models include the generation of harmful, offensive or biased content. These models are often prone to generating incorrect information, sometimes referred to as hallucinations. We do not expect our model to be an exception in this regard. **Use cases** LLaMA is a foundational model, and as such, it should not be used for downstream applications without further investigation and mitigations of risks. These risks and potential fraught use cases include, but are not limited to: generation of misinformation and generation of harmful, biased or offensive content.
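The note at the top of this card mentions splitting checkpoint shards into smaller files so the weights can be loaded in restricted-memory environments. The core idea is greedily packing tensors into shards under a size cap; a rough sketch with made-up names and sizes (not the actual conversion script):

```python
def plan_shards(tensor_sizes: dict, max_shard_bytes: int) -> list:
    """Greedily pack tensors into shards that each stay within max_shard_bytes."""
    shards, current, current_bytes = [], [], 0
    for name, size in tensor_sizes.items():
        if current and current_bytes + size > max_shard_bytes:
            shards.append(current)           # close the current shard
            current, current_bytes = [], 0
        current.append(name)
        current_bytes += size
    if current:
        shards.append(current)
    return shards

# Hypothetical tensor sizes in bytes and a hypothetical per-shard cap.
sizes = {"embed": 500, "layer0": 400, "layer1": 400, "head": 300}
print(plan_shards(sizes, max_shard_bytes=900))  # [['embed', 'layer0'], ['layer1', 'head']]
```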
tooucci/CartPole
tooucci
2023-05-03T21:22:55Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-04-21T23:38:15Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: CartPole results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
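Reinforce updates its policy using discounted returns computed backwards over each episode. A minimal sketch of that computation (the gamma value is illustrative, not necessarily the one used for this agent):

```python
def discounted_returns(rewards, gamma=0.99):
    """G_t = r_t + gamma * G_{t+1}, accumulated backwards over an episode."""
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    return returns[::-1]

# CartPole-v1 gives +1 reward per surviving step; a 3-step episode:
print(discounted_returns([1.0, 1.0, 1.0]))  # approximately [2.9701, 1.99, 1.0]
```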
skierdude/SkiingTest
skierdude
2023-05-03T21:06:37Z
194
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-05-03T21:06:31Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: SkiingTest results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.7666666507720947 --- # SkiingTest Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### Big Mountain ![Big Mountain](images/Big_Mountain.jpg) #### Freeride World Tour ![Freeride World Tour](images/Freeride_World_Tour.jpg) #### Freestyle ![Freestyle](images/Freestyle.jpg) #### Skiing ![Skiing](images/Skiing.jpg)
Sjdan/sw_high_hp1_2
Sjdan
2023-05-03T21:06:35Z
107
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-05-03T19:28:09Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: sw_high_hp1_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sw_high_hp1_2 This model is a fine-tuned version of [facebook/wav2vec2-large-960h-lv60-self](https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 25 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
AliiaR/t5-small-finetuned-model
AliiaR
2023-05-03T21:01:54Z
63
0
transformers
[ "transformers", "tf", "tensorboard", "t5", "text2text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-05-02T20:28:10Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: AliiaR/t5-small-finetuned-model results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # AliiaR/t5-small-finetuned-model This model is a fine-tuned version of [AliiaR/t5-small-finetuned-model](https://huggingface.co/AliiaR/t5-small-finetuned-model) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.4127 - Validation Loss: 1.1016 - Train Rouge1: 14.9189 - Train Rouge2: 3.7554 - Train Rougel: 13.6461 - Train Rougelsum: 13.6801 - Train Gen Len: 13.4191 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Train Gen Len | Epoch | |:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-------------:|:-----:| | 1.4127 | 1.1016 | 14.9189 | 3.7554 | 13.6461 | 13.6801 | 13.4191 | 0 | ### Framework versions - Transformers 4.28.1 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
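The Rouge1 figures above are unigram-overlap F-measures. A minimal stand-alone sketch of the idea (the real `rouge_score` package additionally tokenizes and optionally stems, so its numbers differ):

```python
from collections import Counter

def rouge1_f(reference: str, prediction: str) -> float:
    """ROUGE-1 F-measure: F1 over overlapping unigram counts."""
    ref, pred = Counter(reference.split()), Counter(prediction.split())
    overlap = sum((ref & pred).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(pred.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# 5 of 6 unigrams overlap in both directions, so P = R = F = 5/6.
print(rouge1_f("the cat sat on the mat", "the cat lay on the mat"))
```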
pandma/es_pipeline
pandma
2023-05-03T20:54:53Z
4
0
spacy
[ "spacy", "token-classification", "es", "model-index", "region:us" ]
token-classification
2023-05-03T20:54:28Z
--- tags: - spacy - token-classification language: - es model-index: - name: es_pipeline results: - task: name: NER type: token-classification metrics: - name: NER Precision type: precision value: 0.998766394 - name: NER Recall type: recall value: 0.9988961039 - name: NER F Score type: f_score value: 0.9988312447 --- | Feature | Description | | --- | --- | | **Name** | `es_pipeline` | | **Version** | `0.0.0` | | **spaCy** | `>=3.5.2,<3.6.0` | | **Default Pipeline** | `transformer`, `ner` | | **Components** | `transformer`, `ner` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | n/a | | **License** | n/a | | **Author** | [n/a]() | ### Label Scheme <details> <summary>View label scheme (13 labels for 1 components)</summary> | Component | Labels | | --- | --- | | **`ner`** | `BILLING_PERIOD_END`, `BILLING_PERIOD_START`, `BILL_OWNER`, `COMPANY_NAME`, `CUPS`, `DIRECTION`, `ENERGY_P1_PRICE`, `ENERGY_P2_PRICE`, `ENERGY_P3_PRICE`, `NIF`, `POWER_P1_PRICE`, `POWER_P2_PRICE`, `TOTAL_IMPORTE` | </details> ### Accuracy | Type | Score | | --- | --- | | `ENTS_F` | 99.88 | | `ENTS_P` | 99.88 | | `ENTS_R` | 99.89 | | `TRANSFORMER_LOSS` | 6425.46 | | `NER_LOSS` | 41888.91 |
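The NER F score in the table is the harmonic mean of the listed precision and recall; a quick stand-alone check against the reported values:

```python
def f_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (F1)."""
    return 2 * precision * recall / (precision + recall)

p, r = 0.998766394, 0.9988961039  # NER Precision and NER Recall from the table above
print(f_score(p, r))  # approximately 0.9988312447, matching the reported NER F Score
```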
ThanHitt/FishTreeRock_Classifier_v1
ThanHitt
2023-05-03T20:37:34Z
241
1
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-05-03T20:37:27Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: FishTreeRock_Classifier_v1 results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.9850746393203735 --- # FishTreeRock_Classifier_v1 Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### fish ![fish](images/fish.jpg) #### rock ![rock](images/rock.jpg) #### tree ![tree](images/tree.jpg)
ashiyakatuka11/es_finetuned_T5
ashiyakatuka11
2023-05-03T20:34:26Z
61
0
transformers
[ "transformers", "tf", "t5", "text2text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-05-03T20:33:45Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: es_finetuned_T5 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # es_finetuned_T5 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.3287 - Train Accuracy: 0.9604 - Validation Loss: 0.3338 - Validation Accuracy: 0.9604 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 0.001, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.3418 | 0.9587 | 0.3421 | 0.9595 | 0 | | 0.3287 | 0.9604 | 0.3338 | 0.9604 | 1 | ### Framework versions - Transformers 4.28.1 - TensorFlow 2.11.0 - Datasets 2.12.0 - Tokenizers 0.13.3
nolanaatama/sspwtrsrn
nolanaatama
2023-05-03T20:23:57Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-03T20:21:31Z
--- license: creativeml-openrail-m ---
aboMesalam/my_awesome_swag_model
aboMesalam
2023-05-03T20:20:46Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "multiple-choice", "generated_from_trainer", "dataset:swag", "license:apache-2.0", "endpoints_compatible", "region:us" ]
multiple-choice
2023-05-03T20:19:39Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - swag model-index: - name: my_awesome_swag_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_swag_model This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the swag dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
nergaldarski/galenaREDUX
nergaldarski
2023-05-03T20:19:15Z
0
2
null
[ "region:us" ]
null
2023-05-03T16:26:38Z
CivitAI: https://civitai.com/models/53360/galena-redux
ameerazam08/autotrain-docker-check-1-55215128879
ameerazam08
2023-05-03T20:10:06Z
217
1
transformers
[ "transformers", "pytorch", "swin", "image-classification", "autotrain", "dataset:ameerazam08/autotrain-data-docker-check-1", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-05-03T20:09:45Z
---
tags:
- autotrain
- image-classification
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
  example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
  example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
  example_title: Palace
datasets:
- ameerazam08/autotrain-data-docker-check-1
co2_eq_emissions:
  emissions: 0
---

# Model Trained Using AutoTrain

- Problem type: Image Classification
- CO2 Emissions (in grams): 0.0000

## Validation Metrics

- loss: 0.725390613079071
- f1: 0.6666666666666666
- precision: 0.5
- recall: 1.0
- auc: 0.8
- accuracy: 0.5
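The validation metrics above are internally consistent: the reported f1 is the harmonic mean of the reported precision (0.5) and recall (1.0). A quick check in plain Python:

```python
# Precision and recall as reported in the card's validation metrics.
precision, recall = 0.5, 1.0

# F1 is the harmonic mean of precision and recall.
f1 = 2 * precision * recall / (precision + recall)
print(f1)  # 0.6666666666666666, matching the reported f1
```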
Readty/Larasbali
Readty
2023-05-03T19:59:39Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-03T19:58:57Z
---
license: creativeml-openrail-m
---
danbrown/testman-lora-5
danbrown
2023-05-03T19:53:34Z
4
0
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:danbrown/AnyLora-v1", "base_model:adapter:danbrown/AnyLora-v1", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-05-03T19:52:54Z
---
license: creativeml-openrail-m
base_model: danbrown/AnyLora-v1
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---

# LoRA text2image fine-tuning - danbrown/testman-lora-3

These are LoRA adaptation weights for danbrown/AnyLora-v1. The weights were fine-tuned on the danbrown/testman-dataset dataset. Some example images follow.

![img_0](./image_0.png)
![img_1](./image_1.png)
nergaldarski/mistoonAnime
nergaldarski
2023-05-03T19:53:18Z
0
5
null
[ "region:us" ]
null
2023-05-03T19:41:13Z
CivitAI: https://civitai.com/models/24149/mistoonanime
Ibrahim-Alam/finetuning-distilbert-base-uncased-on-imdb
Ibrahim-Alam
2023-05-03T19:49:50Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-03T19:43:27Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-distilbert-base-uncased-on-imdb
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: imdb
      type: imdb
      config: plain_text
      split: test
      args: plain_text
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.96
    - name: F1
      type: f1
      value: 0.9596231493943473
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuning-distilbert-base-uncased-on-imdb

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1311
- Accuracy: 0.96
- F1: 0.9596

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

### Framework versions

- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
Multi-Domain-Expert-Learning/expert-pubmed_abstracts
Multi-Domain-Expert-Learning
2023-05-03T19:48:41Z
6
1
transformers
[ "transformers", "pytorch", "gpt_neox", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-05-03T13:01:42Z
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: expert-pubmed_abstracts
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# expert-pubmed_abstracts

This model is a fine-tuned version of [EleutherAI/pythia-1b-deduped](https://huggingface.co/EleutherAI/pythia-1b-deduped) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2407
- Accuracy: 0.5368

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.2802        | 0.01  | 500  | 2.2553          | 0.5345   |
| 2.2277        | 0.02  | 1000 | 2.2407          | 0.5368   |

### Framework versions

- Transformers 4.28.1
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.13.3
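Two quick derived quantities from the hyperparameters and eval loss in this card: the total_train_batch_size follows from per-device batch size × devices × gradient accumulation, and for a causal LM the eval loss (mean cross-entropy in nats) converts to perplexity via exp. A plain-Python check:

```python
import math

# Effective batch size from the distributed-training hyperparameters.
per_device_batch = 1
num_devices = 8
grad_accum_steps = 8
total_train_batch = per_device_batch * num_devices * grad_accum_steps  # 64

# Perplexity is exp of the mean cross-entropy loss.
eval_loss = 2.2407
perplexity = math.exp(eval_loss)  # ~9.40
```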
Readty/Rsmyntv1
Readty
2023-05-03T19:45:22Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-03T19:43:16Z
---
license: creativeml-openrail-m
---