Dataset columns:
- modelId: string, length 5 to 139
- author: string, length 2 to 42
- last_modified: timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-11 00:42:47
- downloads: int64, 0 to 223M
- likes: int64, 0 to 11.7k
- library_name: string, 553 distinct values
- tags: list, length 1 to 4.05k
- pipeline_tag: string, 55 distinct values
- createdAt: timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-11 00:42:38
- card: string, length 11 to 1.01M
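The records below follow this schema, one field per line per model. As a minimal sketch of how such a dump can be loaded and filtered with the `datasets` library (the dataset's repository id is not given here, so `your-namespace/model-cards-dump` is a hypothetical placeholder):

```python
from datasets import load_dataset

# Hypothetical repository id; substitute the actual dataset name.
ds = load_dataset("your-namespace/model-cards-dump", split="train")

# Example: keep automatic-speech-recognition models with at least 100 downloads.
asr = ds.filter(
    lambda row: row["pipeline_tag"] == "automatic-speech-recognition"
    and row["downloads"] >= 100
)
print(asr[0]["modelId"], asr[0]["likes"])
```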
Srikanthr2/whisper-medium-sanskasr-37000-V1
Srikanthr2
2023-07-06T12:10:45Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "sa", "dataset:addy88/sanskrit-asr-84", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-06-21T06:37:43Z
--- language: - sa license: apache-2.0 tags: - hf-asr-leaderboard - generated_from_trainer datasets: - addy88/sanskrit-asr-84 model-index: - name: whisper-medium-sanskasr-37000-V1-upload results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-medium-sanskasr-37000-V1-upload This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the addy88/sanskrit-asr-84 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - training_steps: 1000 ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.0 - Datasets 2.13.1 - Tokenizers 0.13.3
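The card above gives training details but no inference snippet. A minimal sketch, assuming the checkpoint works with the standard `transformers` automatic-speech-recognition pipeline and that `sample.wav` is a hypothetical 16 kHz Sanskrit recording you supply:

```python
from transformers import pipeline

# Whisper checkpoints load directly into the ASR pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="Srikanthr2/whisper-medium-sanskasr-37000-V1",
)
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder audio file
```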
anujsahani01/finetuned_AI4Bharat_en_mr
anujsahani01
2023-07-06T11:55:30Z
108
0
transformers
[ "transformers", "pytorch", "safetensors", "mbart", "text2text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-06T01:54:41Z
--- license: mit tags: - generated_from_trainer model-index: - name: finetuned_AI4Bharat_en_mr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_AI4Bharat_en_mr This model is a fine-tuned version of [ai4bharat/indic-bert](https://huggingface.co/ai4bharat/indic-bert) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - training_steps: 8000 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
HasinMDG/UVC-Deberta-baseline
HasinMDG
2023-07-06T11:55:28Z
3
0
sentence-transformers
[ "sentence-transformers", "pytorch", "deberta-v2", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2023-07-06T11:55:11Z
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # HasinMDG/UVC-Deberta-baseline This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("HasinMDG/UVC-Deberta-baseline") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
tom192180/distilbert-base-uncased-finetuned-squad
tom192180
2023-07-06T11:49:51Z
118
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-07-06T09:37:31Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 3.2458 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 57 | 3.5390 | | No log | 2.0 | 114 | 3.2458 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
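The card above reports the evaluation loss only. A minimal usage sketch, assuming the standard question-answering pipeline (the question/context pair is made up for illustration):

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="tom192180/distilbert-base-uncased-finetuned-squad",
)
# Hypothetical context and question, purely for illustration.
result = qa(
    question="Which dataset was used for fine-tuning?",
    context="The model was fine-tuned on the SQuAD dataset for extractive question answering.",
)
print(result["answer"], result["score"])
```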
Binaryy/blender-bot-distill-finetuned
Binaryy
2023-07-06T11:36:26Z
109
0
transformers
[ "transformers", "pytorch", "safetensors", "blenderbot", "text2text-generation", "code", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-05-17T16:39:36Z
--- license: apache-2.0 language: - en pipeline_tag: conversational tags: - code ---
papahawk/gpt2-1.5b
papahawk
2023-07-06T11:19:11Z
206
0
transformers
[ "transformers", "pytorch", "tf", "jax", "tflite", "rust", "onnx", "safetensors", "gpt2", "text-generation", "pyTtorch", "tensorflow", "en", "dataset:gpt-2-output-dataset", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-05T22:17:24Z
--- language: - en tags: - text-generation - pyTtorch - tensorflow - transformers datasets: - gpt-2-output-dataset license: mit --- <h1 style='text-align: center '>GPT2-1.5b LLM</h1> <h2 style='text-align: center '><em>Fork of OpenAI/GPT2-1.5b</em> </h2> <h3 style='text-align: center '>Model Card</h3> <img src="https://alt-web.xyz/images/rainbow.png" alt="Rainbow Solutions" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> # gpt2-1.5b Code and models from the paper ["Language Models are Unsupervised Multitask Learners"](https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf). You can read about GPT-2 and its staged release in our [original blog post](https://blog.openai.com/better-language-models/), [6 month follow-up post](https://openai.com/blog/gpt-2-6-month-follow-up/), and [final post](https://www.openai.com/blog/gpt-2-1-5b-release/). We have also [released a dataset](https://github.com/openai/gpt-2-output-dataset) for researchers to study their behaviors. <sup>*</sup> *Note that our original parameter counts were wrong due to an error (in our previous blog posts and paper). Thus you may have seen small referred to as 117M and medium referred to as 345M.* ## Usage This repository is meant to be a starting point for researchers and engineers to experiment with GPT-2. For basic information, see our [model card](./model_card.md). ### Some caveats - GPT-2 models' robustness and worst case behaviors are not well-understood. As with any machine-learned model, carefully evaluate GPT-2 for your use case, especially if used without fine-tuning or in safety-critical applications where reliability is important. - The dataset our GPT-2 models were trained on contains many texts with [biases](https://twitter.com/TomerUllman/status/1101485289720242177) and factual inaccuracies, and thus GPT-2 models are likely to be biased and inaccurate as well. - To avoid having samples mistaken as human-written, we recommend clearly labeling samples as synthetic before wide dissemination. Our models are often incoherent or inaccurate in subtle ways, which takes more than a quick read for a human to notice. ### Work with us Please [let us know](mailto:languagequestions@openai.com) if you’re doing interesting research with or working on applications of GPT-2! We’re especially interested in hearing from and potentially working with those who are studying - Potential malicious use cases and defenses against them (e.g. the detectability of synthetic text) - The extent of problematic content (e.g. bias) being baked into the models and effective mitigations ## Development See [DEVELOPERS.md](./DEVELOPERS.md) ## Contributors See [CONTRIBUTORS.md](./CONTRIBUTORS.md) ## Citation Please use the following bibtex entry: ``` @article{radford2019language, title={Language Models are Unsupervised Multitask Learners}, author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya}, year={2019} } ``` ## Future work We may release code for evaluating the models on various benchmarks. We are still considering release of the larger models. ## License [Modified MIT](./LICENSE)
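The card above describes the release but shows no code. A minimal generation sketch, assuming the repository holds standard GPT-2 weights compatible with the `transformers` text-generation pipeline:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="papahawk/gpt2-1.5b")
out = generator(
    "Language models are unsupervised multitask learners because",
    max_new_tokens=50,
    do_sample=True,
    top_p=0.9,
)
print(out[0]["generated_text"])
```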
rohanbalkondekar/QnA-with-context
rohanbalkondekar
2023-07-06T11:13:08Z
7
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "gpt", "llm", "large language model", "h2o-llmstudio", "en", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-07-06T11:06:40Z
--- language: - en library_name: transformers tags: - gpt - llm - large language model - h2o-llmstudio inference: false thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico --- # Model Card ## Summary This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio). - Base model: [lmsys/vicuna-7b-v1.3](https://huggingface.co/lmsys/vicuna-7b-v1.3) ## Usage To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed. ```bash pip install transformers==4.30.1 pip install accelerate==0.20.3 pip install torch==2.0.0 ``` ```python import torch from transformers import pipeline generate_text = pipeline( model="BeRohan/QnA-with-context", torch_dtype="auto", trust_remote_code=True, use_fast=True, device_map={"": "cuda:0"}, ) res = generate_text( "Why is drinking water so healthy?", min_new_tokens=2, max_new_tokens=256, do_sample=False, num_beams=1, temperature=float(0.3), repetition_penalty=float(1.2), renormalize_logits=True ) print(res[0]["generated_text"]) ``` You can print a sample prompt after the preprocessing step to see how it is feed to the tokenizer: ```python print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"]) ``` ```bash <|prompt|>Why is drinking water so healthy?</s><|answer|> ``` Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`. ```python import torch from h2oai_pipeline import H2OTextGenerationPipeline from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained( "BeRohan/QnA-with-context", use_fast=True, padding_side="left", trust_remote_code=True, ) model = AutoModelForCausalLM.from_pretrained( "BeRohan/QnA-with-context", torch_dtype="auto", device_map={"": "cuda:0"}, trust_remote_code=True, ) generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer) res = generate_text( "Why is drinking water so healthy?", min_new_tokens=2, max_new_tokens=256, do_sample=False, num_beams=1, temperature=float(0.3), repetition_penalty=float(1.2), renormalize_logits=True ) print(res[0]["generated_text"]) ``` You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "BeRohan/QnA-with-context" # either local folder or huggingface model name # Important: The prompt needs to be in the same format the model was trained with. # You can find an example prompt in the experiment logs. 
prompt = "<|prompt|>How are you?</s><|answer|>" tokenizer = AutoTokenizer.from_pretrained( model_name, use_fast=True, trust_remote_code=True, ) model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map={"": "cuda:0"}, trust_remote_code=True, ) model.cuda().eval() inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda") # generate configuration can be modified to your needs tokens = model.generate( **inputs, min_new_tokens=2, max_new_tokens=256, do_sample=False, num_beams=1, temperature=float(0.3), repetition_penalty=float(1.2), renormalize_logits=True )[0] tokens = tokens[inputs["input_ids"].shape[1]:] answer = tokenizer.decode(tokens, skip_special_tokens=True) print(answer) ``` ## Model Architecture ``` LlamaForCausalLM( (model): LlamaModel( (embed_tokens): Embedding(32000, 4096, padding_idx=0) (layers): ModuleList( (0-31): 32 x LlamaDecoderLayer( (self_attn): LlamaAttention( (q_proj): Linear(in_features=4096, out_features=4096, bias=False) (k_proj): Linear(in_features=4096, out_features=4096, bias=False) (v_proj): Linear(in_features=4096, out_features=4096, bias=False) (o_proj): Linear(in_features=4096, out_features=4096, bias=False) (rotary_emb): LlamaRotaryEmbedding() ) (mlp): LlamaMLP( (gate_proj): Linear(in_features=4096, out_features=11008, bias=False) (down_proj): Linear(in_features=11008, out_features=4096, bias=False) (up_proj): Linear(in_features=4096, out_features=11008, bias=False) (act_fn): SiLUActivation() ) (input_layernorm): LlamaRMSNorm() (post_attention_layernorm): LlamaRMSNorm() ) ) (norm): LlamaRMSNorm() ) (lm_head): Linear(in_features=4096, out_features=32000, bias=False) ) ``` ## Model Configuration This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models. ## Model Validation Model validation results using [EleutherAI lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). ```bash CUDA_VISIBLE_DEVICES=0 python main.py --model hf-causal-experimental --model_args pretrained=BeRohan/QnA-with-context --tasks openbookqa,arc_easy,winogrande,hellaswag,arc_challenge,piqa,boolq --device cuda &> eval.log ``` ## Disclaimer Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions. - Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints. - Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion. - Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model. 
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities. - Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues. - Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes. By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
jayavibhav/t5-small-finetuned-xsum
jayavibhav
2023-07-06T11:06:12Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:xsum", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-06T07:13:57Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - xsum metrics: - rouge model-index: - name: t5-small-finetuned-xsum results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: xsum type: xsum config: default split: validation args: default metrics: - name: Rouge1 type: rouge value: 28.2871 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-xsum This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset. It achieves the following results on the evaluation set: - Loss: 2.4784 - Rouge1: 28.2871 - Rouge2: 7.7216 - Rougel: 22.2416 - Rougelsum: 22.237 - Gen Len: 18.8267 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | 2.7171 | 1.0 | 12753 | 2.4784 | 28.2871 | 7.7216 | 22.2416 | 22.237 | 18.8267 | ### Framework versions - Transformers 4.30.1 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.13.3
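A minimal summarization sketch for the checkpoint above, assuming the standard `transformers` summarization pipeline (the article text is a made-up example):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="jayavibhav/t5-small-finetuned-xsum")
# Hypothetical article text, purely for illustration.
article = (
    "The local council announced a new recycling scheme on Monday, "
    "which it says will cut landfill waste by a third over five years."
)
print(summarizer(article, max_length=30, min_length=5)[0]["summary_text"])
```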
irfan62622/Taxi-v3
irfan62622
2023-07-06T11:01:51Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-06T11:01:11Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.44 +/- 2.74 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="irfan62622/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Zain6699/intent-classifier-call_to_action
Zain6699
2023-07-06T11:00:48Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-06T10:59:26Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: intent-classifier-call_to_action results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # intent-classifier-call_to_action This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0810 - Accuracy: 0.9875 - F1: 0.9639 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
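The card above does not show how to query the classifier. A minimal sketch, assuming the standard text-classification pipeline (the input message is a made-up example, and the label names are whatever the checkpoint's config defines):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Zain6699/intent-classifier-call_to_action",
)
# Hypothetical outreach message, purely for illustration.
print(classifier("Let's book a quick call this week to walk through the proposal."))
```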
cerindam30/tugas_akhir
cerindam30
2023-07-06T10:56:16Z
30
0
transformers
[ "transformers", "pytorch", "mbart", "text2text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-02T08:20:21Z
--- license: mit tags: - generated_from_trainer model-index: - name: tugas_akhir results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tugas_akhir This model is a fine-tuned version of [indobenchmark/indobart-v2](https://huggingface.co/indobenchmark/indobart-v2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 16 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 2 - label_smoothing_factor: 0.1 ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
Zain6699/intent-classifier-flattery
Zain6699
2023-07-06T10:56:16Z
120
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-06T10:54:49Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: intent-classifier-flattery results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # intent-classifier-flattery This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0434 - Accuracy: 0.9917 - F1: 0.9747 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
nikolamilosevic/distil_bert_uncased-finetuned-relations
nikolamilosevic
2023-07-06T10:55:05Z
152
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-14T11:08:49Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - recall - f1 model-index: - name: distil_bert_uncased-finetuned-relations results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distil_bert_uncased-finetuned-relations This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4191 - Accuracy: 0.8866 - Prec: 0.8771 - Recall: 0.8866 - F1: 0.8808 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Prec | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:------:| | 1.1823 | 1.0 | 232 | 0.5940 | 0.8413 | 0.8273 | 0.8413 | 0.8224 | | 0.4591 | 2.0 | 464 | 0.4600 | 0.8607 | 0.8539 | 0.8607 | 0.8555 | | 0.3106 | 3.0 | 696 | 0.4160 | 0.8812 | 0.8763 | 0.8812 | 0.8785 | | 0.246 | 4.0 | 928 | 0.4113 | 0.8834 | 0.8766 | 0.8834 | 0.8796 | | 0.2013 | 5.0 | 1160 | 0.4191 | 0.8866 | 0.8771 | 0.8866 | 0.8808 | ### Framework versions - Transformers 4.19.4 - Pytorch 1.13.0.dev20220614 - Datasets 2.2.2 - Tokenizers 0.11.6
linlinlin/peft-fine-tuning
linlinlin
2023-07-06T10:54:57Z
0
0
null
[ "pytorch", "tensorboard", "generated_from_trainer", "license:apache-2.0", "region:us" ]
null
2023-07-06T10:31:06Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: peft-fine-tuning results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # peft-fine-tuning This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 50 ### Training results ### Framework versions - Transformers 4.27.2 - Pytorch 2.0.1+cu118 - Datasets 2.11.0 - Tokenizers 0.13.3
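The card above does not show how to load the checkpoint. A heavily hedged sketch, assuming the repository stores PEFT adapter weights for `google/flan-t5-base` (as the repo name and base model suggest); if it instead contains full model weights, a plain `AutoModelForSeq2SeqLM.from_pretrained("linlinlin/peft-fine-tuning")` would be used:

```python
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

base = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")

# Assumption: the repo holds PEFT adapter weights rather than a full model.
model = PeftModel.from_pretrained(base, "linlinlin/peft-fine-tuning")

inputs = tokenizer("Summarize: the meeting covered budget and hiring plans.", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```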
linlinlin/full-fine-tuning
linlinlin
2023-07-06T10:53:14Z
180
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-06T10:22:57Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: full-fine-tuning results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # full-fine-tuning This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 50 ### Training results ### Framework versions - Transformers 4.27.2 - Pytorch 2.0.1+cu118 - Datasets 2.11.0 - Tokenizers 0.13.3
mpetrikov/dqn-unit3-SpaceInvadersNoFrameskip-v4
mpetrikov
2023-07-06T10:45:49Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-06T10:45:14Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 568.00 +/- 121.35 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mpetrikov -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mpetrikov -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga mpetrikov ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
Hardi13/Revier
Hardi13
2023-07-06T10:44:53Z
0
0
null
[ "license:bigscience-openrail-m", "region:us" ]
null
2023-07-06T10:44:53Z
--- license: bigscience-openrail-m ---
isaachong127/gpt2_chinese_with_personal_qqchat_data
isaachong127
2023-07-06T10:43:00Z
131
1
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "zh", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-05T20:01:00Z
--- license: apache-2.0 language: - zh library_name: transformers --- # intro 1. Corpus: 1.38 GB of private Chinese QQ group chat logs 2. About 14 million tokens 3. Trained for 17 hours on a single 3060 GPU This is my first attempt at training an AI model, done while learning to train GPT-2; it is provided for reference only. Interaction results are for reference only, and this model makes no guarantee about the legality or reasonableness of its outputs. # Link [Training a causal language model from scratch](https://huggingface.co/course/zh-CN/chapter7/6?fw=pt) # infer code ```python from transformers import GPT2LMHeadModel, AutoTokenizer model_name_or_path = "isaachong127/gpt2_chinese_with_personal_qqchat_data"#"checkpoint-16000" tokenizer = AutoTokenizer.from_pretrained(model_name_or_path) # add the EOS token as PAD token to avoid warnings model = GPT2LMHeadModel.from_pretrained(model_name_or_path, pad_token_id=tokenizer.eos_token_id) ``` ```python txt = """\ 今天 """ # encode context the generation is conditioned on input_ids = tokenizer.encode(txt, return_tensors='pt') # set no_repeat_ngram_size to 2 beam_output = model.generate( input_ids, max_length=100, num_beams=5, no_repeat_ngram_size=2, early_stopping=True ) print("Output:\n" + 50 * '-') print(tokenizer.decode(beam_output[0], skip_special_tokens=True)) ``` ```bash Output: ---------------------------------------------------------------------------------------------------- 今天 已 经 是 你 的 第 667 次 签 到 啦 ~ 纱 雾 酱 对 乃 的 好 感 度 [ + 10 ] 2021 年 , 要 加 油 哦 ~ ','签 到 ','@ \ u202e ```
bofenghuang/asr-wav2vec2-ctc-french
bofenghuang
2023-07-06T10:34:26Z
429
12
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "hf-asr-leaderboard", "robust-speech-event", "CTC", "Wav2vec2", "fr", "dataset:common_voice", "dataset:mozilla-foundation/common_voice_11_0", "dataset:facebook/multilingual_librispeech", "dataset:facebook/voxpopuli", "dataset:gigant/african_accented_french", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-11-25T15:33:14Z
--- license: apache-2.0 language: fr library_name: transformers thumbnail: null tags: - automatic-speech-recognition - hf-asr-leaderboard - robust-speech-event - CTC - Wav2vec2 datasets: - common_voice - mozilla-foundation/common_voice_11_0 - facebook/multilingual_librispeech - facebook/voxpopuli - gigant/african_accented_french metrics: - wer model-index: - name: Fine-tuned wav2vec2-FR-7K-large model for ASR in French results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 11.0 type: mozilla-foundation/common_voice_11_0 args: fr metrics: - name: Test WER type: wer value: 11.44 - name: Test WER (+LM) type: wer value: 9.66 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Multilingual LibriSpeech (MLS) type: facebook/multilingual_librispeech args: french metrics: - name: Test WER type: wer value: 5.93 - name: Test WER (+LM) type: wer value: 5.13 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: VoxPopuli type: facebook/voxpopuli args: fr metrics: - name: Test WER type: wer value: 9.33 - name: Test WER (+LM) type: wer value: 8.51 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: African Accented French type: gigant/african_accented_french args: fr metrics: - name: Test WER type: wer value: 16.22 - name: Test WER (+LM) type: wer value: 15.39 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Dev Data type: speech-recognition-community-v2/dev_data args: fr metrics: - name: Test WER type: wer value: 16.56 - name: Test WER (+LM) type: wer value: 12.96 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Fleurs type: google/fleurs args: fr_fr metrics: - name: Test WER type: wer value: 10.10 - name: Test WER (+LM) type: wer value: 8.84 --- # Fine-tuned wav2vec2-FR-7K-large model for ASR in French <style> img { display: inline; } </style> ![Model architecture](https://img.shields.io/badge/Model_Architecture-Wav2Vec2--CTC-lightgrey) ![Model size](https://img.shields.io/badge/Params-315M-lightgrey) ![Language](https://img.shields.io/badge/Language-French-lightgrey) This model is a fine-tuned version of [LeBenchmark/wav2vec2-FR-7K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-7K-large), trained on a composite dataset comprising of over 2200 hours of French speech audio, using the train and validation splits of [Common Voice 11.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0), [Multilingual LibriSpeech](https://huggingface.co/datasets/facebook/multilingual_librispeech), [Voxpopuli](https://github.com/facebookresearch/voxpopuli), [Multilingual TEDx](http://www.openslr.org/100), [MediaSpeech](https://www.openslr.org/108), and [African Accented French](https://huggingface.co/datasets/gigant/african_accented_french). When using the model make sure that your speech input is also sampled at 16Khz. ## Usage 1. 
To use on a local audio file with the language model ```python import torch import torchaudio from transformers import AutoModelForCTC, Wav2Vec2ProcessorWithLM device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") model = AutoModelForCTC.from_pretrained("bhuang/asr-wav2vec2-french").to(device) processor_with_lm = Wav2Vec2ProcessorWithLM.from_pretrained("bhuang/asr-wav2vec2-french") model_sample_rate = processor_with_lm.feature_extractor.sampling_rate wav_path = "example.wav" # path to your audio file waveform, sample_rate = torchaudio.load(wav_path) waveform = waveform.squeeze(axis=0) # mono # resample if sample_rate != model_sample_rate: resampler = torchaudio.transforms.Resample(sample_rate, model_sample_rate) waveform = resampler(waveform) # normalize input_dict = processor_with_lm(waveform, sampling_rate=model_sample_rate, return_tensors="pt") with torch.inference_mode(): logits = model(input_dict.input_values.to(device)).logits predicted_sentence = processor_with_lm.batch_decode(logits.cpu().numpy()).text[0] ``` 2. To use on a local audio file without the language model ```python import torch import torchaudio from transformers import AutoModelForCTC, Wav2Vec2Processor device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") model = AutoModelForCTC.from_pretrained("bhuang/asr-wav2vec2-french").to(device) processor = Wav2Vec2Processor.from_pretrained("bhuang/asr-wav2vec2-french") model_sample_rate = processor.feature_extractor.sampling_rate wav_path = "example.wav" # path to your audio file waveform, sample_rate = torchaudio.load(wav_path) waveform = waveform.squeeze(axis=0) # mono # resample if sample_rate != model_sample_rate: resampler = torchaudio.transforms.Resample(sample_rate, model_sample_rate) waveform = resampler(waveform) # normalize input_dict = processor(waveform, sampling_rate=model_sample_rate, return_tensors="pt") with torch.inference_mode(): logits = model(input_dict.input_values.to(device)).logits # decode predicted_ids = torch.argmax(logits, dim=-1) predicted_sentence = processor.batch_decode(predicted_ids)[0] ``` ## Evaluation 1. To evaluate on `mozilla-foundation/common_voice_11_0` ```bash python eval.py \ --model_id "bhuang/asr-wav2vec2-french" \ --dataset "mozilla-foundation/common_voice_11_0" \ --config "fr" \ --split "test" \ --log_outputs \ --outdir "outputs/results_mozilla-foundatio_common_voice_11_0_with_lm" ``` 2. To evaluate on `speech-recognition-community-v2/dev_data` ```bash python eval.py \ --model_id "bhuang/asr-wav2vec2-french" \ --dataset "speech-recognition-community-v2/dev_data" \ --config "fr" \ --split "validation" \ --chunk_length_s 30.0 \ --stride_length_s 5.0 \ --log_outputs \ --outdir "outputs/results_speech-recognition-community-v2_dev_data_with_lm" ```
Zain6699/intent-classifier
Zain6699
2023-07-06T10:32:41Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-06T10:22:17Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: intent-classifier results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # intent-classifier This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0590 - Accuracy: 0.9854 - F1: 0.9586 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
AAOBA/RND-PyamidsRND
AAOBA
2023-07-06T10:28:36Z
17
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2023-07-06T10:27:59Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: chikoto/RND-PyamidsRND 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
soduhh/marian-finetuned-kde4-en-to-fr
soduhh
2023-07-06T10:26:33Z
61
0
transformers
[ "transformers", "tf", "marian", "text2text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-05T14:32:51Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: soduhh/marian-finetuned-kde4-en-to-fr results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # soduhh/marian-finetuned-kde4-en-to-fr This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.6854 - Validation Loss: 0.8044 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 17733, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 1.0627 | 0.8795 | 0 | | 0.7968 | 0.8213 | 1 | | 0.6854 | 0.8044 | 2 | ### Framework versions - Transformers 4.30.2 - TensorFlow 2.12.0 - Datasets 2.13.1 - Tokenizers 0.13.3
Tiru8055/rl_course_vizdoom_health_gathering_supreme
Tiru8055
2023-07-06T10:24:27Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-06T10:24:20Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 12.50 +/- 5.00 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r Tiru8055/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
thirupathibandam/autotrain-phanik-gpt-neo-125m-self-72606138970
thirupathibandam
2023-07-06T10:01:36Z
0
0
null
[ "autotrain", "text-generation", "dataset:thirupathibandam/autotrain-data-phanik-gpt-neo-125m-self", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
text-generation
2023-07-06T10:00:49Z
--- tags: - autotrain - text-generation widget: - text: "I love AutoTrain because " datasets: - thirupathibandam/autotrain-data-phanik-gpt-neo-125m-self co2_eq_emissions: emissions: 0.03549660564532989 --- # Model Trained Using AutoTrain - Problem type: Text Generation - CO2 Emissions (in grams): 0.0355 ## Validation Metrics loss: 1.8581730127334595
Sourabh2/speecht5_finetuned_model
Sourabh2
2023-07-06T09:58:19Z
73
1
transformers
[ "transformers", "pytorch", "tensorboard", "speecht5", "text-to-audio", "generated_from_trainer", "dataset:voxpopuli", "license:mit", "endpoints_compatible", "region:us" ]
text-to-audio
2023-07-06T07:36:01Z
--- license: mit tags: - generated_from_trainer datasets: - voxpopuli model-index: - name: speecht5_finetuned_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # speecht5_finetuned_model This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset. It achieves the following results on the evaluation set: - Loss: 0.4585 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.5238 | 4.3 | 1000 | 0.4791 | | 0.4994 | 8.61 | 2000 | 0.4679 | | 0.4914 | 12.91 | 3000 | 0.4599 | | 0.4869 | 17.21 | 4000 | 0.4585 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
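The card above omits an inference example. A minimal sketch, assuming the checkpoint follows the standard SpeechT5 recipe with the `microsoft/speecht5_hifigan` vocoder and an x-vector speaker embedding (taken here from the CMU ARCTIC x-vectors dataset commonly used in SpeechT5 examples):

```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

processor = SpeechT5Processor.from_pretrained("Sourabh2/speecht5_finetuned_model")
model = SpeechT5ForTextToSpeech.from_pretrained("Sourabh2/speecht5_finetuned_model")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Speaker x-vector; index 7306 is the example used throughout the SpeechT5 docs.
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Hello, this is a test of the fine-tuned model.", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```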
blanchefort/rubert-base-cased-sentiment-mokoron
blanchefort
2023-07-06T09:56:44Z
129
1
transformers
[ "transformers", "pytorch", "tf", "jax", "safetensors", "bert", "text-classification", "sentiment", "ru", "dataset:RuTweetCorp", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: - ru tags: - sentiment - text-classification datasets: - RuTweetCorp --- # RuBERT for Sentiment Analysis of Tweets This is a [DeepPavlov/rubert-base-cased-conversational](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational) model trained on [RuTweetCorp](https://study.mokoron.com/). ## Labels 0: POSITIVE 1: NEGATIVE ## How to use ```python import torch from transformers import AutoModelForSequenceClassification from transformers import BertTokenizerFast tokenizer = BertTokenizerFast.from_pretrained('blanchefort/rubert-base-cased-sentiment-mokoron') model = AutoModelForSequenceClassification.from_pretrained('blanchefort/rubert-base-cased-sentiment-mokoron', return_dict=True) @torch.no_grad() def predict(text): inputs = tokenizer(text, max_length=512, padding=True, truncation=True, return_tensors='pt') outputs = model(**inputs) predicted = torch.nn.functional.softmax(outputs.logits, dim=1) predicted = torch.argmax(predicted, dim=1).numpy() return predicted ``` ## Dataset used for model training **[RuTweetCorp](https://study.mokoron.com/)** > Рубцова Ю. Автоматическое построение и анализ корпуса коротких текстов (постов микроблогов) для задачи разработки и тренировки тонового классификатора // Инженерия знаний и технологии семантического веба. – 2012. – Т. 1. – С. 109-116.
ketong3906/opus-mt-en-zh-finetuned-eng-to-chn
ketong3906
2023-07-06T09:53:13Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-06T09:50:14Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: opus-mt-en-zh-finetuned-eng-to-chn results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opus-mt-en-zh-finetuned-eng-to-chn This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-zh](https://huggingface.co/Helsinki-NLP/opus-mt-en-zh) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:| | No log | 1.0 | 1 | 6.2769 | 0.8101 | 73.625 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
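A minimal translation sketch for the checkpoint above, assuming the standard `transformers` translation pipeline (the input sentence is a made-up example):

```python
from transformers import pipeline

translator = pipeline(
    "translation",
    model="ketong3906/opus-mt-en-zh-finetuned-eng-to-chn",
)
print(translator("Machine translation is improving quickly.")[0]["translation_text"])
```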
ddmlproject/cassianatuzzi
ddmlproject
2023-07-06T09:48:26Z
30
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-06T09:44:16Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### cassianatuzzi Dreambooth model trained by ddmlproject with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept: ![0](https://huggingface.co/ddmlproject/cassianatuzzi/resolve/main/sample_images/cassianatuzzi.jpg) ![1](https://huggingface.co/ddmlproject/cassianatuzzi/resolve/main/sample_images/cassianatuzzi_(1).jpg) ![2](https://huggingface.co/ddmlproject/cassianatuzzi/resolve/main/sample_images/cassianatuzzi_(11).jpg) ![3](https://huggingface.co/ddmlproject/cassianatuzzi/resolve/main/sample_images/cassianatuzzi_(3).jpg) ![4](https://huggingface.co/ddmlproject/cassianatuzzi/resolve/main/sample_images/cassianatuzzi_(7).jpeg) ![5](https://huggingface.co/ddmlproject/cassianatuzzi/resolve/main/sample_images/cassianatuzzi_(9).jpg) ![6](https://huggingface.co/ddmlproject/cassianatuzzi/resolve/main/sample_images/cassianatuzzi_(8).jpg) ![7](https://huggingface.co/ddmlproject/cassianatuzzi/resolve/main/sample_images/cassianatuzzi_(4).jpg) ![8](https://huggingface.co/ddmlproject/cassianatuzzi/resolve/main/sample_images/cassianatuzzi_(6).jpeg) ![9](https://huggingface.co/ddmlproject/cassianatuzzi/resolve/main/sample_images/cassianatuzzi_(2).jpg) ![10](https://huggingface.co/ddmlproject/cassianatuzzi/resolve/main/sample_images/cassianatuzzi_(5).jpg) ![11](https://huggingface.co/ddmlproject/cassianatuzzi/resolve/main/sample_images/cassianatuzzi_(10).jpg)
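The card above points to A1111 notebooks only. A minimal `diffusers` sketch, assuming the repository is a full Stable Diffusion pipeline (its tags include `diffusers:StableDiffusionPipeline`), that a CUDA GPU is available, and that `cassianatuzzi` is the instance token, as the concept name suggests:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "ddmlproject/cassianatuzzi", torch_dtype=torch.float16
).to("cuda")

# The prompt uses the concept name from the card as the instance token.
image = pipe("a portrait photo of cassianatuzzi, studio lighting").images[0]
image.save("cassianatuzzi.png")
```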
arham061/codeparrot-ds
arham061
2023-07-06T09:47:41Z
127
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-06T09:36:58Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: codeparrot-ds results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # codeparrot-ds This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
NasimB/gpt2-concat-cbt-rarity-all-7k-p8k
NasimB
2023-07-06T09:41:44Z
9
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-06T07:38:30Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: gpt2-concat-cbt-rarity-all-7k-p8k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-concat-cbt-rarity-all-7k-p8k This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 3.1838 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 6.7249 | 0.29 | 500 | 5.6400 | | 5.3729 | 0.59 | 1000 | 5.2003 | | 5.0283 | 0.88 | 1500 | 4.9502 | | 4.7537 | 1.17 | 2000 | 4.8035 | | 4.5903 | 1.47 | 2500 | 4.6765 | | 4.4832 | 1.76 | 3000 | 4.5717 | | 4.3484 | 2.05 | 3500 | 4.4930 | | 4.1512 | 2.35 | 4000 | 4.4467 | | 4.1329 | 2.64 | 4500 | 4.3805 | | 4.091 | 2.93 | 5000 | 4.3309 | | 3.8799 | 3.23 | 5500 | 4.3273 | | 3.8248 | 3.52 | 6000 | 4.2923 | | 3.8074 | 3.81 | 6500 | 4.2605 | | 3.6914 | 4.11 | 7000 | 4.2581 | | 3.534 | 4.4 | 7500 | 4.2538 | | 3.5261 | 4.69 | 8000 | 4.2382 | | 3.5255 | 4.99 | 8500 | 4.2256 | | 3.351 | 5.28 | 9000 | 4.2383 | | 3.3357 | 5.57 | 9500 | 4.2375 | | 3.3375 | 5.87 | 10000 | 4.2364 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
nolanaatama/mrdcrvcv2400pchscrckdfl
nolanaatama
2023-07-06T09:40:59Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-06T09:37:34Z
--- license: creativeml-openrail-m ---
RogerB/KinyaBERT-small-finetuned-kintweetsB
RogerB
2023-07-06T09:33:58Z
115
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-07-06T09:26:55Z
--- tags: - generated_from_trainer model-index: - name: KinyaBERT-small-finetuned-kintweetsB results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # KinyaBERT-small-finetuned-kintweetsB This model is a fine-tuned version of [jean-paul/KinyaBERT-small](https://huggingface.co/jean-paul/KinyaBERT-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.7473 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 4.3312 | 1.0 | 900 | 3.9289 | | 4.0017 | 2.0 | 1800 | 3.8163 | | 3.8861 | 3.0 | 2700 | 3.7473 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
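A minimal fill-mask sketch for this checkpoint (`RogerB/KinyaBERT-small-finetuned-kintweetsB`, per the record above); the masked sentence is only an illustrative placeholder and the mask token is taken from the tokenizer rather than hard-coded.

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="RogerB/KinyaBERT-small-finetuned-kintweetsB")
# Build a masked sentence using whatever mask token the tokenizer defines.
sentence = f"Muraho {fill.tokenizer.mask_token}"
for pred in fill(sentence)[:3]:
    print(pred["token_str"], round(pred["score"], 3))
```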
squeeze-ai-lab/sq-opt-13b-w4-s0
squeeze-ai-lab
2023-07-06T09:29:03Z
0
0
null
[ "arxiv:2306.07629", "arxiv:2205.01068", "region:us" ]
null
2023-07-06T08:38:38Z
**SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving. **TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced-precision quantization, but a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method. Dense-and-Sparse splits weight matrices into two components: a dense component that can be heavily quantized without affecting model performance, and a sparse part that preserves sensitive and outlier parts of the weight matrices. With this approach, we are able to serve larger models with a smaller memory footprint, the same latency, and yet higher accuracy and quality. For more details, please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf). ## Model description 4-bit quantized OPT 13B model using SqueezeLLM. More details can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf). * **Base Model:** [OPT 13B](https://arxiv.org/abs/2205.01068) * **Bitwidth:** 4-bit * **Sparsity Level:** 0% (dense-only) ## Links * **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf) * **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM) --- license: other ---
irfan62622/q-FrozenLake-v1-4x4-noSlippery
irfan62622
2023-07-06T09:29:01Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-06T09:28:58Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="irfan62622/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
squeeze-ai-lab/sq-opt-13b-w3-s0
squeeze-ai-lab
2023-07-06T09:25:37Z
0
0
null
[ "arxiv:2306.07629", "arxiv:2205.01068", "region:us" ]
null
2023-07-06T08:38:24Z
**SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving. **TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced-precision quantization, but a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method. Dense-and-Sparse splits weight matrices into two components: a dense component that can be heavily quantized without affecting model performance, and a sparse part that preserves sensitive and outlier parts of the weight matrices. With this approach, we are able to serve larger models with a smaller memory footprint, the same latency, and yet higher accuracy and quality. For more details, please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf). ## Model description 3-bit quantized OPT 13B model using SqueezeLLM. More details can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf). * **Base Model:** [OPT 13B](https://arxiv.org/abs/2205.01068) * **Bitwidth:** 3-bit * **Sparsity Level:** 0% (dense-only) ## Links * **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf) * **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM) --- license: other ---
ireneli1024/bigbird-pegasus-large-pubmed-plos-finetuned
ireneli1024
2023-07-06T09:18:37Z
88
0
transformers
[ "transformers", "pytorch", "bigbird_pegasus", "text2text-generation", "text-generation-inference", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-05T05:58:53Z
--- license: other language: - en metrics: - rouge tags: - text-generation-inference --- This is a fine-tuned version of the [google/bigbird-pegasus-large-pubmed](https://huggingface.co/google/bigbird-pegasus-large-pubmed) model. The training data comes from BioLaySumm 2023 [shared task 1](https://biolaysumm.org/#data).
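A minimal sketch of running this checkpoint for lay summarisation, assuming it loads with the standard `transformers` summarization pipeline; the generation lengths are illustrative and the article text is a placeholder.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="ireneli1024/bigbird-pegasus-large-pubmed-plos-finetuned")
article = "..."  # paste the full text of a biomedical article here
summary = summarizer(article, max_length=256, min_length=64, do_sample=False)
print(summary[0]["summary_text"])
```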
HilbertS/rl_course_vizdoom_health_gathering_supreme
HilbertS
2023-07-06T09:16:47Z
2
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-04T15:06:01Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 10.39 +/- 5.13 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r HilbertS/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
Sekiraw/space_invaders
Sekiraw
2023-07-06T09:16:19Z
2
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-05T12:58:30Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 251.50 +/- 28.46 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Sekiraw -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Sekiraw -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Sekiraw ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 200000), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
squeeze-ai-lab/sq-opt-6.7b-w4-s0
squeeze-ai-lab
2023-07-06T09:11:07Z
0
0
null
[ "arxiv:2306.07629", "arxiv:2205.01068", "region:us" ]
null
2023-07-06T08:28:51Z
**SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving. **TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced-precision quantization, but a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method. Dense-and-Sparse splits weight matrices into two components: a dense component that can be heavily quantized without affecting model performance, and a sparse part that preserves sensitive and outlier parts of the weight matrices. With this approach, we are able to serve larger models with a smaller memory footprint, the same latency, and yet higher accuracy and quality. For more details, please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf). ## Model description 4-bit quantized OPT 6.7B model using SqueezeLLM. More details can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf). * **Base Model:** [OPT 6.7B](https://arxiv.org/abs/2205.01068) * **Bitwidth:** 4-bit * **Sparsity Level:** 0% (dense-only) ## Links * **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf) * **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM) --- license: other ---
NasimB/gpt2-concat-aochildes-length-16k-rarity-all-4k-1p2k
NasimB
2023-07-06T09:01:21Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-06T06:55:39Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: gpt2-concat-aochildes-length-16k-rarity-all-4k-1p2k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-concat-aochildes-length-16k-rarity-all-4k-1p2k This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 3.1849 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 6.7383 | 0.3 | 500 | 5.6385 | | 5.3727 | 0.59 | 1000 | 5.1984 | | 5.0316 | 0.89 | 1500 | 4.9474 | | 4.7489 | 1.18 | 2000 | 4.7974 | | 4.5919 | 1.48 | 2500 | 4.6733 | | 4.481 | 1.77 | 3000 | 4.5743 | | 4.3448 | 2.07 | 3500 | 4.5035 | | 4.1586 | 2.36 | 4000 | 4.4505 | | 4.131 | 2.66 | 4500 | 4.3894 | | 4.0922 | 2.95 | 5000 | 4.3352 | | 3.8662 | 3.25 | 5500 | 4.3390 | | 3.8273 | 3.54 | 6000 | 4.3014 | | 3.8116 | 3.84 | 6500 | 4.2720 | | 3.6686 | 4.13 | 7000 | 4.2734 | | 3.5444 | 4.43 | 7500 | 4.2662 | | 3.5274 | 4.73 | 8000 | 4.2522 | | 3.5039 | 5.02 | 8500 | 4.2497 | | 3.3378 | 5.32 | 9000 | 4.2560 | | 3.336 | 5.61 | 9500 | 4.2548 | | 3.3376 | 5.91 | 10000 | 4.2538 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
viceisi/identify-my-cat
viceisi
2023-07-06T08:54:29Z
0
0
fastai
[ "fastai", "region:us" ]
null
2023-06-28T15:18:19Z
--- tags: - fastai --- # Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
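The template card above has no usage section; a minimal sketch for loading a fastai model from the Hub with `huggingface_hub`, assuming the learner was exported and pushed with `push_to_hub_fastai` (the image path is a placeholder).

```python
from huggingface_hub import from_pretrained_fastai

learner = from_pretrained_fastai("viceisi/identify-my-cat")
# Predict on a local image; a vision learner returns (predicted class, class index, probabilities).
pred_class, pred_idx, probs = learner.predict("my_cat_photo.jpg")
print(pred_class, float(probs[pred_idx]))
```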
aronmal/Reinforce-PixelCopterMLP
aronmal
2023-07-06T08:42:01Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-06T08:41:58Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-PixelCopterMLP results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 18.60 +/- 14.97 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
smaciu/bee-wings-classifier
smaciu
2023-07-06T08:32:55Z
0
0
fastai
[ "fastai", "region:us" ]
null
2023-06-24T10:25:38Z
--- tags: - fastai --- # Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
m-aliabbas1/q-FrozenLake-v1-4x4-noSlippery
m-aliabbas1
2023-07-06T08:31:44Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-06T08:31:42Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="m-aliabbas1/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
CICLAB-Comillas/BARTSumpson
CICLAB-Comillas
2023-07-06T08:12:24Z
106
1
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "es", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-05T08:49:23Z
--- license: mit language: - es ---
symanto/mpnet-base-snli-mnli
symanto
2023-07-06T07:54:17Z
136
4
transformers
[ "transformers", "pytorch", "safetensors", "mpnet", "text-classification", "zero-shot-classification", "en", "dataset:SNLI", "dataset:MNLI", "autotrain_compatible", "endpoints_compatible", "region:us" ]
zero-shot-classification
2022-03-02T23:29:05Z
--- language: - en datasets: - SNLI - MNLI tags: - zero-shot-classification --- A cross-attention NLI model trained for zero-shot and few-shot text classification. The base model is [mpnet-base](https://huggingface.co/microsoft/mpnet-base), trained with the code from [here](https://github.com/facebookresearch/anli); on [SNLI](https://nlp.stanford.edu/projects/snli/) and [MNLI](https://cims.nyu.edu/~sbowman/multinli/). Usage: ```python from transformers import AutoModelForSequenceClassification, AutoTokenizer import torch import numpy as np model = AutoModelForSequenceClassification.from_pretrained("symanto/mpnet-base-snli-mnli") tokenizer = AutoTokenizer.from_pretrained("symanto/mpnet-base-snli-mnli") input_pairs = [("I like this pizza.", "The sentence is positive."), ("I like this pizza.", "The sentence is negative.")] inputs = tokenizer(["</s></s>".join(input_pair) for input_pair in input_pairs], return_tensors="pt") logits = model(**inputs).logits probs = torch.softmax(logits, dim=1).tolist() print("probs", probs) np.testing.assert_almost_equal(probs, [[0.86, 0.14, 0.00], [0.16, 0.15, 0.69]], decimal=2) ```
aronmal/Reinforce-CartpoleMLP
aronmal
2023-07-06T07:53:32Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-06T07:53:23Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-MLP results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 464.00 +/- 91.98 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
Technotech/opt-125m-4bit-128g
Technotech
2023-07-06T07:51:47Z
5
1
transformers
[ "transformers", "opt", "text-generation", "en", "arxiv:2205.01068", "arxiv:2005.14165", "license:other", "autotrain_compatible", "region:us" ]
text-generation
2023-06-12T08:04:01Z
--- language: en inference: false tags: - text-generation - opt license: other commercial: false --- ## OPT-125m-4bit-128g OPT 125M, quantised to 4bit using AutoGPTQ, with groupsize 128g, no act order. Good for testing AutoGPTQ with a small model download. # Original Model Card # OPT : Open Pre-trained Transformer Language Models OPT was first introduced in [Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) and first released in [metaseq's repository](https://github.com/facebookresearch/metaseq) on May 3rd 2022 by Meta AI. **Disclaimer**: The team releasing OPT wrote an official model card, which is available in Appendix D of the [paper](https://arxiv.org/pdf/2205.01068.pdf). Content from **this** model card has been written by the Hugging Face team. ## Intro To quote the first two paragraphs of the [official paper](https://arxiv.org/abs/2205.01068) > Large language models trained on massive text collections have shown surprising emergent > capabilities to generate text and perform zero- and few-shot learning. While in some cases the public > can interact with these models through paid APIs, full model access is currently limited to only a > few highly resourced labs. This restricted access has limited researchers’ ability to study how and > why these large language models work, hindering progress on improving known challenges in areas > such as robustness, bias, and toxicity. > We present Open Pretrained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M > to 175B parameters, which we aim to fully and responsibly share with interested researchers. We train the OPT models to roughly match > the performance and sizes of the GPT-3 class of models, while also applying the latest best practices in data > collection and efficient training. Our aim in developing this suite of OPT models is to enable reproducible and responsible research at scale, and > to bring more voices to the table in studying the impact of these LLMs. Definitions of risk, harm, bias, and toxicity, etc., should be articulated by the > collective research community as a whole, which is only possible when models are available for study. ## Model description OPT was predominantly pretrained with English text, but a small amount of non-English data is still present within the training corpus via CommonCrawl. The model was pretrained using a causal language modeling (CLM) objective. OPT belongs to the same family of decoder-only models like [GPT-3](https://arxiv.org/abs/2005.14165). As such, it was pretrained using the self-supervised causal language modedling objective. For evaluation, OPT follows [GPT-3](https://arxiv.org/abs/2005.14165) by using their prompts and overall experimental setup. For more details, please read the [official paper](https://arxiv.org/abs/2205.01068). ## Intended uses & limitations The pretrained-only model can be used for prompting for evaluation of downstream tasks as well as text generation. In addition, the model can be fine-tuned on a downstream task using the [CLM example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling). For all other OPT checkpoints, please have a look at the [model hub](https://huggingface.co/models?filter=opt). ### How to use You can use this model directly with a pipeline for text generation. 
```python >>> from transformers import pipeline >>> generator = pipeline('text-generation', model="facebook/opt-125m") >>> generator("Hello, I'm am conscious and") [{'generated_text': 'Hello, I am conscious and aware of the fact that I am a woman. I am aware of'}] ``` By default, generation is deterministic. In order to use the top-k sampling, please set `do_sample` to `True`. ```python >>> from transformers import pipeline, set_seed >>> set_seed(32) >>> generator = pipeline('text-generation', model="facebook/opt-125m", do_sample=True) >>> generator("Hello, I'm am conscious and") [{'generated_text': 'Hello, I am conscious and active member of the Khaosan Group, a private, self'}] ``` ### Limitations and bias As mentioned in Meta AI's model card, given that the training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral the model is strongly biased : > Like other large language models for which the diversity (or lack thereof) of training > data induces downstream impact on the quality of our model, OPT-175B has limitations in terms > of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and > hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern > large language models. This bias will also affect all fine-tuned versions of this model. ## Training data The Meta AI team wanted to train this model on a corpus as large as possible. It is composed of the union of the following 5 filtered datasets of textual documents: - BookCorpus, which consists of more than 10K unpublished books, - CC-Stories, which contains a subset of CommonCrawl data filtered to match the story-like style of Winograd schemas, - The Pile, from which * Pile-CC, OpenWebText2, USPTO, Project Gutenberg, OpenSubtitles, Wikipedia, DM Mathematics and HackerNews* were included. - Pushshift.io Reddit dataset that was developed in Baumgartner et al. (2020) and processed in Roller et al. (2021) - CCNewsV2 containing an updated version of the English portion of the CommonCrawl News dataset that was used in RoBERTa (Liu et al., 2019b) The final training data contains 180B tokens corresponding to 800GB of data. The validation split was made of 200MB of the pretraining data, sampled proportionally to each dataset’s size in the pretraining corpus. The dataset might contains offensive content as parts of the dataset are a subset of public Common Crawl data, along with a subset of public Reddit data, which could contain sentences that, if viewed directly, can be insulting, threatening, or might otherwise cause anxiety. ### Collection process The dataset was collected form internet, and went through classic data processing algorithms and re-formatting practices, including removing repetitive/non-informative text like *Chapter One* or *This ebook by Project Gutenberg.* ## Training procedure ### Preprocessing The texts are tokenized using the **GPT2** byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a vocabulary size of 50272. The inputs are sequences of 2048 consecutive tokens. The 175B model was trained on 992 *80GB A100 GPUs*. The training duration was roughly ~33 days of continuous training. 
### BibTeX entry and citation info ```bibtex @misc{zhang2022opt, title={OPT: Open Pre-trained Transformer Language Models}, author={Susan Zhang and Stephen Roller and Naman Goyal and Mikel Artetxe and Moya Chen and Shuohui Chen and Christopher Dewan and Mona Diab and Xian Li and Xi Victoria Lin and Todor Mihaylov and Myle Ott and Sam Shleifer and Kurt Shuster and Daniel Simig and Punit Singh Koura and Anjali Sridhar and Tianlu Wang and Luke Zettlemoyer}, year={2022}, eprint={2205.01068}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
xian79/Reinforce-CartPole-v1
xian79
2023-07-06T07:51:38Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-06T07:51:27Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
Technotech/RedPajama-Base-3B-4bit-128g
Technotech
2023-07-06T07:49:49Z
5
0
transformers
[ "transformers", "gpt_neox", "text-generation", "gptq", "en", "dataset:togethercomputer/RedPajama-Data-1T", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-06-12T09:18:42Z
--- license: apache-2.0 language: - en datasets: - togethercomputer/RedPajama-Data-1T tags: - gptq --- ## RedPajama-Base-3B-4bit-128g RedPajama 3B, quantised to 4bit with groupsize of 128, no act order. # Original Model Card # RedPajama-INCITE-Base-3B-v1 RedPajama-INCITE-Base-3B-v1 was developed by Together and leaders from the open-source AI community including Ontocord.ai, ETH DS3Lab, AAI CERC, Université de Montréal, MILA - Québec AI Institute, Stanford Center for Research on Foundation Models (CRFM), Stanford Hazy Research research group and LAION. The training was done on 3,072 V100 GPUs provided as part of the INCITE 2023 project on Scalable Foundation Models for Transferrable Generalist AI, awarded to MILA, LAION, and EleutherAI in fall 2022, with support from the Oak Ridge Leadership Computing Facility (OLCF) and INCITE program. - Base Model: [RedPajama-INCITE-Base-3B-v1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-3B-v1) - Instruction-tuned Version: [RedPajama-INCITE-Instruct-3B-v1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Instruct-3B-v1) - Chat Version: [RedPajama-INCITE-Chat-3B-v1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Chat-3B-v1) ## Model Details - **Developed by**: Together Computer. - **Model type**: Language Model - **Language(s)**: English - **License**: Apache 2.0 - **Model Description**: A 2.8B parameter pretrained language model. # Quick Start Please note that the model requires `transformers` version >= 4.25.1. ## GPU Inference This requires a GPU with 8GB memory. ```python import torch import transformers from transformers import AutoTokenizer, AutoModelForCausalLM MIN_TRANSFORMERS_VERSION = '4.25.1' # check transformers version assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.' # init tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Base-3B-v1") model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-Base-3B-v1", torch_dtype=torch.float16) model = model.to('cuda:0') # infer prompt = "Alan Turing is" inputs = tokenizer(prompt, return_tensors='pt').to(model.device) input_length = inputs.input_ids.shape[1] outputs = model.generate( **inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True, ) token = outputs.sequences[0, input_length:] output_str = tokenizer.decode(token) print(output_str) """ a name that has been synonymous with the computer age since the 1950s. The British mathematician, logician, and cryptanalyst is widely regarded as the father of modern computing. His contributions to the development of the modern computer and the theory of computation have had a profound impact on the world we live in today. Turing’s contributions to the development of the modern computer were made in the 1940s and 1950s. He is most famous for his work on the Turing machine, a theoretical model of a computing machine that was able to perform all the mathematical operations of a computer. Turing’s work on the... """ ``` ## GPU Inference in Int8 To run inference with int8, please ensure you have installed accelerate and bitandbytes. 
You can install them with the following command: ```bash pip install accelerate pip install bitsandbytes ``` Then you can run inference with int8 as follows: ```python import torch import transformers from transformers import AutoTokenizer, AutoModelForCausalLM MIN_TRANSFORMERS_VERSION = '4.25.1' # check transformers version assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.' # init tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Base-3B-v1") model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-Base-3B-v1", device_map='auto', torch_dtype=torch.float16, load_in_8bit=True) # infer prompt = "Alan Turing is" inputs = tokenizer(prompt, return_tensors='pt').to(model.device) input_length = inputs.input_ids.shape[1] outputs = model.generate( **inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True ) token = outputs.sequences[0, input_length:] output_str = tokenizer.decode(token) print(output_str) """ the man who cracked the Enigma code during World War II, and who was later convicted of homosexual acts. He was a brilliant mathematician, and a visionary who foresaw the computer age.... """ ``` ## CPU Inference You can run inference on CPU as follows: ```python import torch import transformers from transformers import AutoTokenizer, AutoModelForCausalLM MIN_TRANSFORMERS_VERSION = '4.25.1' # check transformers version assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.' # init tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Base-3B-v1") model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-Base-3B-v1", torch_dtype=torch.bfloat16) # infer prompt = "Alan Turing is" inputs = tokenizer(prompt, return_tensors='pt').to(model.device) input_length = inputs.input_ids.shape[1] outputs = model.generate( **inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True ) token = outputs.sequences[0, input_length:] output_str = tokenizer.decode(token) print(output_str) """ a name that is synonymous with the history of computer science. As the man who invented the Turing machine, the mathematical model that defines the limits of what can be computed, Turing is credited with the invention of the modern computer. Turing was also a mathematician and logician, and his work in these fields led to the development of the field of artificial intelligence... """ ``` Please note that since `LayerNormKernelImpl` is not implemented in fp16 for CPU, we use `bfloat16` for CPU inference. # Uses Excluded uses are described below. ### Misuse, Malicious Use, and Out-of-Scope Use It is the responsibility of the end user to ensure that the model is used in a responsible and ethical manner. #### Out-of-Scope Use `RedPajama-INCITE-Base-3B-v1` is a language model and may not perform well for other use cases outside of its intended scope. For example, it may not be suitable for use in safety-critical applications or for making decisions that have a significant impact on individuals or society. It is important to consider the limitations of the model and to only use it for its intended purpose. #### Misuse and Malicious Use `RedPajama-INCITE-Base-3B-v1` is designed for language modeling. 
Misuse of the model, such as using it to engage in illegal or unethical activities, is strictly prohibited and goes against the principles of the project. Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to: - Generating fake news, misinformation, or propaganda - Promoting hate speech, discrimination, or violence against individuals or groups - Impersonating individuals or organizations without their consent - Engaging in cyberbullying or harassment - Defamatory content - Spamming or scamming - Sharing confidential or sensitive information without proper authorization - Violating the terms of use of the model or the data used to train it - Creating automated bots for malicious purposes such as spreading malware, phishing scams, or spamming ## Limitations `RedPajama-INCITE-Base-3B-v1`, like other language models, has limitations that should be taken into consideration. For example, the model may not always provide accurate or relevant answers, particularly for questions that are complex, ambiguous, or outside of its training data. We therefore welcome contributions from individuals and organizations, and encourage collaboration towards creating a more robust and inclusive chatbot. ## Training **Training Data** Please refer to [togethercomputer/RedPajama-Data-1T](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) **Training Procedure** - **Hardware:** 256 nodes of 6xV100 (IBM Power9), on the OLCF Summit cluster - **Optimizer:** Apex FusedAdam - **Parallelism:** Pipeline parallel 6, tensor parallel 2 - **Gradient Accumulations**: 8 (global batch size 4M tokens) - **Num of Tokens:** 800B Tokens - **Learning rate:** 0.00016 ## Benchmark Please refer to our [blog post](https://together.xyz) for benchmark results. ## Community Join us on [Together Discord](https://discord.gg/6ZVDU8tTD4)
atrytone/MIReAD-Neuro-Contrastive
atrytone
2023-07-06T07:40:38Z
1
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-07-06T07:38:47Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 480 with parameters: ``` {'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.TripletLoss.TripletLoss` with parameters: ``` {'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5} ``` Parameters of the fit()-Method: ``` { "epochs": 3, "evaluation_steps": 100, "evaluator": "sentence_transformers.evaluation.TripletEvaluator.TripletEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 10000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
Vtmpas/ppo-LunarLander-v2
Vtmpas
2023-07-06T07:36:16Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-06T07:35:49Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 240.43 +/- 16.07 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
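The usage section above is still a TODO stub; a minimal sketch with `huggingface_sb3` and `stable_baselines3` follows, assuming the checkpoint is stored under the conventional `ppo-LunarLander-v2.zip` filename (check the repo's file list if loading fails).

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is an assumption; adjust it to match the actual file in the repo.
checkpoint = load_from_hub(repo_id="Vtmpas/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
env.close()
```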
Word2vec/nlpl_224
Word2vec
2023-07-06T07:31:46Z
0
0
null
[ "word2vec", "ukr", "dataset:Ukrainian_CoNLL17_corpus", "license:cc-by-4.0", "region:us" ]
null
2023-07-05T08:02:16Z
--- language: ukr license: cc-by-4.0 tags: - word2vec datasets: Ukrainian_CoNLL17_corpus --- ## Information A word2vec model trained by Andrey Kutuzov (andreku@ifi.uio.no) on a vocabulary of size 99884 corresponding to 299668196 tokens from the dataset `Ukrainian_CoNLL17_corpus`. The model is trained with the following properties: lemmatization and postag, using the Gensim Continuous Bag-of-Words algorithm with a window of 10 and a dimension of 200. ## How to use? ``` from gensim.models import KeyedVectors from huggingface_hub import hf_hub_download model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_224", filename="model.bin"), binary=True, unicode_errors="ignore") ``` ## Citation Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7 This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019. Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information. The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/224.zip
Word2vec/nlpl_223
Word2vec
2023-07-06T07:31:31Z
0
1
null
[ "word2vec", "eng", "dataset:English_Wikipedia_Dump_of_November_2021", "license:cc-by-4.0", "region:us" ]
null
2023-07-05T08:01:57Z
--- language: eng license: cc-by-4.0 tags: - word2vec datasets: English_Wikipedia_Dump_of_November_2021 --- ## Information A word2vec model trained by Andrey Kutuzov (andreku@ifi.uio.no) on a vocabulary of size 199430 corresponding to 2717675616 tokens from the dataset `English_Wikipedia_Dump_of_November_2021`. The model is trained with the following properties: lemmatization and postag, using the Gensim Continuous Skipgram algorithm with a window of 2 and a dimension of 300. ## How to use? ``` from gensim.models import KeyedVectors from huggingface_hub import hf_hub_download model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_223", filename="model.bin"), binary=True, unicode_errors="ignore") ``` ## Citation Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7 This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019. Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information. The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/223.zip
Word2vec/nlpl_208
Word2vec
2023-07-06T07:30:26Z
0
0
null
[ "word2vec", "pol", "dataset:Polish_CommonCrawl_Dump_of_December_2019", "license:cc-by-4.0", "region:us" ]
null
2023-07-05T08:25:40Z
--- language: pol license: cc-by-4.0 tags: - word2vec datasets: Polish_CommonCrawl_Dump_of_December_2019 --- ## Information A word2vec model trained by Krzysztof Wolk (kwolk@pja.edu.pl) on a vocabulary of size 35193029 corresponding to 32565035188 tokens from the dataset `Polish_CommonCrawl_Dump_of_December_2019`. The model is trained with the following properties: no lemmatization and postag, using the Gensim Continuous Skipgram algorithm with a window of 5 and a dimension of 100. ## How to use? ``` from gensim.models import KeyedVectors from huggingface_hub import hf_hub_download model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_208", filename="model.bin"), binary=True, unicode_errors="ignore") ``` ## Citation Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7 This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019. Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information. The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/208.zip
Word2vec/nlpl_207
Word2vec
2023-07-06T07:30:10Z
0
0
null
[ "word2vec", "pol", "dataset:Polish_CommonCrawl_Dump_of_December_2019", "license:cc-by-4.0", "region:us" ]
null
2023-07-05T09:08:03Z
--- language: pol license: cc-by-4.0 tags: - word2vec datasets: Polish_CommonCrawl_Dump_of_December_2019 --- ## Information A word2vec model trained by Krzysztof Wolk (kwolk@pja.edu.pl) on a vocabulary of size 35193029 corresponding to 32565035188 tokens from the dataset `Polish_CommonCrawl_Dump_of_December_2019`. The model is trained with the following properties: no lemmatization and postag, using the Gensim Continuous Bag-of-Words algorithm with a window of 5 and a dimension of 100. ## How to use? ``` from gensim.models import KeyedVectors from huggingface_hub import hf_hub_download model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_207", filename="model.bin"), binary=True, unicode_errors="ignore") ``` ## Citation Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7 This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019. Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information. The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/207.zip
NTQAI/pedestrian_gender_recognition
NTQAI
2023-07-06T07:29:58Z
45,879
15
transformers
[ "transformers", "pytorch", "onnx", "safetensors", "beit", "image-classification", "vision", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-01-06T04:37:51Z
--- license: apache-2.0 tags: - image-classification - vision - generated_from_trainer metrics: - accuracy model-index: - name: outputs results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.9107332624867163 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # outputs This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on the [PETA dataset](http://mmlab.ie.cuhk.edu.hk/projects/PETA_files/Pedestrian%20Attribute%20Recognition%20At%20Far%20Distance.pdf) dataset. It achieves the following results on the evaluation set: - Loss: 0.2170 - Accuracy: 0.9107 ## Model description More information needed #### How to use You can use this model with Transformers *pipeline* . ```python from transformers import pipeline gender_classifier = pipeline(model="NTQAI/pedestrian_gender_recognition") image_path = "abc.jpg" results = gender_classifier(image_path) print(results) ``` ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 1337 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.5193 | 1.0 | 2000 | 0.3346 | 0.8533 | | 0.337 | 2.0 | 4000 | 0.2892 | 0.8778 | | 0.3771 | 3.0 | 6000 | 0.2493 | 0.8969 | | 0.3819 | 4.0 | 8000 | 0.2275 | 0.9100 | | 0.3581 | 5.0 | 10000 | 0.2170 | 0.9107 | ### Framework versions - Transformers 4.24.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1 ### Contact information For personal communication related to this project, please contact Nha Nguyen Van (nha282@gmail.com).
Word2vec/nlpl_206
Word2vec
2023-07-06T07:29:52Z
0
0
null
[ "word2vec", "pol", "dataset:Polish_CommonCrawl_Dump_of_December_2019", "license:cc-by-4.0", "region:us" ]
null
2023-07-05T08:09:12Z
--- language: pol license: cc-by-4.0 tags: - word2vec datasets: Polish_CommonCrawl_Dump_of_December_2019 --- ## Information A word2vec model trained by Krzysztof Wolk (kwolk@pja.edu.pl) on a vocabulary of size 4885806 corresponding to 32565035188 tokens from the dataset `Polish_CommonCrawl_Dump_of_December_2019`. The model is trained with the following properties: no lemmatization and postag, using the fastText Skipgram algorithm with a window of 5 and a dimension of 100. ## How to use? ``` from gensim.models import KeyedVectors from huggingface_hub import hf_hub_download model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_206", filename="model.bin"), binary=True, unicode_errors="ignore") ``` ## Citation Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7 This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019. Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information. The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/206.zip
NTQAI/pedestrian_age_recognition
NTQAI
2023-07-06T07:28:59Z
110,387
3
transformers
[ "transformers", "pytorch", "safetensors", "beit", "image-classification", "vision", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-01-09T03:36:33Z
--- license: apache-2.0 tags: - image-classification - vision - generated_from_trainer metrics: - accuracy model-index: - name: pedestrian_age_recognition_local results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.8073394495412844 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pedestrian_age_recognition_local This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.5004 - Accuracy: 0.8073 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 1337 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.8849 | 1.0 | 2008 | 0.7939 | 0.6807 | | 0.9836 | 2.0 | 4016 | 0.6694 | 0.7336 | | 0.8128 | 3.0 | 6024 | 0.5768 | 0.7668 | | 0.7611 | 4.0 | 8032 | 0.5541 | 0.7833 | | 0.6441 | 5.0 | 10040 | 0.5473 | 0.7773 | | 0.5696 | 6.0 | 12048 | 0.5187 | 0.7971 | | 0.6925 | 7.0 | 14056 | 0.5082 | 0.8038 | | 0.5711 | 8.0 | 16064 | 0.5092 | 0.8098 | | 0.7741 | 9.0 | 18072 | 0.5026 | 0.8020 | | 0.5269 | 10.0 | 20080 | 0.5004 | 0.8073 | ### Framework versions - Transformers 4.24.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1 ### Contact information For personal communication related to this project, please contact Nha Nguyen Van (nha282@gmail.com).
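The card above ships without a usage snippet; here is a minimal inference sketch in the spirit of the sibling gender-recognition card, assuming the standard `transformers` image-classification pipeline (the image path is a placeholder):

```python
from transformers import pipeline

# Load the fine-tuned BEiT age classifier from the Hub
age_classifier = pipeline("image-classification", model="NTQAI/pedestrian_age_recognition")

# "pedestrian.jpg" is a placeholder path to a pedestrian crop
results = age_classifier("pedestrian.jpg")
print(results)  # list of {"label": ..., "score": ...} dicts
```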
Word2vec/nlpl_200
Word2vec
2023-07-06T07:28:57Z
0
0
null
[ "word2vec", "eng", "dataset:English_Wikipedia_Dump_of_October_2019", "license:cc-by-4.0", "region:us" ]
null
2023-07-05T07:56:11Z
--- language: eng license: cc-by-4.0 tags: - word2vec datasets: English_Wikipedia_Dump_of_October_2019 --- ## Information A word2vec model trained by Andrey Kutuzov (andreku@ifi.uio.no) on a vocabulary of size 249212 corresponding to 3530685741 tokens from the dataset `English_Wikipedia_Dump_of_October_2019`. The model is trained with the following properties: lemmatization and POS tagging, using the Gensim Continuous Skipgram algorithm with a window of 3 and a dimension of 300. ## How to use? ``` from gensim.models import KeyedVectors from huggingface_hub import hf_hub_download model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_200", filename="model.bin"), binary=True, unicode_errors="ignore") ``` ## Citation Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7 This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019. Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information. The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/200.zip
Word2vec/nlpl_186
Word2vec
2023-07-06T07:28:40Z
0
0
null
[ "word2vec", "rus", "dataset:Taiga_corpus", "license:cc-by-4.0", "region:us" ]
null
2023-07-05T07:55:53Z
--- language: rus license: cc-by-4.0 tags: - word2vec datasets: Taiga_corpus --- ## Information A word2vec model trained by Andrey Kutuzov (andreku@ifi.uio.no) on a vocabulary of size 249946 corresponding to 4867000000 tokens from the dataset `Taiga_corpus`. The model is trained with the following properties: lemmatization and POS tagging, using the Gensim Continuous Skipgram algorithm with a window of 5 and a dimension of 300. ## How to use? ``` from gensim.models import KeyedVectors from huggingface_hub import hf_hub_download model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_186", filename="model.bin"), binary=True, unicode_errors="ignore") ``` ## Citation Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7 This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019. Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information. The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/186.zip
Word2vec/nlpl_184
Word2vec
2023-07-06T07:28:01Z
0
0
null
[ "word2vec", "rus", "dataset:Russian_News", "license:cc-by-4.0", "region:us" ]
null
2023-07-05T07:55:10Z
--- language: rus license: cc-by-4.0 tags: - word2vec datasets: Russian_News --- ## Information A word2vec model trained by Andrey Kutuzov (andreku@ifi.uio.no) on a vocabulary of size 249318 corresponding to 2550000000 tokens from the dataset `Russian_News`. The model is trained with the following properties: lemmatization and POS tagging, using the Gensim Continuous Skipgram algorithm with a window of 5 and a dimension of 300. ## How to use? ``` from gensim.models import KeyedVectors from huggingface_hub import hf_hub_download model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_184", filename="model.bin"), binary=True, unicode_errors="ignore") ``` ## Citation Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7 This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019. Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information. The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/184.zip
Word2vec/nlpl_180
Word2vec
2023-07-06T07:27:01Z
0
0
null
[ "word2vec", "rus", "dataset:Russian_National_Corpus", "license:cc-by-4.0", "region:us" ]
null
2023-07-05T07:54:19Z
--- language: rus license: cc-by-4.0 tags: - word2vec datasets: Russian_National_Corpus --- ## Information A word2vec model trained by Andrey Kutuzov (andreku@ifi.uio.no) on a vocabulary of size 189193 corresponding to 270000000 tokens from the dataset `Russian_National_Corpus`. The model is trained with the following properties: lemmatization and POS tagging, using the Gensim Continuous Bag-of-Words algorithm with a window of 20 and a dimension of 300. ## How to use? ``` from gensim.models import KeyedVectors from huggingface_hub import hf_hub_download model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_180", filename="model.bin"), binary=True, unicode_errors="ignore") ``` ## Citation Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7 This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019. Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information. The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/180.zip
Bugsys0302/m416
Bugsys0302
2023-07-06T07:16:46Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-06T07:06:10Z
--- license: creativeml-openrail-m ---
Bugsys0302/beltbr
Bugsys0302
2023-07-06T06:59:17Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-06T06:57:43Z
--- license: creativeml-openrail-m ---
afaan00733/my_awesome_model
afaan00733
2023-07-06T06:56:30Z
105
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-04T21:18:08Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: my_awesome_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6546 - Accuracy: 0.4737 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 2 | 0.6732 | 0.4737 | | No log | 2.0 | 4 | 0.6546 | 0.4737 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1 - Datasets 2.13.1 - Tokenizers 0.13.3
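The card above has no usage example; a minimal inference sketch, assuming the checkpoint is loaded by its repo id. The label names and their meaning depend on the (undocumented) training data, so treat the output mapping as unknown:

```python
from transformers import pipeline

# Load the fine-tuned DistilBERT classifier; label semantics depend on the training data
classifier = pipeline("text-classification", model="afaan00733/my_awesome_model")

print(classifier("This is an example sentence to classify."))
```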
rohanbalkondekar/spicy-caiman
rohanbalkondekar
2023-07-06T06:55:23Z
10
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "gpt", "llm", "large language model", "h2o-llmstudio", "en", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-07-06T06:48:59Z
--- language: - en library_name: transformers tags: - gpt - llm - large language model - h2o-llmstudio inference: false thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico --- # Model Card ## Summary This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio). - Base model: [lmsys/vicuna-7b-v1.3](https://huggingface.co/lmsys/vicuna-7b-v1.3) ## Usage To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed. ```bash pip install transformers==4.30.1 pip install accelerate==0.20.3 pip install torch==2.0.0 ``` ```python import torch from transformers import pipeline generate_text = pipeline( model="BeRohan/spicy-caiman", torch_dtype="auto", trust_remote_code=True, use_fast=True, device_map={"": "cuda:0"}, ) res = generate_text( "Why is drinking water so healthy?", min_new_tokens=2, max_new_tokens=256, do_sample=False, num_beams=1, temperature=float(0.3), repetition_penalty=float(1.2), renormalize_logits=True ) print(res[0]["generated_text"]) ``` You can print a sample prompt after the preprocessing step to see how it is feed to the tokenizer: ```python print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"]) ``` ```bash <|prompt|>Why is drinking water so healthy?</s><|answer|> ``` Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`. ```python import torch from h2oai_pipeline import H2OTextGenerationPipeline from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained( "BeRohan/spicy-caiman", use_fast=True, padding_side="left", trust_remote_code=True, ) model = AutoModelForCausalLM.from_pretrained( "BeRohan/spicy-caiman", torch_dtype="auto", device_map={"": "cuda:0"}, trust_remote_code=True, ) generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer) res = generate_text( "Why is drinking water so healthy?", min_new_tokens=2, max_new_tokens=256, do_sample=False, num_beams=1, temperature=float(0.3), repetition_penalty=float(1.2), renormalize_logits=True ) print(res[0]["generated_text"]) ``` You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "BeRohan/spicy-caiman" # either local folder or huggingface model name # Important: The prompt needs to be in the same format the model was trained with. # You can find an example prompt in the experiment logs. 
prompt = "<|prompt|>How are you?</s><|answer|>" tokenizer = AutoTokenizer.from_pretrained( model_name, use_fast=True, trust_remote_code=True, ) model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map={"": "cuda:0"}, trust_remote_code=True, ) model.cuda().eval() inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda") # generate configuration can be modified to your needs tokens = model.generate( **inputs, min_new_tokens=2, max_new_tokens=256, do_sample=False, num_beams=1, temperature=float(0.3), repetition_penalty=float(1.2), renormalize_logits=True )[0] tokens = tokens[inputs["input_ids"].shape[1]:] answer = tokenizer.decode(tokens, skip_special_tokens=True) print(answer) ``` ## Model Architecture ``` LlamaForCausalLM( (model): LlamaModel( (embed_tokens): Embedding(32000, 4096, padding_idx=0) (layers): ModuleList( (0-31): 32 x LlamaDecoderLayer( (self_attn): LlamaAttention( (q_proj): Linear(in_features=4096, out_features=4096, bias=False) (k_proj): Linear(in_features=4096, out_features=4096, bias=False) (v_proj): Linear(in_features=4096, out_features=4096, bias=False) (o_proj): Linear(in_features=4096, out_features=4096, bias=False) (rotary_emb): LlamaRotaryEmbedding() ) (mlp): LlamaMLP( (gate_proj): Linear(in_features=4096, out_features=11008, bias=False) (down_proj): Linear(in_features=11008, out_features=4096, bias=False) (up_proj): Linear(in_features=4096, out_features=11008, bias=False) (act_fn): SiLUActivation() ) (input_layernorm): LlamaRMSNorm() (post_attention_layernorm): LlamaRMSNorm() ) ) (norm): LlamaRMSNorm() ) (lm_head): Linear(in_features=4096, out_features=32000, bias=False) ) ``` ## Model Configuration This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models. ## Model Validation Model validation results using [EleutherAI lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). ```bash CUDA_VISIBLE_DEVICES=0 python main.py --model hf-causal-experimental --model_args pretrained=BeRohan/spicy-caiman --tasks openbookqa,arc_easy,winogrande,hellaswag,arc_challenge,piqa,boolq --device cuda &> eval.log ``` ## Disclaimer Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions. - Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints. - Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion. - Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model. 
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities. - Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues. - Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes. By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
guaguale/path-to-save-model
guaguale
2023-07-06T06:50:20Z
0
0
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-07-05T09:49:11Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 instance_prompt: a photo of sks dog tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA DreamBooth - guaguale/path-to-save-model These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png) LoRA for the text encoder was enabled: False.
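A possible way to try these LoRA weights, assuming a recent `diffusers` release where LoRA attention processors can be attached to the UNet via `load_attn_procs`; the prompt reuses the `sks dog` instance token from the card:

```python
import torch
from diffusers import StableDiffusionPipeline

base_model = "runwayml/stable-diffusion-v1-5"
lora_repo = "guaguale/path-to-save-model"

# Load the base model, then attach the DreamBooth LoRA attention weights
pipe = StableDiffusionPipeline.from_pretrained(base_model, torch_dtype=torch.float16).to("cuda")
pipe.unet.load_attn_procs(lora_repo)

image = pipe("a photo of sks dog in a bucket", num_inference_steps=30).images[0]
image.save("sks_dog.png")
```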
IliyanGochev/whisper-small-bg
IliyanGochev
2023-07-06T06:50:12Z
18
0
transformers
[ "transformers", "pytorch", "whisper", "automatic-speech-recognition", "whisper-event", "generated_from_trainer", "bg", "dataset:mozilla-foundation/common_voice_13_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-07-05T08:04:03Z
--- language: - bg license: apache-2.0 tags: - whisper-event - generated_from_trainer datasets: - mozilla-foundation/common_voice_13_0 metrics: - wer model-index: - name: whisper-small-bg results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: mozilla-foundation/common_voice_13_0 bg type: mozilla-foundation/common_voice_13_0 config: bg split: test args: bg metrics: - name: Wer type: wer value: 44.67291341315287 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-small-bg This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_13_0 bg dataset. It achieves the following results on the evaluation set: - Loss: 9.0612 - Wer: 44.6729 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 5000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 4.9319 | 6.76 | 1000 | 10.0774 | 73.9892 | | 2.6116 | 13.51 | 2000 | 11.4089 | 67.0484 | | 0.9607 | 20.27 | 3000 | 11.8266 | 60.9448 | | 0.3464 | 27.03 | 4000 | 9.9500 | 52.1213 | | 0.0122 | 33.78 | 5000 | 9.0612 | 44.6729 | ### Framework versions - Transformers 4.31.0.dev0 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
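A minimal transcription sketch for the card above, assuming the standard `transformers` ASR pipeline and a local Bulgarian audio file (the filename is a placeholder):

```python
from transformers import pipeline

# Load the Whisper checkpoint fine-tuned on Bulgarian Common Voice
asr = pipeline("automatic-speech-recognition", model="IliyanGochev/whisper-small-bg")

# "sample.wav" is a placeholder for a Bulgarian speech recording
print(asr("sample.wav")["text"])
```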
Bugsys0302/fmmstrb
Bugsys0302
2023-07-06T06:46:46Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-06T06:40:45Z
--- license: creativeml-openrail-m ---
aroot/eng-mya-simcse_random
aroot
2023-07-06T06:36:24Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-06T06:14:10Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-mya-simcse_random results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-mya-simcse_random This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8977 - Bleu: 4.1368 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
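A minimal usage sketch for this card (and the sibling `eng-mya`, `eng-guj` and `eng-fra` checkpoints below), assuming the fine-tune kept the MBart-50 tokenizer and its language codes (`en_XX` for English, `my_MM` for Burmese):

```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model_id = "aroot/eng-mya-simcse_random"
tokenizer = MBart50TokenizerFast.from_pretrained(model_id, src_lang="en_XX")
model = MBartForConditionalGeneration.from_pretrained(model_id)

inputs = tokenizer("How are you today?", return_tensors="pt")
# Force the decoder to start with the Burmese language token
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["my_MM"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```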
aroot/eng-mya-simcse_central
aroot
2023-07-06T06:36:12Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-06T06:14:05Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-mya-simcse_central results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-mya-simcse_central This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8980 - Bleu: 4.1973 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
cherrue/RandomCrop_Rescale_epoch_3_learning_rate_5e_5_decay_0_01
cherrue
2023-07-06T06:30:06Z
63
0
transformers
[ "transformers", "tf", "vit", "image-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-06T05:35:06Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: cherrue/pricetag_classifier results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # cherrue/pricetag_classifier This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.0546 - Validation Loss: 1.2226 - Train Accuracy: 0.3846 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 1251, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 1.3379 | 1.2276 | 0.5128 | 0 | | 1.1973 | 1.1561 | 0.4615 | 1 | | 1.0546 | 1.2226 | 0.3846 | 2 | ### Framework versions - Transformers 4.28.0 - TensorFlow 2.12.0 - Datasets 2.13.1 - Tokenizers 0.13.3
Waterhorse/chessgpt-base-v1
Waterhorse
2023-07-06T06:19:40Z
83
6
transformers
[ "transformers", "pytorch", "gpt_neox", "text-generation", "en", "dataset:Waterhorse/chess_data", "arxiv:2306.09200", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-06-02T22:03:14Z
--- license: apache-2.0 language: - en datasets: - Waterhorse/chess_data --- # Chessgpt-Base-3B-v1 Chessgpt-Base-v1 is the base model of Chessgpt. - Base Model: [Chessgpt-base-v1](https://huggingface.co/Waterhorse/chessgpt-base-v1) - Chat Version: [chessgpt-chat-v1](https://huggingface.co/Waterhorse/chessgpt-chat-v1) Also, we are actively working on the development of the next-generation model, ChessGPT-V2. We welcome any contribution, especially on chess-related datasets. For related matters, please contact xidong.feng.20@ucl.ac.uk. ## Model Details - **Model type**: Language Model - **Language(s)**: English - **License**: Apache 2.0 - **Model Description**: A 2.8B parameter language model pretrained on chess data. ## GPU Inference This requires a GPU with 8GB memory. ```python import torch import transformers from transformers import AutoTokenizer, AutoModelForCausalLM MIN_TRANSFORMERS_VERSION = '4.25.1' # check transformers version assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.' # init tokenizer = AutoTokenizer.from_pretrained("Waterhorse/chessgpt-base-v1") model = AutoModelForCausalLM.from_pretrained("Waterhorse/chessgpt-base-v1", torch_dtype=torch.float16) model = model.to('cuda:0') # infer # Conversation between two prompt = "Q: 1.e4 c5, what is the name of this opening?A:" inputs = tokenizer(prompt, return_tensors='pt').to(model.device) input_length = inputs.input_ids.shape[1] outputs = model.generate( **inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True, ) token = outputs.sequences[0, input_length:] output_str = tokenizer.decode(token) print(output_str) ``` # Uses Excluded uses are described below. ### Direct Use `chessgpt-base-v1` is mainly intended for research on large language models, especially research on policy learning and language modeling. #### Out-of-Scope Use `chessgpt-base-v1` is a language model trained on chess-related data and may not perform well on use cases beyond the chess domain. #### Bias, Risks, and Limitations Just as with any language model, chessgpt-base-v1 carries inherent limitations that necessitate careful consideration. Specifically, it may occasionally generate responses that are irrelevant or incorrect, particularly when tasked with interpreting complex or ambiguous queries. Additionally, given that its training is rooted in online data, the model may inadvertently reflect and perpetuate common online stereotypes and biases. # Evaluation Please refer to our [paper](https://arxiv.org/abs/2306.09200) and [code](https://github.com/waterhorse1/ChessGPT) for benchmark results. # Citation Information ```bash @article{feng2023chessgpt, title={ChessGPT: Bridging Policy Learning and Language Modeling}, author={Feng, Xidong and Luo, Yicheng and Wang, Ziyan and Tang, Hongrui and Yang, Mengyue and Shao, Kun and Mguni, David and Du, Yali and Wang, Jun}, journal={arXiv preprint arXiv:2306.09200}, year={2023} } ```
sukritiverma/thumbs-up-tom_cruise
sukritiverma
2023-07-06T06:14:17Z
1
0
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-07-05T23:31:34Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA text2image fine-tuning - sukritiverma/thumbs-up-tom_cruise These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the None dataset. You can find some example images in the following. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png)
yuuhan/roberta-base-rte-lora
yuuhan
2023-07-06T06:12:21Z
6
0
peft
[ "peft", "text-classification", "en", "dataset:SetFit/rte", "license:apache-2.0", "region:us" ]
text-classification
2023-07-06T06:03:00Z
--- license: apache-2.0 datasets: - SetFit/rte language: - en metrics: - accuracy library_name: peft pipeline_tag: text-classification --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details Accuracy: 0.7328519855595668 on RTE
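A minimal loading sketch, assuming the adapter was trained with PEFT LoRA on top of `roberta-base` for two-label sequence classification and that the classification head was saved with the adapter (`modules_to_save`); the label order is an assumption, so check the checkpoint's config:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_model = "roberta-base"
adapter_id = "yuuhan/roberta-base-rte-lora"

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(base_model, num_labels=2)
# Attach the LoRA adapter weights on top of the frozen base model
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()

# RTE is a premise/hypothesis entailment task
inputs = tokenizer("A man is playing a guitar.", "A person is making music.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)
print(probs)
```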
yuuhan/roberta-base-mnli-lora
yuuhan
2023-07-06T06:01:55Z
0
0
peft
[ "peft", "text-classification", "en", "dataset:SetFit/mnli", "license:apache-2.0", "region:us" ]
text-classification
2023-07-06T05:57:19Z
--- license: apache-2.0 datasets: - SetFit/mnli language: - en metrics: - accuracy library_name: peft pipeline_tag: text-classification --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details Accuracy: 0.8654100866021396 on glue/mnli
LarryAIDraw/sakurako
LarryAIDraw
2023-07-06T06:00:57Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-06T05:27:47Z
--- license: creativeml-openrail-m --- https://civitai.com/models/100652/sakurako-busujima-grand-blue
aroot/eng-guj-simcse_central
aroot
2023-07-06T05:52:24Z
102
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-06T05:29:33Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-guj-simcse_central results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-guj-simcse_central This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.2829 - Bleu: 2.7255 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
mazeinmouse/a2c-PandaReachDense-v2
mazeinmouse
2023-07-06T05:32:52Z
2
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-06T05:29:58Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v2 type: PandaReachDense-v2 metrics: - type: mean_reward value: -2.88 +/- 0.45 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v2** This is a trained model of a **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
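The usage block above is left as a TODO; a minimal loading sketch, assuming the repository stores a checkpoint saved with the `huggingface_sb3` integration and that `panda_gym` provides the environment (the zip filename follows the usual naming convention and is a guess):

```python
import gym
import panda_gym  # registers the PandaReachDense-v2 environment
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# The checkpoint filename is a guess based on the common SB3 hub naming convention
checkpoint = load_from_hub(
    repo_id="mazeinmouse/a2c-PandaReachDense-v2",
    filename="a2c-PandaReachDense-v2.zip",
)
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v2")
obs = env.reset()
for _ in range(100):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```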
insub/distilbert-base-uncased-finetuned-imdb
insub
2023-07-06T05:22:05Z
124
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "fill-mask", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-07-06T05:17:00Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb model-index: - name: distilbert-base-uncased-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 2.4721 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.7086 | 1.0 | 157 | 2.4897 | | 2.5796 | 2.0 | 314 | 2.4230 | | 2.5269 | 3.0 | 471 | 2.4354 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
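A minimal fill-mask sketch for the card above, assuming the standard `transformers` pipeline:

```python
from transformers import pipeline

# Load the IMDb-adapted DistilBERT and fill in the masked token
mask_filler = pipeline("fill-mask", model="insub/distilbert-base-uncased-finetuned-imdb")

for pred in mask_filler("This movie was an absolute [MASK]."):
    print(f"{pred['token_str']:>12}  {pred['score']:.3f}")
```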
aroot/eng-fra-simcse_central
aroot
2023-07-06T05:13:08Z
9
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-06T04:53:14Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-fra-simcse_central results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-fra-simcse_central This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1521 - Bleu: 31.5479 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
ashmitg/model_lora
ashmitg
2023-07-06T05:11:34Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-04T22:28:40Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0.dev0
AAOBA/ppo-PyramidsRND
AAOBA
2023-07-06T05:05:37Z
8
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2023-07-06T05:04:49Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: chikoto/ppo-PyramidsRND 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
squeeze-ai-lab/sq-xgen-7b-8k-base-w4-s45
squeeze-ai-lab
2023-07-06T04:47:33Z
0
0
null
[ "arxiv:2306.07629", "region:us" ]
null
2023-07-06T03:46:56Z
**SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving. **TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced precision quantization. But a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method. Dense-and-Sparse splits weight matrices into two components: a dense component that can be heavily quantized without affecting model performance, as well as a sparse part that preserves sensitive and outlier parts of the weight matrices. With this approach, we are able to serve larger models with a smaller memory footprint, the same latency, and yet higher accuracy and quality. For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf). ## Model description 4-bit XGen-7B Base model with 8K sequence length quantized using SqueezeLLM. More details on the quantization method can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf). More detailed model descriptions can be found in the [link](https://huggingface.co/Salesforce/xgen-7b-8k-base). * **Base Model:** [XGen-7B-8K-Base](https://huggingface.co/Salesforce/xgen-7b-8k-base) (by Salesforce AI Research) * **Bitwidth:** 4-bit * **Sparsity Level:** 0.45% ## Links * **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf) * **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM) --- license: other ---
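The dense-and-sparse idea described above can be illustrated with a toy decomposition. This is only a naive sketch of the concept, not the SqueezeLLM method (which uses sensitivity-based non-uniform quantization); the 0.45% outlier fraction mirrors the sparsity level quoted in the card:

```python
import torch

def dense_and_sparse_decompose(weight: torch.Tensor, outlier_fraction: float = 0.0045):
    """Toy split of a weight matrix into a quantizable dense part and a sparse outlier part."""
    # Treat the largest-magnitude entries as "outliers" kept in full precision
    k = max(1, int(weight.numel() * outlier_fraction))
    threshold = weight.abs().flatten().topk(k).values.min()
    outlier_mask = weight.abs() >= threshold

    sparse_part = (weight * outlier_mask).to_sparse()   # full-precision outliers, stored sparsely
    dense_part = weight * (~outlier_mask)               # everything else, to be quantized

    # Naive 4-bit uniform quantization of the dense part (illustration only)
    scale = dense_part.abs().max() / 7.0
    dense_dequant = torch.clamp((dense_part / scale).round(), -8, 7) * scale

    return dense_dequant + sparse_part.to_dense(), outlier_mask.float().mean().item()

w = torch.randn(256, 256)
w_hat, kept = dense_and_sparse_decompose(w)
print(f"fraction kept in full precision: {kept:.4%}, relative error: {(w - w_hat).norm() / w.norm():.4f}")
```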
squeeze-ai-lab/sq-xgen-7b-8k-base-w3-s45
squeeze-ai-lab
2023-07-06T04:46:32Z
0
0
null
[ "arxiv:2306.07629", "region:us" ]
null
2023-07-06T03:46:53Z
**SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving. **TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced precision quantization. But a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method. Dense-and-Sparse splits weight matrices into two components: a dense component that can be heavily quantized without affecting model performance, as well as a sparse part that preserves sensitive and outlier parts of the weight matrices. With this approach, we are able to serve larger models with a smaller memory footprint, the same latency, and yet higher accuracy and quality. For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf). ## Model description 3-bit XGen-7B Base model with 8K sequence length quantized using SqueezeLLM. More details on the quantization method can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf). More detailed model descriptions can be found in the [link](https://huggingface.co/Salesforce/xgen-7b-8k-base). * **Base Model:** [XGen-7B-8K-Base](https://huggingface.co/Salesforce/xgen-7b-8k-base) (by Salesforce AI Research) * **Bitwidth:** 3-bit * **Sparsity Level:** 0.45% ## Links * **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf) * **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM) --- license: other ---
KPrashanth/Reinforce_Agent_playing_Cartpole_v1
KPrashanth
2023-07-06T04:36:55Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-06T04:36:41Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce_Agent_playing_Cartpole_v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
omnitron/PPO-Huggy
omnitron
2023-07-06T04:23:24Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-07-06T04:22:59Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: omnitron/PPO-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
ocisd4/openllama-zh-7B
ocisd4
2023-07-06T04:13:52Z
5
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-06T03:46:10Z
```python import torch from transformers import LlamaTokenizer, LlamaForCausalLM import transformers tokenizer = LlamaTokenizer.from_pretrained( 'ocisd4/openllama-zh', add_bos_token=False, add_eos_token=False, use_auth_token=True, use_fast=False) model = LlamaForCausalLM.from_pretrained('ocisd4/openllama-zh', device_map='auto',use_auth_token=True) prompt = '關於華碩的傳說' input_ids = tokenizer(prompt, return_tensors="pt").input_ids generation_output = model.generate( input_ids=input_ids, max_new_tokens=256, do_sample=True, top_k=40, top_p=0.95, temperature=0.7, repetition_penalty=1.08, ) print(tokenizer.decode(generation_output[0])) ``` This is a 7B pretrained model, trained from the OpenLLaMA pretrained weights, with a context size of 2048. **We keep updating with new models.**
dangvansam/whisper-base-vi
dangvansam
2023-07-06T04:09:35Z
75
0
transformers
[ "transformers", "pytorch", "whisper", "automatic-speech-recognition", "vi", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-07-05T10:42:24Z
--- language: - vi pipeline_tag: automatic-speech-recognition ---
squeeze-ai-lab/sq-xgen-7b-8k-inst-w4-s45
squeeze-ai-lab
2023-07-06T03:58:19Z
0
0
null
[ "arxiv:2306.07629", "region:us" ]
null
2023-07-06T03:47:10Z
**SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving. **TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced precision quantization. But a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method. Dense-and-Sparse splits weight matrices into two components: a dense component that can be heavily quantized without affecting model performance, as well as a sparse part that preserves sensitive and outlier parts of the weight matrices. With this approach, we are able to serve larger models with a smaller memory footprint, the same latency, and yet higher accuracy and quality. For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf). ## Model description 4-bit XGen-7B instruction-tuned model (i.e. a model finetuned on public-domain instructional data) with 8K sequence length quantized using SqueezeLLM. More details on the quantization method can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf). More detailed model descriptions can be found in the [link](https://huggingface.co/Salesforce/xgen-7b-8k-inst). * **Base Model:** [XGen-7B-8K-Inst](https://huggingface.co/Salesforce/xgen-7b-8k-inst) (by Salesforce AI Research) * **Bitwidth:** 4-bit * **Sparsity Level:** 0.45% ## Links * **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf) * **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM) --- license: other ---
aroot/eng-guj-wsample.43a
aroot
2023-07-06T03:44:33Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-06T03:21:38Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-guj-wsample.43a results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-guj-wsample.43a This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.2191 - Bleu: 2.9237 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
ddoc/pta
ddoc
2023-07-06T03:42:24Z
0
0
null
[ "region:us" ]
null
2023-07-06T03:41:49Z
# stable-diffusion-webui-prompt-travel Travel between prompts in the latent space to make pseudo-animation, extension script for AUTOMATIC1111/stable-diffusion-webui. ---- <p align="left"> <a href="https://github.com/Kahsolt/stable-diffusion-webui-prompt-travel/commits"><img alt="Last Commit" src="https://img.shields.io/github/last-commit/Kahsolt/stable-diffusion-webui-prompt-travel"></a> <a href="https://github.com/Kahsolt/stable-diffusion-webui-prompt-travel/issues"><img alt="GitHub issues" src="https://img.shields.io/github/issues/Kahsolt/stable-diffusion-webui-prompt-travel"></a> <a href="https://github.com/Kahsolt/stable-diffusion-webui-prompt-travel/stargazers"><img alt="GitHub stars" src="https://img.shields.io/github/stars/Kahsolt/stable-diffusion-webui-prompt-travel"></a> <a href="https://github.com/Kahsolt/stable-diffusion-webui-prompt-travel/network"><img alt="GitHub forks" src="https://img.shields.io/github/forks/Kahsolt/stable-diffusion-webui-prompt-travel"></a> <img alt="Language" src="https://img.shields.io/github/languages/top/Kahsolt/stable-diffusion-webui-prompt-travel"> <img alt="License" src="https://img.shields.io/github/license/Kahsolt/stable-diffusion-webui-prompt-travel"> <br/> </p> ![:stable-diffusion-webui-prompt-travel](https://count.getloli.com/get/@:stable-diffusion-webui-prompt-travel) Try interpolating on the hidden vectors of the conditioning prompt to make a seemingly continuous image sequence, or let's say a pseudo-animation. 😀 Not only prompts! We also support non-prompt conditions, read => [README_ext.md](README_ext.md) ~ ⚠ We have set up a QQ chat group for plugin feedback: 616795645 (赤狐屿); any suggestions, discussions and bug reports are highly welcome!! ℹ To be honest, I think this could be used to make PPT fairy-tale picture books <del>or even doujinshi</del>… ℹ A smart way to use it: first manually (blind-)search for two good-looking images that differ only in prompt, then try to travel between them :lolipop: ⚠ Remember to check "Always save all generated images" on in the settings tab, otherwise "upscaling" and "saving intermediate images" will not work.
### Change Log ⚪ Compatibility The latest version `v3.0` is synced & tested with: - [AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui): version `v1.4.0`, tag [v1.4.0](https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases/tag/v1.4.0) - [Mikubill/sd-webui-controlnet](https://github.com/Mikubill/sd-webui-controlnet): version `v1.1.229`, commit [eceeec7a7e](https://github.com/Mikubill/sd-webui-controlnet/commit/eceeec7a7e856867de56e26cae9f3e1076480344) ⚪ Features - 2023/07/05: `v3.0` re-impl core with sd-webui `v1.4.0` callbacks; this new implementation will be slower, but more compatible with other extensions - 2023/04/13: `v2.7` add RIFE to controlnet-travel, skip fusion (experimental) - 2023/03/31: `v2.6` add a tkinter [GUI](#run-each-time) for postprocess toolchain - 2023/03/30: `v2.5` add controlnet-travel script (experimental), interpolating between hint conditions **instead of prompts**, thx for the code base from [sd-webui-controlnet](https://github.com/Mikubill/sd-webui-controlnet) - 2023/02/14: `v2.3` integrate basic function of [depth-image-io](https://github.com/AnonymousCervine/depth-image-io-for-SDWebui) for depth2img models - 2023/01/27: `v2.2` add 'slerp' linear interpolation method - 2023/01/22: `v2.1` add experimental 'replace' mode again, it's not smooth interpolation - 2023/01/20: `v2.0` add optional external [post-processing pipeline](#post-processing-pipeline) to highly boost up smoothness, great thanks to [Real-ESRGAN](https://github.com/xinntao/Real-ESRGAN) and [RIFE](https://github.com/nihui/rife-ncnn-vulkan)!! - 2023/01/16: `v1.5` add upscale options (issue #12); add 'embryo' genesis, reproducing the idea of [stable-diffusion-animation](https://replicate.com/andreasjansson/stable-diffusion-animation) except [FILM](https://github.com/google-research/frame-interpolation) support (issue #11) - 2023/01/12: `v1.4` remove 'replace' & 'grad' mode support, due to webui's code change - 2022/12/11: `v1.3` work in a more 'successive' way, idea borrowed from [deforum](https://github.com/deforum-art/deforum-for-automatic1111-webui) ('genesis' option) - 2022/11/14: `v1.2` walk by substituting token embedding ('replace' mode) - 2022/11/13: `v1.1` walk by optimizing condition ('grad' mode) - 2022/11/10: `v1.0` interpolate linearly on condition/uncondition ('linear' mode) ⚪ Fixups - 2023/07/05: sync sd-webui-controlnet to `v1.1.229` - 2023/04/30: update controlnet core to `v1.1.116` - 2023/03/29: `v2.4` bug fixes on script hook, now working correctly with extra networks & [sd-webui-controlnet](https://github.com/Mikubill/sd-webui-controlnet) - 2023/01/31: keep up with webui's updates, (issue #14: `ImportError: cannot import name 'single_sample_to_image'`) - 2023/01/28: keep up with webui's updates, extra-networks rework - 2023/01/16: `v1.5` apply zero padding when condition length mismatch (issue #10: `RuntimeError: The size of tensor a (77) must match the size of tensor b (154) at non-singleton dimension 0`), typo in demo filename - 2023/01/12: `v1.4` keep up with webui's updates (issue #9: `AttributeError: 'FrozenCLIPEmbedderWithCustomWords' object has no attribute 'process_text'`) - 2022/12/13: `#bdd8bed` fixup not working when negative prompt is left empty (issue #6: `neg_prompts[-1] IndexError: List index out of range`) - 2022/11/27: `v1.2-fix2` keep up with webui's updates (error `ImportError: FrozenCLIPEmbedderWithCustomWords`) - 2022/11/20: `v1.2-fix1` keep up with webui's updates 
(error `AttributeError: p.all_negative_prompts[0]`) ⚠ this script will NOT probably support the schedule syntax (i.e.: `[prompt:prompt:number]`), because interpolate on mutable conditions requires sampler level tracing which is hard to maintain :( ⚠ this script will NOT probably work together with `hires.fix` due to some inner conceptual/logical conflict of `denoising_strength`, you can alternatively perform batch-upscale then batch-img2img. ### How it works? - input **multiple lines** in the prompt/negative-prompt box, each line is called a **stage** - generate images one by one, interpolating from one stage towards the next (batch configs are ignored) - gradually change the digested inputs between prompts - freeze all other settings (`steps`, `sampler`, `cfg factor`, `seed`, etc.) - note that only the major `seed` will be forcely fixed through all processes, you can still set `subseed = -1` to allow more variances - export a video! - follow [post-processing pipeline](#post-processing-pipeline) to get much better result 👌 ⚪ Txt2Img | sampler \ genesis | fixed | successive | embryo | | :-: | :-: | :-: | :-: | | Eular a | ![t2i-f-euler_a](img/t2i-f-euler_a.gif) | ![t2i-s-euler_a](img/t2i-s-euler_a.gif) | ![t2i-e-euler_a](img/t2i-e-euler_a.gif) | | DDIM | ![t2i-f-ddim](img/t2i-f-ddim.gif) | ![t2i-s-ddim](img/t2i-s-ddim.gif) | ![t2i-e-ddim](img/t2i-e-ddim.gif) | ⚪ Img2Img | sampler \ genesis | fixed | successive | embryo | | :-: | :-: | :-: | :-: | | Eular a | ![i2i-f-euler_a](img/i2i-f-euler_a.gif) | ![i2i-s-euler_a](img/i2i-s-euler_a.gif) | ![i2i-e-euler_a](img/i2i-e-euler_a.gif) | | DDIM | ![i2i-f-ddim](img/i2i-f-ddim.gif) | ![i2i-s-ddim](img/i2i-s-ddim.gif) | ![i2i-e-ddim](img/i2i-e-ddim.gif) | post-processing pipeline (case `i2i-f-ddim`): | w/o. post-processing | w/. 
post-processing |
| :-: | :-: |
| ![i2i-f-ddim](img/i2i-f-ddim.gif) | ![i2i-f-ddim-pp](img/i2i-f-ddim-pp.gif) |

other stuff:

| reference image for img2img | embryo image decoded <br/> case `i2i-e-euler_a` with `embryo_step=8` |
| :-: | :-: |
| ![i2i-ref](img/i2i-ref.png) | ![embryo](img/embryo.png) |

⚪ ControlNet support

| prompt-travel with ControlNet (depth) | controlnet-travel (depth) |
| :-: | :-: |
| ![ctrlnet-ref](img/ctrlnet-ref.gif) | ![ctrlnet-depth](img/ctrlnet-depth.gif) |

The examples above were run with the following configuration:

```text
Prompt:
(((masterpiece))), highres, ((boy)), child, cat ears, white hair, red eyes, yellow bell, red cloak, barefoot, angel, [flying], egyptian
((masterpiece)), highres, ((girl)), loli, cat ears, light blue hair, red eyes, magical wand, barefoot, [running]

Negative prompt:
(((nsfw))), ugly,duplicate,morbid,mutilated,tranny,trans,trannsexual,mutation,deformed,long neck,bad anatomy,bad proportions,extra arms,extra legs, disfigured,more than 2 nipples,malformed,mutated,hermaphrodite,out of frame,extra limbs,missing arms,missing legs,poorly drawn hands,poorty drawn face,mutation,poorly drawn,long body,multiple breasts,cloned face,gross proportions, mutated hands,bad hands,bad feet,long neck,missing limb,malformed limbs,malformed hands,fused fingers,too many fingers,extra fingers,missing fingers,extra digit,fewer digits,mutated hands and fingers,lowres,text,error,cropped,worst quality,low quality,normal quality,jpeg artifacts,signature,watermark,username,blurry,text font ufemale focus, poorly drawn, deformed, poorly drawn face, (extra leg:1.3), (extra fingers:1.2),out of frame

Steps: 15
CFG scale: 7
Clip skip: 1
Seed: 114514
Size: 512 x 512
Model hash: animefull-final-pruned.ckpt
Hypernet: (this is my secret :)
```

### Options

- prompt: (list of strings)
- negative prompt: (list of strings)
  - input multiple lines of prompt text
  - we call each line of prompt a stage; usually you need at least 2 lines of text to start the travel
  - if len(positive_prompts) != len(negative_prompts), the shorter one's last item will be repeated to match the longer one
- mode: (categorical)
  - `linear`: linear interpolation on the condition/uncondition of the CLIP output (see the interpolation sketch at the end of this document)
  - `replace`: gradually replace the CLIP output
    - replace_dim: (categorical)
      - `token`: per token-wise vector
      - `channel`: per channel-wise vector
      - `random`: per point-wise element
    - replace_order: (categorical)
      - `similiar`: the most similar first (L1 distance)
      - `different`: the most different first
      - `random`: just randomly
  - `embryo`: pre-denoise a few steps, then hatch a set of images from the common embryo by linear interpolation
- steps: (int, list of int)
  - number of images to interpolate between two stages
  - if int, constant number of travel steps
  - if list of int, length should match `len(stages)-1`, separated by commas, e.g.: `12, 24, 36`
- genesis: (categorical), the prior for each image frame
  - `fixed`: starts from pure noise in the txt2img pipeline, or from the same ref-image given in the img2img pipeline
  - `successive`: starts from the last generated image (this will force txt2img to actually become img2img from the 2nd frame on)
  - `embryo`: starts from the same half-denoised image, see [=> How does it work?](https://replicate.com/andreasjansson/stable-diffusion-animation#readme)
    - (experimental) it only processes 2 lines of prompts, and does not interpolate on negative_prompt :(
- genesis_extra_params
  - denoise_strength: (float), denoise strength in img2img pipelines (for `successive`)
  - embryo_step: (int or float), steps to hatch 
the common embryo (for `embryo`)
    - if >= 1, taken as the step count
    - if < 1, taken as a ratio of the total steps
- video_*
  - fps: (float), FPS of the video, set `0` to disable file saving
  - fmt: (categorical), export video file format
  - pad: (int), repeat beginning/ending frames, giving an in/out time
  - pick: (string), cherry-pick frames by [python slice syntax](https://www.pythoncentral.io/how-to-slice-listsarrays-and-tuples-in-python) before padding (e.g.: set `::2` to get only even frames, set `:-1` to drop the last frame)

### Installation

The easiest way to install it is:

1. Go to the "Extensions" tab in the webui, switch to the "Install from URL" tab
2. Paste https://github.com/Kahsolt/stable-diffusion-webui-prompt-travel.git into "URL for extension's git repository" and click install
3. (Optional) You will need to restart the webui for dependencies to be installed, or you won't be able to generate video files

Manual install:

1. Copy this repo folder to the 'extensions' folder of https://github.com/AUTOMATIC1111/stable-diffusion-webui
2. (Optional) Restart the webui

### Post-processing pipeline

There are still two steps between you and a really smooth, high-resolution animation, namely image **super-resolution** & video **frame interpolation** (see `third-party tools` below).
⚠ Media data processing is intrinsically resource-exhausting, and it's also not webui's work or duty, hence we separated it out. 😃

#### setup once

⚪ auto install (Windows)

- run `cd tools & install.cmd`
- trouble shooting
  - if you get any file system access errors like `Access denied.`, try running it again until you see `Done!` without errors 😂
  - if you get SSL errors about `curl schannel ... Unknown error ... certificate.`, the downloader does not work due to some SSL security reasons; just turn to installing manually... 
- you will have four components: [Busybox](https://frippery.org/busybox/), [Real-ESRGAN](https://github.com/xinntao/Real-ESRGAN-ncnn-vulkan), [RIFE](https://github.com/nihui/rife-ncnn-vulkan) and [FFmpeg](https://ffmpeg.org/) installed under the [tools](tools) folder

⚪ manually install (Windows/Linux/Mac)

ℹ Understand the `tools` folder layout first => [tools/README.txt](tools/README.txt)
ℹ If you indeed wanna put the tools elsewhere, modify the paths in [tools/link.cmd](tools/link.cmd) and run `cd tools & link.cmd` 😉

For Windows:

- download [Busybox](https://frippery.org/busybox/)
- download [Real-ESRGAN](https://github.com/xinntao/Real-ESRGAN/releases) (e.g.: `realesrgan-ncnn-vulkan-20220424-windows.zip`)
  - (optional) download interesting separated model checkpoints (e.g.: `realesr-animevideov3.pth`)
- download the [rife-ncnn-vulkan](https://github.com/nihui/rife-ncnn-vulkan/releases) bundle (e.g.: `rife-ncnn-vulkan-20221029-windows.zip`)
- download an [FFmpeg](https://ffmpeg.org/download.html) binary (e.g.: `ffmpeg-release-full-shared.7z` or `ffmpeg-git-full.7z`)

For Linux/Mac:

- download [Real-ESRGAN](https://github.com/xinntao/Real-ESRGAN/releases) and [rife-ncnn-vulkan](https://github.com/nihui/rife-ncnn-vulkan/releases), put them according to the `tools` folder layout, and manually apply `chmod 755` to the executables
- `ffmpeg` can easily be found in your app store or package manager, e.g. `apt install ffmpeg`; there is NO need to link it under the `tools` folder

#### run each time

⚪ tkinter GUI (Windows/Linux/Mac)

![manager](img/manager.png)

For Windows:

- run `manager.cmd`, to start webui's python venv
- run the [DOSKEY](https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/doskey) `install` (only setup once)
- run the DOSKEY `run`

For Linux/Mac:

- run `../../venv/Scripts/activate`, to start webui's python venv
- run `pip install -r requirements.txt` (only setup once)
- run `python manager.py`

ℹ find the usage help message in the right-click pop-up menu~

⚪ <del> cmd script (Windows) - deprecated </del>

- check the params in [postprocess-config.cmd](postprocess-config.cmd)
- pick one way to start 😃
  - run `postprocess.cmd path/to/<image_folder>` from the command line
  - drag & drop any image folder over the `postprocess.cmd` icon
- once processing is finished, the explorer will be auto-launched to locate the generated file named `synth.mp4`

### Related Projects

⚪ extensions that inspired this repo

- sd-webui-controlnet (various image conditions): [https://github.com/Mikubill/sd-webui-controlnet](https://github.com/Mikubill/sd-webui-controlnet)
- depth-image-io (custom depth2img): [https://github.com/AnonymousCervine/depth-image-io-for-SDWebui](https://github.com/AnonymousCervine/depth-image-io-for-SDWebui)
- animator (img2img): [https://github.com/Animator-Anon/animator_extension](https://github.com/Animator-Anon/animator_extension)
- sd-webui-riffusion (music gen): [https://github.com/enlyth/sd-webui-riffusion](https://github.com/enlyth/sd-webui-riffusion)
- sd-animation (half denoise + FILM):
  - Github: [https://github.com/andreasjansson/cog-stable-diffusion](https://github.com/andreasjansson/cog-stable-diffusion)
  - Replicate: [https://replicate.com/andreasjansson/stable-diffusion-animation](https://replicate.com/andreasjansson/stable-diffusion-animation)
- deforum (img2img + depth model): [https://github.com/deforum-art/deforum-for-automatic1111-webui](https://github.com/deforum-art/deforum-for-automatic1111-webui)
- seed-travel (varying seed): 
[https://github.com/yownas/seed_travel](https://github.com/yownas/seed_travel)

⚪ third-party tools

- image super-resolution
  - ESRGAN:
    - ESRGAN: [https://github.com/xinntao/ESRGAN](https://github.com/xinntao/ESRGAN)
    - Real-ESRGAN: [https://github.com/xinntao/Real-ESRGAN](https://github.com/xinntao/Real-ESRGAN)
    - Real-ESRGAN-ncnn-vulkan (recommended): [https://github.com/xinntao/Real-ESRGAN-ncnn-vulkan](https://github.com/xinntao/Real-ESRGAN-ncnn-vulkan)
- video frame interpolation
  - FILM (recommended): [https://github.com/google-research/frame-interpolation](https://github.com/google-research/frame-interpolation)
  - RIFE:
    - ECCV2022-RIFE: [https://github.com/megvii-research/ECCV2022-RIFE](https://github.com/megvii-research/ECCV2022-RIFE)
    - rife-ncnn-vulkan (recommended): [https://github.com/nihui/rife-ncnn-vulkan](https://github.com/nihui/rife-ncnn-vulkan)
    - Squirrel-RIFE: [https://github.com/Justin62628/Squirrel-RIFE](https://github.com/Justin62628/Squirrel-RIFE)
    - Practical-RIFE: [https://github.com/hzwer/Practical-RIFE](https://github.com/hzwer/Practical-RIFE)
- GNU tool-kits
  - BusyBox: [https://www.busybox.net/](https://www.busybox.net/)
  - BusyBox for Windows: [https://frippery.org/busybox/](https://frippery.org/busybox/)
  - FFmpeg: [https://ffmpeg.org/](https://ffmpeg.org/)

⚪ my other experimental toy extensions

- vid2vid (video2video): [https://github.com/Kahsolt/stable-diffusion-webui-vid2vid](https://github.com/Kahsolt/stable-diffusion-webui-vid2vid)
- hires-fix-progressive (a progressive version of hires.fix): [https://github.com/Kahsolt/stable-diffusion-webui-hires-fix-progressive](https://github.com/Kahsolt/stable-diffusion-webui-hires-fix-progressive)
- sonar (k_diffusion samplers): [https://github.com/Kahsolt/stable-diffusion-webui-sonar](https://github.com/Kahsolt/stable-diffusion-webui-sonar)
- size-travel (kind of an X-Y plot on image size): [https://github.com/Kahsolt/stable-diffusion-webui-size-travel](https://github.com/Kahsolt/stable-diffusion-webui-size-travel)

----
by Armit
2022/11/10 
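### Appendix: condition interpolation sketch (illustrative)

As referenced in the Options section above, the `linear` mode (and the 'slerp' method added in `v2.2`) interpolates between the CLIP conditions of two stages. The sketch below is not this extension's actual implementation; it is a minimal, assumption-laden illustration of what lerp/slerp over condition tensors means, and the tensor shape and travel loop are made up for the example.

```python
# Illustrative sketch only (NOT code from this extension): interpolate between
# two CLIP condition tensors, either linearly ('linear') or spherically ('slerp').
import torch

def lerp(a: torch.Tensor, b: torch.Tensor, t: float) -> torch.Tensor:
    # plain linear interpolation
    return (1.0 - t) * a + t * b

def slerp(a: torch.Tensor, b: torch.Tensor, t: float, eps: float = 1e-8) -> torch.Tensor:
    # spherical linear interpolation, treating the flattened tensors as vectors
    a_f, b_f = a.flatten(), b.flatten()
    cos_omega = torch.clamp(torch.dot(a_f, b_f) / (a_f.norm() * b_f.norm() + eps), -1.0, 1.0)
    omega = torch.acos(cos_omega)
    if omega.abs() < eps:  # nearly parallel -> fall back to lerp
        return lerp(a, b, t)
    sin_omega = torch.sin(omega)
    return (torch.sin((1.0 - t) * omega) / sin_omega) * a + (torch.sin(t * omega) / sin_omega) * b

# "travel" from cond_a to cond_b over `steps` frames
# (77 x 768 is an assumed CLIP condition shape, for illustration only)
cond_a, cond_b = torch.randn(77, 768), torch.randn(77, 768)
steps = 12
conds = [slerp(cond_a, cond_b, i / (steps - 1)) for i in range(steps)]
```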
zhundred/ppo-LunarLander-v2
zhundred
2023-07-06T03:38:13Z
6
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-06T03:37:29Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 254.86 +/- 20.77
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch (the checkpoint filename below is an assumption based on the usual naming convention; check this repo's files and adjust it if it differs):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# NOTE: "ppo-LunarLander-v2.zip" is an assumed filename -- verify it against this repo's files.
checkpoint = load_from_hub("zhundred/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
BaoKien/deberta-base-finetuned-squad-v2
BaoKien
2023-07-06T03:22:36Z
13
0
transformers
[ "transformers", "pytorch", "tensorboard", "deberta", "question-answering", "generated_from_trainer", "dataset:squad_v2", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2023-07-06T01:19:43Z
--- license: mit tags: - generated_from_trainer datasets: - squad_v2 model-index: - name: deberta-base-finetuned-squad-v2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-base-finetuned-squad-v2 This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on the squad_v2 dataset. It achieves the following results on the evaluation set: - Loss: 0.9221 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.753 | 1.0 | 8238 | 0.7286 | | 0.5378 | 2.0 | 16476 | 0.7578 | | 0.3881 | 3.0 | 24714 | 0.9221 | ### Performance - 'exact': 81.84115219405373 - 'f1': 85.19125695340612 - 'total': 11873 - 'HasAns_exact': 80.24628879892038 - 'HasAns_f1': 86.95610556811602 - 'HasAns_total': 5928 - 'NoAns_exact': 83.43145500420522 - 'NoAns_f1': 83.43145500420522 - 'NoAns_total': 5945 - 'best_exact': 81.84115219405373 - 'best_exact_thresh': 0.9994916319847107 - 'best_f1': 85.19125695340657 - 'best_f1_thresh': 0.9994916319847107 - 'total_time_in_seconds': 294.34524957099984 - 'samples_per_second': 40.33698528277447 - 'latency_in_seconds': 0.024791143735450168 ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
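## Example usage (illustrative)

A minimal sketch of querying the model with the 🤗 Transformers `question-answering` pipeline; the question and context strings below are made-up examples, not taken from the evaluation data:

```python
from transformers import pipeline

# Extractive QA with a SQuAD v2-style model (it can also predict "no answer")
qa = pipeline("question-answering", model="BaoKien/deberta-base-finetuned-squad-v2")

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="The deberta-base checkpoint was fine-tuned on the SQuAD v2 dataset for three epochs.",
    handle_impossible_answer=True,  # allow the empty-string 'no answer' prediction
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```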
KJan05/rl-CartPole-v1-unit4
KJan05
2023-07-06T03:21:57Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-06T03:21:45Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: rl-CartPole-v1-unit4 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
squeeze-ai-lab/sq-xgen-7b-8k-base-w4-s0
squeeze-ai-lab
2023-07-06T03:14:48Z
0
0
null
[ "arxiv:2306.07629", "region:us" ]
null
2023-07-05T23:31:51Z
---
license: other
---

**SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving.

**TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced-precision quantization, but a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method. Dense-and-Sparse splits weight matrices into two components: a dense component that can be heavily quantized without affecting model performance, and a sparse part that preserves the sensitive and outlier parts of the weight matrices. With this approach, we are able to serve larger models with a smaller memory footprint, the same latency, and yet higher accuracy and quality. For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf).

## Model description

4-bit XGen-7B Base model with 8K sequence length quantized using SqueezeLLM. More details on the quantization method can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf). A more detailed model description can be found at the [link](https://huggingface.co/Salesforce/xgen-7b-8k-base).

* **Base Model:** [XGen-7B-8K-Base](https://huggingface.co/Salesforce/xgen-7b-8k-base) (by Salesforce AI Research)
* **Bitwidth:** 4-bit
* **Sparsity Level:** 0% (dense-only)

## Links

* **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf)
* **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM)
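## Conceptual sketch of Dense-and-Sparse splitting (illustrative)

The snippet below is not SqueezeLLM code; it is a rough illustration of the idea described above: keep a small set of outlier weights in a sparse full-precision matrix and quantize the remaining dense part to a low bit-width. The outlier fraction, threshold rule, and the naive uniform 4-bit quantizer are assumptions standing in for the sensitivity-aware, non-uniform scheme in the paper.

```python
import torch

def dense_and_sparse_split(W: torch.Tensor, outlier_fraction: float = 0.005):
    # Treat the largest-magnitude weights as outliers (simplistic placeholder rule).
    k = max(1, int(W.numel() * outlier_fraction))
    threshold = W.abs().flatten().topk(k).values.min()
    outlier_mask = W.abs() >= threshold

    sparse_part = (W * outlier_mask).to_sparse()   # kept in full precision
    dense_part = W * (~outlier_mask)               # to be quantized

    # Naive symmetric 4-bit quantization of the dense part.
    scale = dense_part.abs().max() / 7.0
    q_dense = torch.clamp((dense_part / scale).round(), -8, 7).to(torch.int8)
    return q_dense, scale, sparse_part

W = torch.randn(256, 256)
q_dense, scale, sparse_part = dense_and_sparse_split(W)
W_approx = q_dense.float() * scale + sparse_part.to_dense()
print((W - W_approx).abs().max())  # reconstruction error from the dense quantization
```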
squeeze-ai-lab/sq-xgen-7b-8k-base-w3-s0
squeeze-ai-lab
2023-07-06T03:14:31Z
0
0
null
[ "arxiv:2306.07629", "region:us" ]
null
2023-07-05T23:31:15Z
---
license: other
---

**SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving.

**TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced-precision quantization, but a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method. Dense-and-Sparse splits weight matrices into two components: a dense component that can be heavily quantized without affecting model performance, and a sparse part that preserves the sensitive and outlier parts of the weight matrices. With this approach, we are able to serve larger models with a smaller memory footprint, the same latency, and yet higher accuracy and quality. For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf).

## Model description

3-bit XGen-7B Base model with 8K sequence length quantized using SqueezeLLM. More details on the quantization method can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf). A more detailed model description can be found at the [link](https://huggingface.co/Salesforce/xgen-7b-8k-base).

* **Base Model:** [XGen-7B-8K-Base](https://huggingface.co/Salesforce/xgen-7b-8k-base) (by Salesforce AI Research)
* **Bitwidth:** 3-bit
* **Sparsity Level:** 0% (dense-only)

## Links

* **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf)
* **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM)
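## Rough memory footprint estimate (illustrative)

A back-of-envelope calculation of why low-bit weights matter for a ~7B-parameter model. This ignores activations, the KV cache, any layers kept at higher precision, and the (small) sparse/outlier component, so real numbers will differ:

```python
params = 7e9  # approximate parameter count of XGen-7B (assumption for the estimate)

def weight_gb(bits_per_weight: float) -> float:
    # bytes = params * bits / 8; convert to GB
    return params * bits_per_weight / 8 / 1e9

print(f"fp16 : {weight_gb(16):.1f} GB")  # ~14.0 GB
print(f"4-bit: {weight_gb(4):.1f} GB")   # ~3.5 GB
print(f"3-bit: {weight_gb(3):.1f} GB")   # ~2.6 GB
```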
nimakha/ppo-Huggy
nimakha
2023-07-06T03:11:22Z
5
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-07-06T03:11:18Z
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---

# **ppo** Agent playing **Huggy**

This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training

```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**:

1. If the environment is part of ML-Agents' official environments, go to https://huggingface.co/unity
2. Find your model_id: nimakha/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
Bellaaazzzzz/models_fill
Bellaaazzzzz
2023-07-06T02:41:19Z
1
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "controlnet", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-07-06T02:35:57Z
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- controlnet
inference: true
---

# controlnet-Bellaaazzzzz/models_fill

These are controlnet weights trained on runwayml/stable-diffusion-v1-5 with a new type of conditioning.
You can find some example images below.

Validation result of round 1.
![images_0_0](./images_0_0.png)

Validation result of round 2.
![images_1_0](./images_1_0.png)
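## Example usage (illustrative)

A minimal loading sketch with 🧨 Diffusers. This card does not document the conditioning image type, so the input image below is a placeholder; the sketch also assumes the repo contains a standalone `ControlNetModel` checkpoint at its root (adjust the path or subfolder if it differs):

```python
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Assumption: the ControlNet weights load directly from this repo id.
controlnet = ControlNetModel.from_pretrained("Bellaaazzzzz/models_fill", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

conditioning = Image.open("condition.png")  # placeholder conditioning image
image = pipe("a photo of a room", image=conditioning, num_inference_steps=20).images[0]
image.save("output.png")
```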