Dataset column schema:

| Column | Dtype | Range |
|---|---|---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-01 06:29:04 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 530 distinct values |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-01 06:28:51 |
| card | string | length 11 to 1.01M |
mhakami/ppo-LunarLander-v2
mhakami
2023-05-30T17:23:28Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-05-30T17:23:05Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 257.69 +/- 20.43 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
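The usage section in this card is left as a TODO; a minimal loading-and-evaluation sketch with `huggingface_sb3` and `stable-baselines3` might look like the following. The checkpoint filename `ppo-LunarLander-v2.zip` and the use of `gymnasium` are assumptions based on the usual SB3 Hub conventions.

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub; the filename is an assumption based on
# the conventional "<algo>-<env>.zip" naming used by the SB3 Hub integration.
checkpoint = load_from_hub(
    repo_id="mhakami/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Run one greedy evaluation episode.
env = gym.make("LunarLander-v2")
obs, _ = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
env.close()
```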
artificialMoe/kanitalora
artificialMoe
2023-05-30T17:19:36Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-30T17:17:21Z
--- license: creativeml-openrail-m ---
lukasmoeller/mpt-7b-sail-ep1
lukasmoeller
2023-05-30T17:10:53Z
10
3
transformers
[ "transformers", "pytorch", "mpt", "text-generation", "Composer", "MosaicML", "llm-foundry", "StreamingDatasets", "custom_code", "dataset:mc4", "dataset:c4", "dataset:togethercomputer/RedPajama-Data-1T", "dataset:bigcode/the-stack", "dataset:allenai/s2orc", "dataset:lukasmoeller/sail_preprocessed", "arxiv:2305.15225", "arxiv:2108.12409", "arxiv:2302.13971", "arxiv:2205.14135", "arxiv:2010.04245", "arxiv:1909.08053", "arxiv:2302.06675", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-05-29T06:31:03Z
--- license: apache-2.0 tags: - Composer - MosaicML - llm-foundry - StreamingDatasets datasets: - mc4 - c4 - togethercomputer/RedPajama-Data-1T - bigcode/the-stack - allenai/s2orc - lukasmoeller/sail_preprocessed inference: false --- # MPT-7B SAIL This is a fine-tuned variant of MPT-7B, trained on the SAIL dataset (https://arxiv.org/abs/2305.15225). The preprocessed version can be found here: https://huggingface.co/datasets/lukasmoeller/sail_preprocessed I may have forgotten to add EOD tokens at the end of the target, might retrain if anyone is interested. # MPT-7B MPT-7B is a decoder-style transformer pretrained from scratch on 1T tokens of English text and code. This model was trained by [MosaicML](https://www.mosaicml.com). MPT-7B is part of the family of MosaicPretrainedTransformer (MPT) models, which use a modified transformer architecture optimized for efficient training and inference. These architectural changes include performance-optimized layer implementations and the elimination of context length limits by replacing positional embeddings with Attention with Linear Biases ([ALiBi](https://arxiv.org/abs/2108.12409)). Thanks to these modifications, MPT models can be trained with high throughput efficiency and stable convergence. MPT models can also be served efficiently with both standard HuggingFace pipelines and NVIDIA's [FasterTransformer](https://github.com/NVIDIA/FasterTransformer). This model uses the MosaicML LLM codebase, which can be found in the [llm-foundry repository](https://github.com/mosaicml/llm-foundry). It was trained by MosaicML’s NLP team on the [MosaicML platform](https://www.mosaicml.com/training) for LLM pretraining, finetuning, and inference. ### How is this model different? MPT-7B is * **Licensed for the possibility of commercial use** (unlike [LLaMA](https://arxiv.org/abs/2302.13971)). * **Trained on a large amount of data** (1T tokens like [LLaMA](https://arxiv.org/abs/2302.13971) vs. 300B for [Pythia](https://github.com/EleutherAI/pythia), 300B for [OpenLLaMA](https://github.com/openlm-research/open_llama), and 800B for [StableLM](https://github.com/Stability-AI/StableLM)). * **Prepared to handle extremely long inputs** thanks to [ALiBi](https://arxiv.org/abs/2108.12409) (we finetuned [MPT-7B-StoryWriter-65k+](https://huggingface.co/mosaicml/mpt-7b-storywriter) on up to 65k inputs and can handle up to 84k vs. 2k-4k for other open source models). * **Capable of fast training and inference** (via [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf) and [FasterTransformer](https://github.com/NVIDIA/FasterTransformer)) * **Equipped with highly efficient open-source training code** via the [llm-foundry repository](https://github.com/mosaicml/llm-foundry) ### Models finetuned off MPT-7B: The following models are finetuned on MPT-7B: * [MPT-7B-StoryWriter-65k+](https://huggingface.co/mosaicml/mpt-7b-storywriter): a model designed to read and write fictional stories with super long context lengths. Built by finetuning MPT-7B with a context length of 65k tokens on a filtered fiction subset of the [books3 dataset](https://huggingface.co/datasets/the_pile_books3). At inference time, thanks to [ALiBi](https://arxiv.org/abs/2108.12409), MPT-7B-StoryWriter-65k+ can extrapolate even beyond 65k tokens. We demonstrate generations as long as 80k tokens on a single A100-80GB GPU in our [blogpost](www.mosaicml.com/blog/mpt-7b). * License: Apache 2.0 * [MPT-7B-Instruct](https://huggingface.co/mosaicml/mpt-7b-instruct): a model for short-form instruction following. 
Built by finetuning MPT-7B on a [dataset](https://huggingface.co/datasets/mosaicml/dolly_hhrlhf) we also release, derived from the [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and the [Anthropic Helpful and Harmless (HH-RLHF)](https://huggingface.co/datasets/Anthropic/hh-rlhf) datasets. * License: _CC-By-SA-3.0_ * [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-instruct) * [MPT-7B-Chat](https://huggingface.co/mosaicml/mpt-7b-chat): a chatbot-like model for dialogue generation. Built by finetuning MPT-7B on the [ShareGPT-Vicuna](https://huggingface.co/datasets/jeffwan/sharegpt_vicuna), [HC3](https://huggingface.co/datasets/Hello-SimpleAI/HC3), [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca), [HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf), and [Evol-Instruct](https://huggingface.co/datasets/victor123/evol_instruct_70k) datasets. * License: _CC-By-NC-SA-4.0_ * [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-chat) ## Model Date May 5, 2023 ## Model License Apache-2.0 ## Documentation * [Blog post: Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs](https://www.mosaicml.com/blog/mpt-7b) * [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/) * Questions: Feel free to contact us via the [MosaicML Community Slack](https://join.slack.com/t/mosaicml-community/shared_invite/zt-1btms90mc-GipE2ufuPkKY0QBrmF3LSA)! ## How to Use This model is best used with the MosaicML [llm-foundry repository](https://github.com/mosaicml/llm-foundry) for training and finetuning. ```python import transformers model = transformers.AutoModelForCausalLM.from_pretrained( 'mosaicml/mpt-7b', trust_remote_code=True ) ``` Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method. This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package. `MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more. To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model with `attn_impl='triton'` and move the model to `bfloat16`: ```python config = transformers.AutoConfig.from_pretrained( 'mosaicml/mpt-7b', trust_remote_code=True ) config.attn_config['attn_impl'] = 'triton' model = transformers.AutoModelForCausalLM.from_pretrained( 'mosaicml/mpt-7b', config=config, torch_dtype=torch.bfloat16, trust_remote_code=True ) model.to(device='cuda:0') ``` Although the model was trained with a sequence length of 2048, ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example: ```python config = transformers.AutoConfig.from_pretrained( 'mosaicml/mpt-7b', trust_remote_code=True ) config.update({"max_seq_len": 4096}) model = transformers.AutoModelForCausalLM.from_pretrained( 'mosaicml/mpt-7b', config=config, trust_remote_code=True ) ``` This model was trained with the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer. ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b") ``` ## Model Description The architecture is a modification of a standard decoder-only transformer. 
The model has been modified from a standard transformer in the following ways: * It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf) * It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings * It does not use biases | Hyperparameter | Value | |----------------|-------| |n_parameters | 6.7B | |n_layers | 32 | | n_heads | 32 | | d_model | 4096 | | vocab size | 50432 | | sequence length | 2048 | ## Training Data ### Streaming Datasets Data was formatted using the MosaicML [StreamingDataset](https://github.com/mosaicml/streaming) library to host our data in object storage and efficiently stream it to our compute cluster during training. StreamingDataset obviates the need to download the whole dataset before starting training, and allows instant resumption of training from any point in the dataset. ### Data Mix The model was trained for 1T tokens (with batch size 1760 and sequence length 2048). It was trained on the following data mix: | Data Source | Number of Tokens in Source | Proportion | Effective Number of Tokens | Epochs | |-------------|----------------------------|------------|----------------------------|--------| | mC4 3.1.0 - English | 417.99 B | 0.33 | 330 B | 0.14 | | C4 - English - SemDedup 80% | 100.42 B | 0.299 | 299 B | 2.98 | | RedPajama - CommonCrawl | 878.45 B | 0.1 | 100 B | 0.11 | | The Stack - Selected Languages | 463.78 B | 0.1 | 100 B | 0.22 | | RedPajama - Wikipedia - En | 4.87 B | 0.04 | 40 B | 8.21 | | The Stack - Markdown | 107.07 B | 0.035 | 35 B | 0.33 | | S2ORC | 48.85 B | 0.033 | 33 B | 0.68 | | RedPajama - Books | 26.02 B | 0.03 | 30B | 1.15 | | RedPajama - arXiv | 28.10 B | 0.019 | 19 B | 0.68 | | RedPajama - StackExchange | 20.54 B | 0.014 | 14 B |0.68 | Samples for each batch were selected from one of the datasets with the probability specified above. The examples were shuffled within each dataset, and each example was constructed from as many sequences from that dataset as were necessary to fill the 2048 sequence length. The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer. This BPE tokenizer has a number of desirable characteristics, most of which are relevant for tokenizing code: (1) It was trained on a diverse mix of data that includes code (The Pile) (2) It applies consistent space delimitation, unlike the GPT2 tokenizer which tokenizes inconsistently depending on the presence of prefix spaces (3) It contains tokens for repeated space characters, which allows superior compression of text with large amounts of repeated space characters. The model vocabulary size of 50432 was set to be a multiple of 128 (as in [MEGATRON-LM](https://arxiv.org/abs/1909.08053)), model flop utilization (MFU) increased by up to four percentage points. ### Training Configuration This model was trained on 440 A100-40GBs for about 9.5 days using the [MosaicML Platform](https://www.mosaicml.com/platform). The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the [LION](https://arxiv.org/abs/2302.06675) optimizer. ## Limitations and Biases _The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_ MPT-7B (Base) is **not** intended for deployment without finetuning. It should not be used for human-facing interactions without further guardrails and user consent. 
MPT-7B can produce factually incorrect output, and should not be relied on to produce factually accurate information. MPT-7B was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs. ## MosaicML Platform If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b). ## Disclaimer The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes. ## Citation Please cite this model using the following format: ``` @online{MosaicML2023Introducing, author = {MosaicML NLP Team}, title = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs}, year = {2023}, url = {www.mosaicml.com/blog/mpt-7b}, note = {Accessed: 2023-03-28}, % change this date urldate = {2023-03-28} % change this date } ```
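For completeness, here is a minimal end-to-end generation sketch for this SAIL fine-tune, following the loading pattern shown in the card above. It assumes a CUDA GPU with enough memory for bf16 weights, and the prompt is only an illustrative placeholder, since the exact SAIL instruction/EOD format is not documented in this card.

```python
import torch
import transformers
from transformers import AutoTokenizer

# Load the fine-tuned checkpoint the same way the base MPT-7B card describes.
model = transformers.AutoModelForCausalLM.from_pretrained(
    "lukasmoeller/mpt-7b-sail-ep1",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)
model.to("cuda:0")
model.eval()

# MPT-7B was trained with the GPT-NeoX-20B tokenizer.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")

prompt = "Explain what ALiBi positional biases are in one paragraph."  # placeholder prompt; SAIL format is an assumption
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=True, top_p=0.9)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```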
VitaRin/ProtBert-BFD-IS
VitaRin
2023-05-30T16:56:23Z
105
0
transformers
[ "transformers", "pytorch", "tf", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-07T14:33:53Z
## ProtBert-BFD-IS ### Model Description ProtBert-BFD-IS is a model fine-tuned from the pre-trained ProtBert-BFD model for the purpose of sequence classification. It takes a protein sequence input and predicts whether the protein is soluble or insoluble. ProtBert-BFD-IS has been fine-tuned using 3 different training datasets. **Finetuned from model:** Rostlab/prot_bert_bfd GitHub repository with relevant files: https://github.com/VitaRin/ProtBert-IS ## Uses It can be used directly with the pipeline on single sequences: ``` from transformers import AutoTokenizer, AutoModelForSequenceClassification, TextClassificationPipeline import re pipeline = TextClassificationPipeline( model=AutoModelForSequenceClassification.from_pretrained("VitaRin/ProtBert-IS"), tokenizer=AutoTokenizer.from_pretrained("VitaRin/ProtBert-IS"), device=0 ) sequence = "A E T C Z A O" sequence = re.sub(r"[UZOB]", "X", sequence) output = pipeline(sequence) ``` Or read multiple sequences from a .fasta file: ``` from transformers import AutoTokenizer, AutoModelForSequenceClassification, TextClassificationPipeline import re pipeline = TextClassificationPipeline( model=AutoModelForSequenceClassification.from_pretrained("VitaRin/ProtBert-IS"), tokenizer=AutoTokenizer.from_pretrained("VitaRin/ProtBert-IS"), device=0 ) with open("input.fasta", "r") as f: data = f.read().split(">") data.remove(data[0]) sequences = [] for d in data: d = d.split('\n', 1)[-1].replace('\n', '').replace('', ' ') sequences.append(d) sequences = [re.sub(r"[UZOB]", "X", sequence) for sequence in sequences] print(pipeline(sequences)) ```
VitaRin/ProtBert-IS
VitaRin
2023-05-30T16:54:55Z
3
0
transformers
[ "transformers", "pytorch", "tf", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-07T14:27:18Z
## ProtBert-IS ### Model Description ProtBert-IS is a model fine-tuned from the pre-trained ProtBert model for the purpose of sequence classification. It takes a protein sequence input and predicts whether the protein is soluble or insoluble. ProtBert-IS has been fine-tuned using 3 different training datasets. **Finetuned from model:** Rostlab/prot_bert GitHub repository with relevant files: https://github.com/VitaRin/ProtBert-IS ## Uses It can be used directly with the pipeline on single sequences: ``` from transformers import AutoTokenizer, AutoModelForSequenceClassification, TextClassificationPipeline import re pipeline = TextClassificationPipeline( model=AutoModelForSequenceClassification.from_pretrained("VitaRin/ProtBert-IS"), tokenizer=AutoTokenizer.from_pretrained("VitaRin/ProtBert-IS"), device=0 ) sequence = "A E T C Z A O" sequence = re.sub(r"[UZOB]", "X", sequence) output = pipeline(sequence) ``` Or read multiple sequences from a .fasta file: ``` from transformers import AutoTokenizer, AutoModelForSequenceClassification, TextClassificationPipeline import re pipeline = TextClassificationPipeline( model=AutoModelForSequenceClassification.from_pretrained("VitaRin/ProtBert-IS"), tokenizer=AutoTokenizer.from_pretrained("VitaRin/ProtBert-IS"), device=0 ) with open("input.fasta", "r") as f: data = f.read().split(">") data.remove(data[0]) sequences = [] for d in data: d = d.split('\n', 1)[-1].replace('\n', '').replace('', ' ') sequences.append(d) sequences = [re.sub(r"[UZOB]", "X", sequence) for sequence in sequences] print(pipeline(sequences)) ```
vishakhpk/t5-11b-copoet
vishakhpk
2023-05-30T16:51:33Z
7
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "creativity", "creative writing", "poetry writing", "poems", "en", "arxiv:2210.13669", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-11-30T15:35:17Z
--- license: mit language: - en tags: - creativity - creative writing - poetry writing - poems pipeline_tag: text2text-generation --- ### Collaborative Poetry Writing with Instructions As part of our [work](https://arxiv.org/abs/2210.13669), we release our Instruction-tuned T5-11B model specifically aimed at instructions suited to poetry writing.<br> The expected model output is a single poetic sentence or verse in response to an instruction in natural language provided by a user. Here's an example of the collaborative writing process. <br> <img src="https://github.com/copoet-emnlp/copoet-emnlp.github.io/blob/main/images/CoPoet_Glass_Ceilings-1.png?raw=true" width="400"> The model was finetuned using a dataset of poetic sentences scraped from the internet and then paired to an instruction generated via templates. Training and validation data is shared on our [Github](https://github.com/vishakhpk/creative-instructions).<br> Here are some samples of instructions the model was trained on: <br> <img src="https://github.com/vishakhpk/vishakhpk.github.io/blob/master/assets/img/copoet-instructions.png?raw=true" width="400"> More details about the training and evaluation can be found in the [paper](https://arxiv.org/abs/2210.13669).<br> You can also see poems that were written with model help and the corresponding user interactions on our [website](https://copoet-emnlp.github.io ).
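Since the card describes the instruction-to-verse interface but does not include a loading snippet, a minimal sketch with the standard `transformers` seq2seq API could look like this. The instruction wording is an assumption modeled on the templates above, and the 11B checkpoint needs either a large GPU or weight offloading (`device_map="auto"` via `accelerate`).

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "vishakhpk/t5-11b-copoet"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# 11B parameters: half precision plus device_map="auto" to spread/offload the weights.
model = AutoModelForSeq2SeqLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# The instruction phrasing below is an assumption; see the paper/GitHub repo for the exact templates.
instruction = "Write a poetic sentence about glass ceilings."
inputs = tokenizer(instruction, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.95)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```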
andrewjzhou/q-FrozenLake-v1-4x4-noSlippery
andrewjzhou
2023-05-30T16:44:12Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-05-30T16:44:09Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="andrewjzhou/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
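The snippet above relies on a `load_from_hub` helper defined in the course notebook; a self-contained alternative, assuming the pickle holds a dict with `qtable` and `env_id` keys (as the `model["env_id"]` access suggests), might be:

```python
import pickle
import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download

# Download and unpickle the saved artifact (key names are an assumption based on the card's snippet).
path = hf_hub_download(repo_id="andrewjzhou/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

env = gym.make(model["env_id"], is_slippery=False)  # the card notes is_slippery=False may be needed
qtable = model["qtable"]

# Greedy rollout with the learned Q-table.
state, _ = env.reset()
done = False
while not done:
    action = int(np.argmax(qtable[state]))
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
env.close()
```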
zvcz/D
zvcz
2023-05-30T16:24:49Z
0
0
null
[ "region:us" ]
null
2023-05-25T18:27:14Z
# Project Name [Insert your project's name and a brief description of what it does] ## Installation [Add instructions on how to install and use your project, including any requirements or dependencies.] ## Usage [Add instructions on how to use and run your project.] ## Contributing [Add guidelines for contributing to the project, including information on how to submit issues or pull requests.] ## License [Choose a license for your project and include it here.] ## Credits [Add credits to anyone who contributed to your project, including open source libraries or resources that you used] ## Contact [Add your contact information here, such as your email or social media handles.] Feel free to customize this template to fit your project's specific needs!
rlanday/Pixelcopter-PLE-v0-try3
rlanday
2023-05-30T16:19:49Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-05-30T16:19:41Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Pixelcopter-PLE-v0-try3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 42.40 +/- 30.32 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
rlanday/Pixelcopter-PLE-v0-try2
rlanday
2023-05-30T16:18:23Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-05-30T16:18:14Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Pixelcopter-PLE-v0-try2 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 54.40 +/- 50.04 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
rlanday/Reinforce-Pixelcopter-PLE-v0
rlanday
2023-05-30T16:13:30Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-05-30T16:12:50Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 54.70 +/- 55.65 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
hangeol/textual_inversion_cat_test
hangeol
2023-05-30T16:05:53Z
29
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "textual_inversion", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-05-30T15:56:04Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - textual_inversion inference: true --- # Textual inversion text2image fine-tuning - hangeol/textual_inversion_cat_test These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
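A possible inference sketch with `diffusers` (not part of the original card): recent releases can attach the learned embedding via `load_textual_inversion`. The placeholder token below is a guess; use the token string this embedding was actually trained with.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the learned textual-inversion embedding from this repository.
pipe.load_textual_inversion("hangeol/textual_inversion_cat_test")

# "<cat-toy>" is a placeholder-token guess; replace it with the trained token.
image = pipe("a photo of <cat-toy> sitting on a sofa", num_inference_steps=30).images[0]
image.save("textual_inversion_sample.png")
```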
YakovElm/Apache5Classic_Balance_DATA_ratio_2
YakovElm
2023-05-30T15:50:35Z
60
0
transformers
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-30T15:49:55Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Apache5Classic_Balance_DATA_ratio_2 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Apache5Classic_Balance_DATA_ratio_2 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.5624 - Train Accuracy: 0.7304 - Validation Loss: 0.5657 - Validation Accuracy: 0.7135 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.6344 | 0.6639 | 0.5897 | 0.6926 | 0 | | 0.6169 | 0.6854 | 0.5808 | 0.6964 | 1 | | 0.5624 | 0.7304 | 0.5657 | 0.7135 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
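A minimal TensorFlow inference sketch for this checkpoint (assumed usage; the card does not document what the classes represent):

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "YakovElm/Apache5Classic_Balance_DATA_ratio_2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Example issue text to classify", return_tensors="tf", truncation=True)
logits = model(**inputs).logits
probs = tf.nn.softmax(logits, axis=-1).numpy()[0]
print({i: float(p) for i, p in enumerate(probs)})  # class meanings are not documented in the card
```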
paku90/bert-emotion
paku90
2023-05-30T15:48:07Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-30T15:32:59Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tweet_eval metrics: - precision - recall model-index: - name: bert-emotion results: - task: name: Text Classification type: text-classification dataset: name: tweet_eval type: tweet_eval config: emotion split: validation args: emotion metrics: - name: Precision type: precision value: 0.7505623807659564 - name: Recall type: recall value: 0.7243031825553111 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-emotion This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 1.1413 - Precision: 0.7506 - Recall: 0.7243 - Fscore: 0.7340 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Fscore | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:| | 0.8556 | 1.0 | 815 | 0.7854 | 0.7461 | 0.5929 | 0.6088 | | 0.5369 | 2.0 | 1630 | 0.9014 | 0.7549 | 0.7278 | 0.7359 | | 0.2571 | 3.0 | 2445 | 1.1413 | 0.7506 | 0.7243 | 0.7340 | ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
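As a quick usage sketch (not included in the original card), the checkpoint can be served with the standard text-classification pipeline; the label names come from the `tweet_eval` emotion config.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="paku90/bert-emotion")
print(classifier("I can't believe they cancelled the show, this is so frustrating!"))
# Output format: [{'label': ..., 'score': ...}] with tweet_eval emotion labels
```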
hangeol/textual_inversion_cat_original
hangeol
2023-05-30T15:39:46Z
29
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "textual_inversion", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-05-30T15:33:40Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - textual_inversion inference: true --- # Textual inversion text2image fine-tuning - hangeol/textual_inversion_cat_original These are textual inversion adaption weights for runwayml/stable-diffusion-v1-5. You can find some example images in the following.
paom/convnext-tiny-224-finetuned-eurosat-albumentations
paom
2023-05-30T15:37:49Z
195
0
transformers
[ "transformers", "pytorch", "tensorboard", "convnext", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-05-30T14:49:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: convnext-tiny-224-finetuned-eurosat-albumentations results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.0 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # convnext-tiny-224-finetuned-eurosat-albumentations This model is a fine-tuned version of [paom/convnext-tiny-224-finetuned-eurosat-albumentations](https://huggingface.co/paom/convnext-tiny-224-finetuned-eurosat-albumentations) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.6948 - Accuracy: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 1 | 0.6948 | 0.0 | | No log | 2.0 | 2 | 0.5057 | 0.0 | | No log | 3.0 | 3 | 0.2286 | 0.0 | | No log | 4.0 | 4 | 0.0823 | 0.0 | | No log | 5.0 | 5 | 0.0320 | 0.0 | | No log | 6.0 | 6 | 0.0489 | 0.0 | | No log | 7.0 | 7 | 0.0881 | 0.0 | | No log | 8.0 | 8 | 0.1134 | 0.0 | | No log | 9.0 | 9 | 0.1179 | 0.0 | | 0.0638 | 10.0 | 10 | 0.1054 | 0.0 | | 0.0638 | 11.0 | 11 | 0.0826 | 0.0 | | 0.0638 | 12.0 | 12 | 0.0587 | 0.0 | | 0.0638 | 13.0 | 13 | 0.0386 | 0.0 | | 0.0638 | 14.0 | 14 | 0.0241 | 0.0 | | 0.0638 | 15.0 | 15 | 0.0158 | 0.0 | | 0.0638 | 16.0 | 16 | 0.0115 | 0.0 | | 0.0638 | 17.0 | 17 | 0.0096 | 0.0 | | 0.0638 | 18.0 | 18 | 0.0087 | 0.0 | | 0.0638 | 19.0 | 19 | 0.0084 | 0.0 | | 0.0048 | 20.0 | 20 | 0.0083 | 0.0 | ### Framework versions - Transformers 4.29.0 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
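A minimal inference sketch for this image classifier (assumed usage; the image path below is a placeholder):

```python
from PIL import Image
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="paom/convnext-tiny-224-finetuned-eurosat-albumentations",
)
image = Image.open("example_satellite_patch.png")  # placeholder path; use your own image
print(classifier(image, top_k=3))
```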
jsilver/bert-emotion
jsilver
2023-05-30T15:33:48Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-30T15:27:23Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tweet_eval metrics: - precision - recall model-index: - name: bert-emotion results: - task: name: Text Classification type: text-classification dataset: name: tweet_eval type: tweet_eval config: emotion split: validation args: emotion metrics: - name: Precision type: precision value: 0.7505623807659564 - name: Recall type: recall value: 0.7243031825553111 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-emotion This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 1.1413 - Precision: 0.7506 - Recall: 0.7243 - Fscore: 0.7340 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Fscore | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:| | 0.8556 | 1.0 | 815 | 0.7854 | 0.7461 | 0.5929 | 0.6088 | | 0.5369 | 2.0 | 1630 | 0.9014 | 0.7549 | 0.7278 | 0.7359 | | 0.2571 | 3.0 | 2445 | 1.1413 | 0.7506 | 0.7243 | 0.7340 | ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
YakovElm/Apache5Classic_Balance_DATA_ratio_1
YakovElm
2023-05-30T15:29:10Z
61
0
transformers
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-30T15:28:06Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Apache5Classic_Balance_DATA_ratio_1 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Apache5Classic_Balance_DATA_ratio_1 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.6407 - Train Accuracy: 0.6296 - Validation Loss: 0.6324 - Validation Accuracy: 0.6382 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.6980 | 0.5166 | 0.6806 | 0.5641 | 0 | | 0.6895 | 0.5470 | 0.6698 | 0.5755 | 1 | | 0.6407 | 0.6296 | 0.6324 | 0.6382 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
shujunge/vit-base-patch16-224-in21k-finetuned-lora-food101
shujunge
2023-05-30T15:27:38Z
0
0
null
[ "pytorch", "tensorboard", "generated_from_trainer", "dataset:food101", "license:apache-2.0", "region:us" ]
null
2023-05-30T15:22:57Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - food101 metrics: - accuracy model-index: - name: vit-base-patch16-224-in21k-finetuned-lora-food101 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-in21k-finetuned-lora-food101 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the food101 dataset. It achieves the following results on the evaluation set: - Loss: 4.0652 - Accuracy: 0.74 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 512 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 9 | 4.2354 | 0.64 | | 4.4352 | 2.0 | 18 | 4.0652 | 0.74 | ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.13.3
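Because this repository holds PEFT/LoRA weights rather than a full model, loading typically means attaching the adapter to the base ViT. The sketch below is an assumption about how the adapter was saved (including a 101-class food101 head); check the adapter config in the repository before relying on it.

```python
import torch
from PIL import Image
from peft import PeftModel
from transformers import AutoImageProcessor, AutoModelForImageClassification

base_id = "google/vit-base-patch16-224-in21k"
adapter_id = "shujunge/vit-base-patch16-224-in21k-finetuned-lora-food101"

processor = AutoImageProcessor.from_pretrained(base_id)
# food101 has 101 classes; a fresh head of that size is an assumption about the training setup.
base_model = AutoModelForImageClassification.from_pretrained(
    base_id, num_labels=101, ignore_mismatched_sizes=True
)
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

image = Image.open("example_food_photo.jpg")  # placeholder path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(int(logits.argmax(-1)))  # class index; the id-to-label mapping is not documented in the card
```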
nolanaatama/tnfy
nolanaatama
2023-05-30T15:27:12Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-30T15:22:33Z
--- license: creativeml-openrail-m ---
casals90/CartPole-v1
casals90
2023-05-30T15:10:42Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-05-30T15:10:23Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
Sergio10IA/roberta-base-bne-finetuned-sqac
Sergio10IA
2023-05-30T15:06:24Z
107
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "question-answering", "generated_from_trainer", "dataset:sqac", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-05-23T14:40:52Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - sqac model-index: - name: roberta-base-bne-finetuned-sqac results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-bne-finetuned-sqac This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the sqac dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
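A usage sketch for this Spanish extractive-QA checkpoint (not part of the original card):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="Sergio10IA/roberta-base-bne-finetuned-sqac")
result = qa(
    question="¿Dónde vivo?",
    context="Me llamo Sergio y vivo en Madrid.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```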
TimVG/ppo-LunarLander-v2
TimVG
2023-05-30T15:04:10Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-05-30T15:03:48Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 261.70 +/- 21.75 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
roykim/ko_chat
roykim
2023-05-30T15:04:03Z
45
1
transformers
[ "transformers", "pytorch", "chatting", "conversational", "ko", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
2023-05-28T10:53:52Z
--- license: apache-2.0 language: - ko library_name: transformers pipeline_tag: conversational tags: - chatting --- This made for korean chatting model. ```Python import os import sys import fire # import gradio as gr import torch import transformers from peft import PeftModel from transformers import GenerationConfig, LlamaForCausalLM, LlamaTokenizer from utils.callbacks import Iteratorize, Stream from utils.prompter import Prompter if torch.cuda.is_available(): device = "cuda" else: device = "cpu" try: if torch.backends.mps.is_available(): device = "mps" except: # noqa: E722 pass def main( load_8bit: bool = False, base_model: str = "", lora_weights: str = "tloen/alpaca-lora-7b", prompt_template: str = "", # The prompt template to use, will default to alpaca. server_name: str = "0.0.0.0", # Allows to listen on all interfaces by providing '0. share_gradio: bool = False, ): base_model = base_model or os.environ.get("BASE_MODEL", "") assert ( base_model ), "Please specify a --base_model, e.g. --base_model='huggyllama/llama-7b'" prompter = Prompter(prompt_template) tokenizer = LlamaTokenizer.from_pretrained(base_model) if device == "cuda": model = LlamaForCausalLM.from_pretrained( base_model, load_in_8bit=load_8bit, torch_dtype=torch.float16, device_map="auto", ) model = PeftModel.from_pretrained( model, lora_weights, torch_dtype=torch.float16, ) elif device == "mps": model = LlamaForCausalLM.from_pretrained( base_model, device_map={"": device}, torch_dtype=torch.float16, ) model = PeftModel.from_pretrained( model, lora_weights, device_map={"": device}, torch_dtype=torch.float16, ) else: model = LlamaForCausalLM.from_pretrained( base_model, device_map={"": device}, low_cpu_mem_usage=True ) model = PeftModel.from_pretrained( model, lora_weights, device_map={"": device}, ) # unwind broken decapoda-research config model.config.pad_token_id = tokenizer.pad_token_id = 0 # unk model.config.bos_token_id = 1 model.config.eos_token_id = 2 if not load_8bit: model.half() # seems to fix bugs for some users. model.eval() if torch.__version__ >= "2" and sys.platform != "win32": model = torch.compile(model) def evaluate( instruction, input=None, temperature=0.1, top_p=0.75, top_k=40, num_beams=4, max_new_tokens=256, repetition_penalty=4.8, stream_output=False, **kwargs, ): prompt = prompter.generate_prompt(instruction, input) inputs = tokenizer(prompt, return_tensors="pt") input_ids = inputs["input_ids"].to(device) generation_config = GenerationConfig( temperature=temperature, top_p=top_p, top_k=top_k, num_beams=num_beams, repetition_penalty=float(repetition_penalty), **kwargs, ) generate_params = { "input_ids": input_ids, "generation_config": generation_config, "return_dict_in_generate": True, "output_scores": True, "max_new_tokens": max_new_tokens, } if stream_output: # Stream the reply 1 token at a time. # This is based on the trick of using 'stopping_criteria' to create an iterator, # from https://github.com/oobabooga/text-generation-webui/blob/ad37f396fc8bcbab90e11ecf17c56c97bfbd4a9c/modules/text_generation.py#L216-L243. 
def generate_with_callback(callback=None, **kwargs): kwargs.setdefault( "stopping_criteria", transformers.StoppingCriteriaList() ) kwargs["stopping_criteria"].append( Stream(callback_func=callback) ) with torch.no_grad(): model.generate(**kwargs) def generate_with_streaming(**kwargs): return Iteratorize( generate_with_callback, kwargs, callback=None ) with generate_with_streaming(**generate_params) as generator: for output in generator: # new_tokens = len(output) - len(input_ids[0]) decoded_output = tokenizer.decode(output) if output[-1] in [tokenizer.eos_token_id]: break yield prompter.get_response(decoded_output) return # early return for stream_output # Without streaming with torch.no_grad(): generation_output = model.generate( input_ids=input_ids, generation_config=generation_config, return_dict_in_generate=True, output_scores=True, max_new_tokens=max_new_tokens, ) s = generation_output.sequences[0] output = tokenizer.decode(s) yield prompter.get_response(output) # testing code for readme for instruction in [ "Tell me about alpacas.", "Tell me about the president of Mexico in 2019.", "Tell me about the king of France in 2019.", "List all Canadian provinces in alphabetical order.", "Write a Python program that prints the first 10 Fibonacci numbers.", "Write a program that prints the numbers from 1 to 100. But for multiples of three print 'Fizz' instead of the number and for the multiples of five print 'Buzz'. For numbers which are multiples of both three and five print 'FizzBuzz'.", # noqa: E501 "Tell me five words that rhyme with 'shock'.", "Translate the sentence 'I have no mouth but I must scream' into Spanish.", "Count up from 1 to 500.", ]: print("Instruction:", instruction) print("Response:", evaluate(instruction)) print() if __name__ == "__main__": fire.Fire(main) ```
YakovElm/Hyperledger20Classic_512
YakovElm
2023-05-30T14:31:34Z
62
0
transformers
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-30T14:30:55Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Hyperledger20Classic_512 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Hyperledger20Classic_512 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2642 - Train Accuracy: 0.9149 - Validation Loss: 0.2898 - Validation Accuracy: 0.8983 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.3104 | 0.9035 | 0.3020 | 0.8983 | 0 | | 0.2724 | 0.9149 | 0.2950 | 0.8983 | 1 | | 0.2642 | 0.9149 | 0.2898 | 0.8983 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
CleverShovel/open-llama-0.3T-7B-open-instruct-v1.1-sharded-bf16
CleverShovel
2023-05-30T14:31:09Z
6
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-05-30T14:07:04Z
Sharded version of [VMware/open-llama-0.3T-7B-open-instruct-v1.1](https://huggingface.co/VMware/open-llama-0.3T-7B-open-instruct-v1.1). It can be loaded in a free-tier Colab session.
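A hedged loading sketch for a free-tier Colab GPU (half precision plus `device_map="auto"` from `accelerate` is what usually makes a 7B model fit); the instruction-style prompt is an assumption, so check the upstream VMware card for the exact template.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CleverShovel/open-llama-0.3T-7B-open-instruct-v1.1-sharded-bf16"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Sharded half-precision weights plus device_map="auto" keep peak RAM low enough for a free Colab GPU.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    low_cpu_mem_usage=True,
)

# Prompt format is an assumption; see the upstream VMware model card for the intended template.
prompt = "Below is an instruction. Write a response.\n\nInstruction: Name three uses of open-source LLMs.\nResponse:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```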
Anahi97/Gym-time
Anahi97
2023-05-30T14:20:09Z
0
0
null
[ "license:bigscience-openrail-m", "region:us" ]
null
2023-05-30T14:20:09Z
--- license: bigscience-openrail-m ---
ChristianMDahl/segFormer-b3-vertical
ChristianMDahl
2023-05-30T13:42:44Z
1
0
transformers
[ "transformers", "tf", "segformer", "generated_from_keras_callback", "license:other", "endpoints_compatible", "region:us" ]
null
2023-05-28T13:08:52Z
--- license: other tags: - generated_from_keras_callback model-index: - name: ChristianMDahl/segFormer-b3-vertical results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # ChristianMDahl/segFormer-b3-vertical This model is a fine-tuned version of [nvidia/mit-b3](https://huggingface.co/nvidia/mit-b3) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1059 - Validation Loss: 0.1350 - Epoch: 19 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': 6e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.1926 | 0.1873 | 0 | | 0.1677 | 0.1612 | 1 | | 0.1578 | 0.1562 | 2 | | 0.1509 | 0.1521 | 3 | | 0.1451 | 0.1480 | 4 | | 0.1395 | 0.1458 | 5 | | 0.1343 | 0.1431 | 6 | | 0.1314 | 0.1433 | 7 | | 0.1274 | 0.1395 | 8 | | 0.1247 | 0.1377 | 9 | | 0.1218 | 0.1361 | 10 | | 0.1199 | 0.1374 | 11 | | 0.1173 | 0.1350 | 12 | | 0.1153 | 0.1376 | 13 | | 0.1138 | 0.1395 | 14 | | 0.1117 | 0.1357 | 15 | | 0.1102 | 0.1353 | 16 | | 0.1089 | 0.1397 | 17 | | 0.1082 | 0.1364 | 18 | | 0.1059 | 0.1350 | 19 | ### Framework versions - Transformers 4.28.1 - TensorFlow 2.10.1 - Datasets 2.12.0 - Tokenizers 0.13.0.dev0
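The card does not show inference code; a TensorFlow sketch, assuming the checkpoint is a standard SegFormer semantic-segmentation model whose preprocessing matches the `nvidia/mit-b3` base (both assumptions), might look like this:

```python
import tensorflow as tf
from PIL import Image
from transformers import AutoImageProcessor, TFSegformerForSemanticSegmentation

# The processor is taken from the base encoder repo, since this repo may not ship one (assumption).
processor = AutoImageProcessor.from_pretrained("nvidia/mit-b3")
model = TFSegformerForSemanticSegmentation.from_pretrained("ChristianMDahl/segFormer-b3-vertical")

image = Image.open("example_image.png")  # placeholder path
inputs = processor(images=image, return_tensors="tf")
logits = model(**inputs).logits  # shape: (batch, num_labels, height/4, width/4)
mask = tf.argmax(logits, axis=1)[0].numpy()  # per-pixel class ids; label meanings are not documented
```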
Akhilsplendid/bart-1
Akhilsplendid
2023-05-30T13:41:50Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-05-14T14:18:31Z
--- license: mit tags: - generated_from_trainer model-index: - name: bart-1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-1 This model is a fine-tuned version of [philschmid/bart-large-cnn-samsum](https://huggingface.co/philschmid/bart-large-cnn-samsum) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
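The card does not state the target task; given that the base checkpoint is a SAMSum dialogue-summarization model, summarization is the most plausible usage (an assumption):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Akhilsplendid/bart-1")
dialogue = (
    "Anna: Are we still meeting at 6?\n"
    "Ben: Yes, but can we push it to 6:30? Traffic is terrible.\n"
    "Anna: Sure, see you at the cafe then."
)
print(summarizer(dialogue, max_length=40, min_length=5)[0]["summary_text"])
```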
BADeid/Nature_Project
BADeid
2023-05-30T13:32:40Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-30T13:25:51Z
--- license: creativeml-openrail-m ---
sl-alex/mosaicml-mpt-7b-chat-qlora
sl-alex
2023-05-30T13:29:14Z
11
0
transformers
[ "transformers", "pytorch", "mpt", "text-generation", "Composer", "MosaicML", "llm-foundry", "custom_code", "dataset:jeffwan/sharegpt_vicuna", "dataset:Hello-SimpleAI/HC3", "dataset:tatsu-lab/alpaca", "dataset:Anthropic/hh-rlhf", "dataset:victor123/evol_instruct_70k", "arxiv:2205.14135", "arxiv:2108.12409", "arxiv:2010.04245", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-07-12T16:40:22Z
--- license: cc-by-nc-sa-4.0 datasets: - jeffwan/sharegpt_vicuna - Hello-SimpleAI/HC3 - tatsu-lab/alpaca - Anthropic/hh-rlhf - victor123/evol_instruct_70k tags: - Composer - MosaicML - llm-foundry inference: false --- # MPT-7B-Chat MPT-7B-Chat is a chatbot-like model for dialogue generation. It was built by finetuning [MPT-7B](https://huggingface.co/spaces/mosaicml/mpt-7b) on the [ShareGPT-Vicuna](https://huggingface.co/datasets/jeffwan/sharegpt_vicuna), [HC3](https://huggingface.co/datasets/Hello-SimpleAI/HC3), [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca), [HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf), and [Evol-Instruct](https://huggingface.co/datasets/victor123/evol_instruct_70k) datasets. * License: _CC-By-NC-SA-4.0_ (non-commercial use only) * [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-chat) This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture. ## Model Date May 5, 2023 ## Model License _CC-By-NC-SA-4.0_ (non-commercial use only) ## Documentation * [Blog post: Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs](https://www.mosaicml.com/blog/mpt-7b) * [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/) * Questions: Feel free to contact us via the [MosaicML Community Slack](https://join.slack.com/t/mosaicml-community/shared_invite/zt-w0tiddn9-WGTlRpfjcO9J5jyrMub1dg)! ### Example Dialogue **SamIAm85**: >I want you to come up with a tweet based on this summary of the article: >"Introducing MPT-7B, the latest entry in our MosaicML Foundation Series. >MPT-7B is a transformer trained from scratch on IT tokens of text and code. >It is open source and it matches the quality of LLaMA-7B. >MPT-7B was trained on the MosaicML platform in 9.5 days with zero human intervention at a cost of ~$200k. >Starting today, you can train, finetune, and deploy your own private MPT models, either starting from one of our checkpoints or training from scratch. >For inspiration, we are also releasing three finetuned models in addition to the base MPT-7B: MPT-7B-Instruct, MPT-7B-Chat, and MPT-7B-Story Writer-65k+, the last of which uses a context length of 65k tokens! **MPT-7B-Instruct**: >Here's my suggestion for a tweet: >"Exciting news! @MosaicML has launched #MPT_7B - a new standard for open-source LLMs. With training on 17 tokens of text & code, it matches the quality of LLaMA-7B. Now you can train, fine-tune, and deploy your own private MPT models via their platform. Check out the full blog post here: https://bit.ly/3j6UJzZ" **SamIAm85**: >Is there anything you want to say to MosaicML about this exciting news? **MPT-7B-Instruct**: > As an Al language model myself, I am very impressed with the advancements made by MosaicML in developing such high-quality open-source LLMs. > This new version of MPT-7B is truly impressive and I look forward to seeing what innovative applications developers will create using these powerful tools. > Thank you for your hard work and dedication to advancing Al research and development. ## How to Use This model is best used with the MosaicML [llm-foundry repository](https://github.com/mosaicml/llm-foundry) for training and finetuning. ```python import transformers model = transformers.AutoModelForCausalLM.from_pretrained( 'mosaicml/mpt-7b-chat', trust_remote_code=True ) ``` Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method. 
This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package. `MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more. To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model with `attn_impl='triton'` and move the model to `bfloat16`: ```python config = transformers.AutoConfig.from_pretrained( 'mosaicml/mpt-7b-chat', trust_remote_code=True ) config.attn_config['attn_impl'] = 'triton' model = transformers.AutoModelForCausalLM.from_pretrained( 'mosaicml/mpt-7b-chat', config=config, torch_dtype=torch.bfloat16, trust_remote_code=True ) model.to(device='cuda:0') ``` Although the model was trained with a sequence length of 2048, ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example: ```python config = transformers.AutoConfig.from_pretrained( 'mosaicml/mpt-7b-chat', trust_remote_code=True ) config.update({"max_seq_len": 4096}) model = transformers.AutoModelForCausalLM.from_pretrained( 'mosaicml/mpt-7b-chat', config=config, trust_remote_code=True ) ``` This model was trained with the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer. ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b") ``` ## Model Description The architecture is a modification of a standard decoder-only transformer. The model has been modified from a standard transformer in the following ways: * It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf) * It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings * It does not use biases | Hyperparameter | Value | |----------------|-------| |n_parameters | 6.7B | |n_layers | 32 | | n_heads | 32 | | d_model | 4096 | | vocab size | 50432 | | sequence length | 2048 | ## Limitations and Biases _The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_ MPT-7B-Chat can produce factually incorrect output, and should not be relied on to produce factually accurate information. MPT-7B-Chat was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs. ## Acknowledgements This model was finetuned by Sam Havens and the MosaicML NLP team. ## Disclaimer The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes. ## MosaicML Platform If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b).
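Putting the loading snippets above together, a minimal end-to-end generation sketch could look like the following. The prompt, sampling settings, and GPU placement are illustrative assumptions, and this card does not prescribe a specific chat prompt format.

```python
import torch
import transformers

name = 'mosaicml/mpt-7b-chat'
tokenizer = transformers.AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
model = transformers.AutoModelForCausalLM.from_pretrained(
    name,
    torch_dtype=torch.bfloat16,   # assumption: a GPU with bf16 support is available
    trust_remote_code=True
)
model.to(device='cuda:0')
model.eval()

prompt = "What is a quoll?"  # illustrative prompt, not a prescribed chat template
inputs = tokenizer(prompt, return_tensors='pt').to('cuda:0')
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.95, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```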
## Citation Please cite this model using the following format: ``` @online{MosaicML2023Introducing, author = {MosaicML NLP Team}, title = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs}, year = {2023}, url = {www.mosaicml.com/blog/mpt-7b}, note = {Accessed: 2023-03-28}, % change this date urldate = {2023-03-28} % change this date } ```
Joocheol/gpt2-wikitext2
Joocheol
2023-05-30T13:27:27Z
163
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-05-30T12:32:22Z
--- license: mit tags: - generated_from_trainer model-index: - name: gpt2-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-wikitext2 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 6.1132 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 6.5581 | 1.0 | 2249 | 6.4702 | | 6.192 | 2.0 | 4498 | 6.2018 | | 6.0168 | 3.0 | 6747 | 6.1132 | ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
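Since the card does not include a usage example, a minimal sketch for loading this checkpoint and sampling a continuation with the standard `transformers` API might look like this (the prompt and sampling settings are arbitrary):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Joocheol/gpt2-wikitext2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The history of natural language processing", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```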
QuickSilver007/rlv2unit7_cpu1_poca-SoccerTwos
QuickSilver007
2023-05-30T13:05:44Z
6
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "license:mit", "region:us" ]
reinforcement-learning
2023-05-30T13:00:28Z
--- license: mit tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos library_name: ml-agents --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Write your model_id: QuickSilver007/rlv2unit7_cpu1_poca-SoccerTwos 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
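To run or inspect the agent locally, the trained files can usually be pulled from the Hub with the ML-Agents Hub integration; the command below is a sketch, and the local directory name is an arbitrary assumption:

```
mlagents-load-from-hf --repo-id="QuickSilver007/rlv2unit7_cpu1_poca-SoccerTwos" --local-dir="./downloads/SoccerTwos"
```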
trustvare/TrustVare-OLM-Converter-Software
trustvare
2023-05-30T12:55:32Z
0
0
null
[ "region:us" ]
null
2023-05-30T12:51:36Z
The OLM Converter Tool converts OLM files to other formats, including PST, EML, and MBOX. Microsoft Outlook for Mac uses OLM files, whereas other email clients such as Thunderbird, Apple Mail, and Microsoft Outlook for Windows use PST, EML, and MBOX files. The program is aimed at users who want to move from Outlook for Mac to an email client that does not support OLM files. It performs the conversion while preserving the integrity of the data. Using an efficient and trustworthy OLM conversion tool is important for a successful data transfer, and a free demo version is available for exploring the software's features and capabilities. Read more: https://www.trustvare.com/olm/
zhanghwei/unit4
zhanghwei
2023-05-30T12:44:05Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-05-30T12:43:54Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: unit4 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 454.30 +/- 137.10 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
GeneZC/sparsebert-base
GeneZC
2023-05-30T12:35:14Z
33
0
transformers
[ "transformers", "pytorch", "bert", "dataset:wikipedia", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2023-05-30T12:31:02Z
--- license: apache-2.0 datasets: - wikipedia --- # Model details `sparsebert-base` sparsified from `bert-base-uncased` on `Wikipedia`.
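The card does not show a loading example; assuming the checkpoint exposes a standard BERT encoder, extracting token embeddings with the `transformers` auto classes should look roughly like this:

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "GeneZC/sparsebert-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

inputs = tokenizer("A sparsified BERT encoder.", return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (batch, tokens, hidden_size)
print(hidden_states.shape)
```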
emresvd/u156
emresvd
2023-05-30T12:26:27Z
0
0
fastai
[ "fastai", "region:us" ]
null
2023-05-30T12:26:04Z
--- tags: - fastai --- # Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
haonan-li/bactrian-cs-llama-7b-lora
haonan-li
2023-05-30T12:24:31Z
0
0
null
[ "arxiv:2305.15011", "license:mit", "region:us" ]
null
2023-05-30T12:24:16Z
--- license: mit --- This repo contains a low-rank adapter (LoRA) for LLaMA-7b fit on the [Stanford-Alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca) and [databricks-dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data) data in Czech. ### Dataset Creation 1. English Instructions: The English instuctions are obtained from [alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca), and [dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data). 2. Instruction Translation: The instructions (and inputs) are translated into the target languages using Google Translation API (conducted on April 2023). 3. Output Generation: We generate output from `gpt-3.5-turbo` for each language (conducted on April 2023). <h3 align="center"> <img src="https://raw.githubusercontent.com/fajri91/eval_picts/master/BactrianX_dataset.jpg" width="950" align="center"> </h3> ### Training Parameters The code for training the model is provided in our [github](https://github.com/mbzuai-nlp/Bactrian-X), which is adapted from [Alpaca-LoRA](https://github.com/tloen/alpaca-lora). This version of the weights was trained with the following hyperparameters: - Epochs: 8 - Batch size: 128 - Cutoff length: 512 - Learning rate: 3e-4 - Lora _r_: 16 - Lora target modules: q_proj, v_proj, That is: ``` python finetune.py \ --base_model='decapoda-research/llama-7b-hf' \ --num_epochs=8 \ --cutoff_len=1024 \ --group_by_length \ --output_dir='./bactrian-cs-7b-lora' \ --lora_target_modules='[q_proj,v_proj]' \ --lora_r=16 \ --micro_batch_size=32 ``` Instructions for running it can be found at https://github.com/MBZUAI-nlp/Bactrian-X. ### Discussion of Biases (1) Translation bias; (2) Potential English-culture bias in the translated dataset. ### Citation Information ``` @misc{li2023bactrianx, title={Bactrian-X : A Multilingual Replicable Instruction-Following Model with Low-Rank Adaptation}, author={Haonan Li and Fajri Koto and Minghao Wu and Alham Fikri Aji and Timothy Baldwin}, year={2023}, eprint={2305.15011}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
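The card documents training but not inference; a loading sketch with PEFT on top of the base checkpoint named above could look like the following (it assumes `peft` and `accelerate` are installed, and the prompt is illustrative rather than the project's official template; see the Bactrian-X repository for the intended prompt format):

```python
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

base_id = "decapoda-research/llama-7b-hf"           # base model named in the training command above
adapter_id = "haonan-li/bactrian-cs-llama-7b-lora"  # this repo

tokenizer = LlamaTokenizer.from_pretrained(base_id)
base_model = LlamaForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "Napište krátkou báseň o jaru."  # illustrative Czech instruction
inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```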
haonan-li/bactrian-uk-llama-7b-lora
haonan-li
2023-05-30T12:23:54Z
0
0
null
[ "arxiv:2305.15011", "license:mit", "region:us" ]
null
2023-05-30T12:23:11Z
--- license: mit --- This repo contains a low-rank adapter (LoRA) for LLaMA-7b fit on the [Stanford-Alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca) and [databricks-dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data) data in Ukrainian. ### Dataset Creation 1. English Instructions: The English instuctions are obtained from [alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca), and [dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data). 2. Instruction Translation: The instructions (and inputs) are translated into the target languages using Google Translation API (conducted on April 2023). 3. Output Generation: We generate output from `gpt-3.5-turbo` for each language (conducted on April 2023). <h3 align="center"> <img src="https://raw.githubusercontent.com/fajri91/eval_picts/master/BactrianX_dataset.jpg" width="950" align="center"> </h3> ### Training Parameters The code for training the model is provided in our [github](https://github.com/mbzuai-nlp/Bactrian-X), which is adapted from [Alpaca-LoRA](https://github.com/tloen/alpaca-lora). This version of the weights was trained with the following hyperparameters: - Epochs: 8 - Batch size: 128 - Cutoff length: 512 - Learning rate: 3e-4 - Lora _r_: 16 - Lora target modules: q_proj, v_proj, That is: ``` python finetune.py \ --base_model='decapoda-research/llama-7b-hf' \ --num_epochs=8 \ --cutoff_len=1024 \ --group_by_length \ --output_dir='./bactrian-uk-7b-lora' \ --lora_target_modules='[q_proj,v_proj]' \ --lora_r=16 \ --micro_batch_size=32 ``` Instructions for running it can be found at https://github.com/MBZUAI-nlp/Bactrian-X. ### Discussion of Biases (1) Translation bias; (2) Potential English-culture bias in the translated dataset. ### Citation Information ``` @misc{li2023bactrianx, title={Bactrian-X : A Multilingual Replicable Instruction-Following Model with Low-Rank Adaptation}, author={Haonan Li and Fajri Koto and Minghao Wu and Alham Fikri Aji and Timothy Baldwin}, year={2023}, eprint={2305.15011}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
haonan-li/bactrian-sv-llama-7b-lora
haonan-li
2023-05-30T12:23:11Z
0
0
null
[ "arxiv:2305.15011", "license:mit", "region:us" ]
null
2023-05-30T12:22:26Z
--- license: mit --- This repo contains a low-rank adapter (LoRA) for LLaMA-7b fit on the [Stanford-Alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca) and [databricks-dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data) data in Swedish. ### Dataset Creation 1. English Instructions: The English instuctions are obtained from [alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca), and [dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data). 2. Instruction Translation: The instructions (and inputs) are translated into the target languages using Google Translation API (conducted on April 2023). 3. Output Generation: We generate output from `gpt-3.5-turbo` for each language (conducted on April 2023). <h3 align="center"> <img src="https://raw.githubusercontent.com/fajri91/eval_picts/master/BactrianX_dataset.jpg" width="950" align="center"> </h3> ### Training Parameters The code for training the model is provided in our [github](https://github.com/mbzuai-nlp/Bactrian-X), which is adapted from [Alpaca-LoRA](https://github.com/tloen/alpaca-lora). This version of the weights was trained with the following hyperparameters: - Epochs: 8 - Batch size: 128 - Cutoff length: 512 - Learning rate: 3e-4 - Lora _r_: 16 - Lora target modules: q_proj, v_proj, That is: ``` python finetune.py \ --base_model='decapoda-research/llama-7b-hf' \ --num_epochs=8 \ --cutoff_len=1024 \ --group_by_length \ --output_dir='./bactrian-sv-7b-lora' \ --lora_target_modules='[q_proj,v_proj]' \ --lora_r=16 \ --micro_batch_size=32 ``` Instructions for running it can be found at https://github.com/MBZUAI-nlp/Bactrian-X. ### Discussion of Biases (1) Translation bias; (2) Potential English-culture bias in the translated dataset. ### Citation Information ``` @misc{li2023bactrianx, title={Bactrian-X : A Multilingual Replicable Instruction-Following Model with Low-Rank Adaptation}, author={Haonan Li and Fajri Koto and Minghao Wu and Alham Fikri Aji and Timothy Baldwin}, year={2023}, eprint={2305.15011}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
haonan-li/bactrian-sl-llama-7b-lora
haonan-li
2023-05-30T12:22:26Z
0
0
null
[ "arxiv:2305.15011", "license:mit", "region:us" ]
null
2023-05-30T12:21:56Z
--- license: mit --- This repo contains a low-rank adapter (LoRA) for LLaMA-7b fit on the [Stanford-Alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca) and [databricks-dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data) data in Slovenian. ### Dataset Creation 1. English Instructions: The English instuctions are obtained from [alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca), and [dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data). 2. Instruction Translation: The instructions (and inputs) are translated into the target languages using Google Translation API (conducted on April 2023). 3. Output Generation: We generate output from `gpt-3.5-turbo` for each language (conducted on April 2023). <h3 align="center"> <img src="https://raw.githubusercontent.com/fajri91/eval_picts/master/BactrianX_dataset.jpg" width="950" align="center"> </h3> ### Training Parameters The code for training the model is provided in our [github](https://github.com/mbzuai-nlp/Bactrian-X), which is adapted from [Alpaca-LoRA](https://github.com/tloen/alpaca-lora). This version of the weights was trained with the following hyperparameters: - Epochs: 8 - Batch size: 128 - Cutoff length: 512 - Learning rate: 3e-4 - Lora _r_: 16 - Lora target modules: q_proj, v_proj, That is: ``` python finetune.py \ --base_model='decapoda-research/llama-7b-hf' \ --num_epochs=8 \ --cutoff_len=1024 \ --group_by_length \ --output_dir='./bactrian-sl-7b-lora' \ --lora_target_modules='[q_proj,v_proj]' \ --lora_r=16 \ --micro_batch_size=32 ``` Instructions for running it can be found at https://github.com/MBZUAI-nlp/Bactrian-X. ### Discussion of Biases (1) Translation bias; (2) Potential English-culture bias in the translated dataset. ### Citation Information ``` @misc{li2023bactrianx, title={Bactrian-X : A Multilingual Replicable Instruction-Following Model with Low-Rank Adaptation}, author={Haonan Li and Fajri Koto and Minghao Wu and Alham Fikri Aji and Timothy Baldwin}, year={2023}, eprint={2305.15011}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
asti339/ki2
asti339
2023-05-30T12:22:20Z
1
0
tf-keras
[ "tf-keras", "image-classification", "region:us" ]
image-classification
2023-05-30T12:20:55Z
--- pipeline_tag: image-classification ---
haonan-li/bactrian-ro-llama-7b-lora
haonan-li
2023-05-30T12:21:07Z
0
0
null
[ "arxiv:2305.15011", "license:mit", "region:us" ]
null
2023-05-30T12:20:30Z
--- license: mit --- This repo contains a low-rank adapter (LoRA) for LLaMA-7b fit on the [Stanford-Alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca) and [databricks-dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data) data in Romanian. ### Dataset Creation 1. English Instructions: The English instuctions are obtained from [alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca), and [dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data). 2. Instruction Translation: The instructions (and inputs) are translated into the target languages using Google Translation API (conducted on April 2023). 3. Output Generation: We generate output from `gpt-3.5-turbo` for each language (conducted on April 2023). <h3 align="center"> <img src="https://raw.githubusercontent.com/fajri91/eval_picts/master/BactrianX_dataset.jpg" width="950" align="center"> </h3> ### Training Parameters The code for training the model is provided in our [github](https://github.com/mbzuai-nlp/Bactrian-X), which is adapted from [Alpaca-LoRA](https://github.com/tloen/alpaca-lora). This version of the weights was trained with the following hyperparameters: - Epochs: 8 - Batch size: 128 - Cutoff length: 512 - Learning rate: 3e-4 - Lora _r_: 16 - Lora target modules: q_proj, v_proj, That is: ``` python finetune.py \ --base_model='decapoda-research/llama-7b-hf' \ --num_epochs=8 \ --cutoff_len=1024 \ --group_by_length \ --output_dir='./bactrian-ro-7b-lora' \ --lora_target_modules='[q_proj,v_proj]' \ --lora_r=16 \ --micro_batch_size=32 ``` Instructions for running it can be found at https://github.com/MBZUAI-nlp/Bactrian-X. ### Discussion of Biases (1) Translation bias; (2) Potential English-culture bias in the translated dataset. ### Citation Information ``` @misc{li2023bactrianx, title={Bactrian-X : A Multilingual Replicable Instruction-Following Model with Low-Rank Adaptation}, author={Haonan Li and Fajri Koto and Minghao Wu and Alham Fikri Aji and Timothy Baldwin}, year={2023}, eprint={2305.15011}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
lewtun/test-tgi-main
lewtun
2023-05-30T12:21:06Z
108
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-05-30T12:20:47Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilgpt2-ift results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt2-ift This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 44.2744 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - training_steps: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 41.4703 | 0.0 | 1 | 44.2744 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
haonan-li/bactrian-pt-llama-7b-lora
haonan-li
2023-05-30T12:20:29Z
0
0
null
[ "arxiv:2305.15011", "license:mit", "region:us" ]
null
2023-05-30T12:19:42Z
--- license: mit --- This repo contains a low-rank adapter (LoRA) for LLaMA-7b fit on the [Stanford-Alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca) and [databricks-dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data) data in Portuguese. ### Dataset Creation 1. English Instructions: The English instuctions are obtained from [alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca), and [dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data). 2. Instruction Translation: The instructions (and inputs) are translated into the target languages using Google Translation API (conducted on April 2023). 3. Output Generation: We generate output from `gpt-3.5-turbo` for each language (conducted on April 2023). <h3 align="center"> <img src="https://raw.githubusercontent.com/fajri91/eval_picts/master/BactrianX_dataset.jpg" width="950" align="center"> </h3> ### Training Parameters The code for training the model is provided in our [github](https://github.com/mbzuai-nlp/Bactrian-X), which is adapted from [Alpaca-LoRA](https://github.com/tloen/alpaca-lora). This version of the weights was trained with the following hyperparameters: - Epochs: 8 - Batch size: 128 - Cutoff length: 512 - Learning rate: 3e-4 - Lora _r_: 16 - Lora target modules: q_proj, v_proj, That is: ``` python finetune.py \ --base_model='decapoda-research/llama-7b-hf' \ --num_epochs=8 \ --cutoff_len=1024 \ --group_by_length \ --output_dir='./bactrian-pt-7b-lora' \ --lora_target_modules='[q_proj,v_proj]' \ --lora_r=16 \ --micro_batch_size=32 ``` Instructions for running it can be found at https://github.com/MBZUAI-nlp/Bactrian-X. ### Discussion of Biases (1) Translation bias; (2) Potential English-culture bias in the translated dataset. ### Citation Information ``` @misc{li2023bactrianx, title={Bactrian-X : A Multilingual Replicable Instruction-Following Model with Low-Rank Adaptation}, author={Haonan Li and Fajri Koto and Minghao Wu and Alham Fikri Aji and Timothy Baldwin}, year={2023}, eprint={2305.15011}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
haonan-li/bactrian-fr-llama-7b-lora
haonan-li
2023-05-30T12:16:45Z
0
0
null
[ "arxiv:2305.15011", "license:mit", "region:us" ]
null
2023-05-30T12:15:55Z
--- license: mit --- This repo contains a low-rank adapter (LoRA) for LLaMA-7b fit on the [Stanford-Alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca) and [databricks-dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data) data in French. ### Dataset Creation 1. English Instructions: The English instuctions are obtained from [alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca), and [dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data). 2. Instruction Translation: The instructions (and inputs) are translated into the target languages using Google Translation API (conducted on April 2023). 3. Output Generation: We generate output from `gpt-3.5-turbo` for each language (conducted on April 2023). <h3 align="center"> <img src="https://raw.githubusercontent.com/fajri91/eval_picts/master/BactrianX_dataset.jpg" width="950" align="center"> </h3> ### Training Parameters The code for training the model is provided in our [github](https://github.com/mbzuai-nlp/Bactrian-X), which is adapted from [Alpaca-LoRA](https://github.com/tloen/alpaca-lora). This version of the weights was trained with the following hyperparameters: - Epochs: 8 - Batch size: 128 - Cutoff length: 512 - Learning rate: 3e-4 - Lora _r_: 16 - Lora target modules: q_proj, v_proj, That is: ``` python finetune.py \ --base_model='decapoda-research/llama-7b-hf' \ --num_epochs=8 \ --cutoff_len=1024 \ --group_by_length \ --output_dir='./bactrian-fr-7b-lora' \ --lora_target_modules='[q_proj,v_proj]' \ --lora_r=16 \ --micro_batch_size=32 ``` Instructions for running it can be found at https://github.com/MBZUAI-nlp/Bactrian-X. ### Discussion of Biases (1) Translation bias; (2) Potential English-culture bias in the translated dataset. ### Citation Information ``` @misc{li2023bactrianx, title={Bactrian-X : A Multilingual Replicable Instruction-Following Model with Low-Rank Adaptation}, author={Haonan Li and Fajri Koto and Minghao Wu and Alham Fikri Aji and Timothy Baldwin}, year={2023}, eprint={2305.15011}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
haonan-li/bactrian-en-llama-7b-lora
haonan-li
2023-05-30T12:15:07Z
0
0
null
[ "arxiv:2305.15011", "license:mit", "region:us" ]
null
2023-05-30T12:14:21Z
--- license: mit --- This repo contains a low-rank adapter (LoRA) for LLaMA-7b fit on the [Stanford-Alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca) and [databricks-dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data) data in English. ### Dataset Creation 1. English Instructions: The English instuctions are obtained from [alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca), and [dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data). 2. Instruction Translation: The instructions (and inputs) are translated into the target languages using Google Translation API (conducted on April 2023). 3. Output Generation: We generate output from `gpt-3.5-turbo` for each language (conducted on April 2023). <h3 align="center"> <img src="https://raw.githubusercontent.com/fajri91/eval_picts/master/BactrianX_dataset.jpg" width="950" align="center"> </h3> ### Training Parameters The code for training the model is provided in our [github](https://github.com/mbzuai-nlp/Bactrian-X), which is adapted from [Alpaca-LoRA](https://github.com/tloen/alpaca-lora). This version of the weights was trained with the following hyperparameters: - Epochs: 8 - Batch size: 128 - Cutoff length: 512 - Learning rate: 3e-4 - Lora _r_: 16 - Lora target modules: q_proj, v_proj, That is: ``` python finetune.py \ --base_model='decapoda-research/llama-7b-hf' \ --num_epochs=8 \ --cutoff_len=1024 \ --group_by_length \ --output_dir='./bactrian-en-7b-lora' \ --lora_target_modules='[q_proj,v_proj]' \ --lora_r=16 \ --micro_batch_size=32 ``` Instructions for running it can be found at https://github.com/MBZUAI-nlp/Bactrian-X. ### Discussion of Biases (1) Translation bias; (2) Potential English-culture bias in the translated dataset. ### Citation Information ``` @misc{li2023bactrianx, title={Bactrian-X : A Multilingual Replicable Instruction-Following Model with Low-Rank Adaptation}, author={Haonan Li and Fajri Koto and Minghao Wu and Alham Fikri Aji and Timothy Baldwin}, year={2023}, eprint={2305.15011}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
haonan-li/bactrian-de-llama-7b-lora
haonan-li
2023-05-30T12:14:21Z
0
0
null
[ "arxiv:2305.15011", "license:mit", "region:us" ]
null
2023-05-30T12:13:55Z
--- license: mit --- This repo contains a low-rank adapter (LoRA) for LLaMA-7b fit on the [Stanford-Alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca) and [databricks-dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data) data in German. ### Dataset Creation 1. English Instructions: The English instuctions are obtained from [alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca), and [dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data). 2. Instruction Translation: The instructions (and inputs) are translated into the target languages using Google Translation API (conducted on April 2023). 3. Output Generation: We generate output from `gpt-3.5-turbo` for each language (conducted on April 2023). <h3 align="center"> <img src="https://raw.githubusercontent.com/fajri91/eval_picts/master/BactrianX_dataset.jpg" width="950" align="center"> </h3> ### Training Parameters The code for training the model is provided in our [github](https://github.com/mbzuai-nlp/Bactrian-X), which is adapted from [Alpaca-LoRA](https://github.com/tloen/alpaca-lora). This version of the weights was trained with the following hyperparameters: - Epochs: 8 - Batch size: 128 - Cutoff length: 512 - Learning rate: 3e-4 - Lora _r_: 16 - Lora target modules: q_proj, v_proj, That is: ``` python finetune.py \ --base_model='decapoda-research/llama-7b-hf' \ --num_epochs=8 \ --cutoff_len=1024 \ --group_by_length \ --output_dir='./bactrian-de-7b-lora' \ --lora_target_modules='[q_proj,v_proj]' \ --lora_r=16 \ --micro_batch_size=32 ``` Instructions for running it can be found at https://github.com/MBZUAI-nlp/Bactrian-X. ### Discussion of Biases (1) Translation bias; (2) Potential English-culture bias in the translated dataset. ### Citation Information ``` @misc{li2023bactrianx, title={Bactrian-X : A Multilingual Replicable Instruction-Following Model with Low-Rank Adaptation}, author={Haonan Li and Fajri Koto and Minghao Wu and Alham Fikri Aji and Timothy Baldwin}, year={2023}, eprint={2305.15011}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
OFA-Sys/expertllama-7b-delta
OFA-Sys
2023-05-30T12:13:12Z
12
3
transformers
[ "transformers", "pytorch", "llama", "text-generation", "arxiv:2305.14688", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-05-30T10:26:57Z
--- license: cc-by-nc-4.0 --- Please refer to our github repo and paper. https://github.com/OFA-Sys/ExpertLLaMA [ExpertPrompting: Instructing Large Language Models to be Distinguished Experts](https://arxiv.org/abs/2305.14688)
theodotus/tts_uk_fastpitch
theodotus
2023-05-30T12:12:13Z
6
2
nemo
[ "nemo", "arxiv:2006.06873", "arxiv:2108.10447", "license:mit", "region:us" ]
null
2023-05-29T11:37:58Z
--- license: mit --- # NVIDIA FastPitch (uk-UA) <style> img { display: inline; } </style> | [![Model architecture](https://img.shields.io/badge/Model_Arch-FastPitch--Transformer-lightgrey#model-badge)](#model-architecture) | [![Model size](https://img.shields.io/badge/Params-45M-lightgrey#model-badge)](#model-architecture) | [![Language](https://img.shields.io/badge/Language-uk--UA-lightgrey#model-badge)](#datasets)| FastPitch [1] is a fully-parallel transformer architecture with prosody control over pitch and individual phoneme duration. Additionally, it uses an unsupervised speech-text aligner [2]. See the [model architecture](#model-architecture) section for complete architecture details. ## Usage The model is available for use in the NeMo toolkit [3] and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset. To train, fine-tune or play with the model you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend you install it after you've installed the latest PyTorch version. ``` pip install nemo_toolkit['all'] ``` ### Automatically instantiate the model Note: This model generates only spectrograms and a vocoder is needed to convert the spectrograms to waveforms. In this example HiFiGAN is used. ```python # Load Tokenizer from huggingface_hub import hf_hub_download hf_hub_download( repo_id="theodotus/tts_uk_fastpitch", filename="tokenizer.py", local_dir = "./" ) # Load FastPitch from nemo.collections.tts.models import FastPitchModel spec_generator = FastPitchModel.from_pretrained("theodotus/tts_uk_fastpitch") spec_generator.eval() # Load vocoder from nemo.collections.tts.models import HifiGanModel vocoder = HifiGanModel.from_pretrained(model_name="theodotus/tts_uk_hifigan") vocoder.eval() ``` ### Generate audio ```python # Speaker # 0 - Mykyta # 1 - Lada # 2 - Tetiana speaker = 0 import soundfile as sf text = "К+ам'ян+ець-Под+ільський - м+істо в Хмельн+ицькій +області Укра+їни, ц+ентр Кам'ян+ець-Под+ільської міськ+ої об'+єднаної територі+альної гром+ади +і Кам'ян+ець-Под+ільського рай+ону." parsed = spec_generator.parse(text) spectrogram = spec_generator.generate_spectrogram(tokens=parsed, speaker=speaker) audio = vocoder.convert_spectrogram_to_audio(spec=spectrogram) ``` ### Save the generated audio file ```python # Save the audio to disk in a file called speech.wav sf.write("speech.wav", audio.to('cpu').detach().numpy()[0], 22050) ``` ### Input This model accepts batches of text. ### Output This model generates mel spectrograms. ## Model Architecture FastPitch is a fully-parallel text-to-speech model based on FastSpeech, conditioned on fundamental frequency contours. The model predicts pitch contours during inference. By altering these predictions, the generated speech can be more expressive, better match the semantic of the utterance, and in the end more engaging to the listener. FastPitch is based on a fully-parallel Transformer architecture, with a much higher real-time factor than Tacotron2 for the mel-spectrogram synthesis of a typical utterance. It uses an unsupervised speech-text aligner. ## Training The NeMo toolkit [3] was used for training the models for 1000 epochs. These model are trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/tts/fastpitch.py) and this [base config](https://github.com/NVIDIA/NeMo/blob/main/examples/tts/conf/fastpitch_align_v1.05.yaml). 
### Datasets This model is trained on LJSpeech sampled at 22050Hz, and has been tested on generating female English voices with an American accent. ## Performance No performance information is available at this time. ## Limitations This checkpoint only works well with vocoders that were trained on 22050Hz data. Otherwise, the generated audio may be scratchy or choppy-sounding. ## References - [1] [FastPitch: Parallel Text-to-speech with Pitch Prediction](https://arxiv.org/abs/2006.06873) - [2] [One TTS Alignment To Rule Them All](https://arxiv.org/abs/2108.10447) - [3] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)
tmpusr/ppo-LunarLander-v2
tmpusr
2023-05-30T12:05:47Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-05-30T12:05:28Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 257.46 +/- 15.32 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
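A filled-in version of the usage stub above might look like the following; the checkpoint filename is an assumption based on the usual naming convention for these repos, so check the repository's file list:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is assumed; adjust it to the .zip actually stored in the repo.
checkpoint = load_from_hub(repo_id="tmpusr/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

eval_env = make_vec_env("LunarLander-v2", n_envs=1)
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```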
Clannad06/sd-webui
Clannad06
2023-05-30T11:59:30Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-30T11:59:30Z
--- license: creativeml-openrail-m ---
HReynaud/EchoDiffusionWeights
HReynaud
2023-05-30T11:52:19Z
0
2
null
[ "license:gpl-2.0", "region:us" ]
null
2023-05-25T09:15:41Z
--- license: gpl-2.0 --- This repository contains 3 models, corresponding to the ones described in *Feature-Conditioned Cascaded Video Diffusion Models for Precise Echocardiogram Synthesis*. Hadrien Reynaud, Mengyun Qiao, Mischa Dombrowski, Thomas Day, Reza Razavi, Alberto Gomez, Paul Leeson and Bernhard Kainz. MICCAI 2023. To see all the details, refer to the corresponding github repository: [https://github.com/HReynaud/EchoDiffusion](https://github.com/HReynaud/EchoDiffusion). The available models are: * 1SCM: Single Stage Cascade Model * 2SCM: Two Stage Cascade Model * 4SCM: Four Stage Cascade Model All weight files contain the weights of all the diffusion models in the cascade.<br/> To see a demo of the 1SCM, head to [https://huggingface.co/spaces/HReynaud/echocardiogram-video-diffusion](https://huggingface.co/spaces/HReynaud/echocardiogram-video-diffusion). In each model folder, you will find: - `config.yaml`: the configuration file associated with the model. It contains the hyperparameters of the model. - `merged.pt`: the weight file containing all the models in the cascade for that model (e.g., 4 models for the 4SCM).
GeneZC/bert-chinese-minilm-3L-384H
GeneZC
2023-05-30T11:48:56Z
33
0
transformers
[ "transformers", "pytorch", "bert", "dataset:HJHGJGHHG/ClueCorpusSmall", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2023-05-30T11:46:14Z
--- license: apache-2.0 datasets: - HJHGJGHHG/ClueCorpusSmall --- # Model details `minilm-3L-384H` distilled from `bert-base-chinese` on `ClueCorpusSmall`.
CleverShovel/vicuna-7b-1.1-sharded-bf16
CleverShovel
2023-05-30T11:48:49Z
4
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-05-29T21:31:19Z
Sharded version of [eachadea/vicuna-7b-1.1](https://huggingface.co/eachadea/vicuna-7b-1.1). It can be loaded on the free tier of Google Colab.
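A minimal loading sketch consistent with that claim, assuming `accelerate` is installed for `device_map="auto"` and using a Vicuna v1.1-style prompt as an illustration:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CleverShovel/vicuna-7b-1.1-sharded-bf16"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # cast to fp16 at load time; the repo stores bf16 shards
    device_map="auto",          # lets accelerate spread layers across GPU and CPU RAM
    low_cpu_mem_usage=True,
)

prompt = "USER: What is the capital of France? ASSISTANT:"  # assumed Vicuna v1.1 prompt style
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```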
vind/a2c-AntBulletEnv-v0
vind
2023-05-30T11:38:54Z
1
0
stable-baselines3
[ "stable-baselines3", "AntBulletEnv-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-05-30T11:37:53Z
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 metrics: - type: mean_reward value: 1625.64 +/- 176.51 name: mean_reward verified: false --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of a **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
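A filled-in version of the usage stub above might look like this; the checkpoint filename is an assumption, `pybullet` must be installed so that `AntBulletEnv-v0` is registered, and any `VecNormalize` statistics shipped with the repo should also be loaded before evaluating.

```python
import pybullet_envs  # noqa: F401  (registers AntBulletEnv-v0; requires pybullet)
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is assumed; adjust it to the .zip actually stored in the repo.
checkpoint = load_from_hub(repo_id="vind/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)

eval_env = make_vec_env("AntBulletEnv-v0", n_envs=1)
# If the repo also ships vec_normalize.pkl, wrap eval_env with VecNormalize.load(...) first.
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```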
86Jeremy/Thai
86Jeremy
2023-05-30T11:25:22Z
0
0
null
[ "license:bigscience-openrail-m", "region:us" ]
null
2023-05-30T11:25:22Z
--- license: bigscience-openrail-m ---
GeneZC/bert-large-minilm-6L-384H
GeneZC
2023-05-30T11:22:48Z
31
0
transformers
[ "transformers", "pytorch", "bert", "dataset:wikipedia", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2023-05-30T11:18:44Z
--- license: apache-2.0 datasets: - wikipedia --- # Model details `minilm-6L-384H` distilled from `bert-large-uncased` on `Wikipedia`.
GeneZC/bert-base-minilm-3L-384H
GeneZC
2023-05-30T11:13:56Z
34
0
transformers
[ "transformers", "pytorch", "bert", "dataset:wikipedia", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2023-05-30T11:11:36Z
--- license: apache-2.0 datasets: - wikipedia --- # Model details `minilm-3L-384H` distilled from `bert-base-uncased` on `Wikipedia`.
Yuuki321/TiaLover
Yuuki321
2023-05-30T11:13:13Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-17T05:41:48Z
--- license: creativeml-openrail-m ---
GeneZC/bert-base-minilm-6L-384H
GeneZC
2023-05-30T11:08:05Z
36
0
transformers
[ "transformers", "pytorch", "bert", "dataset:wikipedia", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2023-05-30T11:03:29Z
--- license: apache-2.0 datasets: - wikipedia --- # Model details `minilm-6L-384H` distilled from `bert-base-uncased` on `Wikipedia`.
ankurgup510/metaorg
ankurgup510
2023-05-30T10:47:31Z
0
0
null
[ "license:bigscience-openrail-m", "region:us" ]
null
2023-05-30T09:46:46Z
--- license: bigscience-openrail-m ---
Knair/Cayang
Knair
2023-05-30T10:45:49Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-30T10:43:13Z
--- license: creativeml-openrail-m ---
LOGQS/Taxi-v3
LOGQS
2023-05-30T10:21:32Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-05-30T10:21:28Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="LOGQS/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Gladiaio/mpt-7b-qlora
Gladiaio
2023-05-30T10:06:06Z
4
5
transformers
[ "transformers", "pytorch", "mpt", "text-generation", "Composer", "MosaicML", "llm-foundry", "StreamingDatasets", "custom_code", "dataset:mc4", "dataset:c4", "dataset:togethercomputer/RedPajama-Data-1T", "dataset:bigcode/the-stack", "dataset:allenai/s2orc", "arxiv:2108.12409", "arxiv:2302.13971", "arxiv:2205.14135", "arxiv:2010.04245", "arxiv:1909.08053", "arxiv:2302.06675", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-05-30T09:47:40Z
--- license: apache-2.0 tags: - Composer - MosaicML - llm-foundry - StreamingDatasets datasets: - mc4 - c4 - togethercomputer/RedPajama-Data-1T - bigcode/the-stack - allenai/s2orc inference: false --- # MPT-7B MPT-7B is a decoder-style transformer pretrained from scratch on 1T tokens of English text and code. This model was trained by [MosaicML](https://www.mosaicml.com). MPT-7B is part of the family of MosaicPretrainedTransformer (MPT) models, which use a modified transformer architecture optimized for efficient training and inference. These architectural changes include performance-optimized layer implementations and the elimination of context length limits by replacing positional embeddings with Attention with Linear Biases ([ALiBi](https://arxiv.org/abs/2108.12409)). Thanks to these modifications, MPT models can be trained with high throughput efficiency and stable convergence. MPT models can also be served efficiently with both standard HuggingFace pipelines and NVIDIA's [FasterTransformer](https://github.com/NVIDIA/FasterTransformer). This model uses the MosaicML LLM codebase, which can be found in the [llm-foundry repository](https://github.com/mosaicml/llm-foundry). It was trained by MosaicML’s NLP team on the [MosaicML platform](https://www.mosaicml.com/training) for LLM pretraining, finetuning, and inference. ### How is this model different? MPT-7B is * **Licensed for the possibility of commercial use** (unlike [LLaMA](https://arxiv.org/abs/2302.13971)). * **Trained on a large amount of data** (1T tokens like [LLaMA](https://arxiv.org/abs/2302.13971) vs. 300B for [Pythia](https://github.com/EleutherAI/pythia), 300B for [OpenLLaMA](https://github.com/openlm-research/open_llama), and 800B for [StableLM](https://github.com/Stability-AI/StableLM)). * **Prepared to handle extremely long inputs** thanks to [ALiBi](https://arxiv.org/abs/2108.12409) (we finetuned [MPT-7B-StoryWriter-65k+](https://huggingface.co/mosaicml/mpt-7b-storywriter) on up to 65k inputs and can handle up to 84k vs. 2k-4k for other open source models). * **Capable of fast training and inference** (via [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf) and [FasterTransformer](https://github.com/NVIDIA/FasterTransformer)) * **Equipped with highly efficient open-source training code** via the [llm-foundry repository](https://github.com/mosaicml/llm-foundry) ### Models finetuned off MPT-7B: The following models are finetuned on MPT-7B: * [MPT-7B-StoryWriter-65k+](https://huggingface.co/mosaicml/mpt-7b-storywriter): a model designed to read and write fictional stories with super long context lengths. Built by finetuning MPT-7B with a context length of 65k tokens on a filtered fiction subset of the [books3 dataset](https://huggingface.co/datasets/the_pile_books3). At inference time, thanks to [ALiBi](https://arxiv.org/abs/2108.12409), MPT-7B-StoryWriter-65k+ can extrapolate even beyond 65k tokens. We demonstrate generations as long as 80k tokens on a single A100-80GB GPU in our [blogpost](www.mosaicml.com/blog/mpt-7b). * License: Apache 2.0 * [MPT-7B-Instruct](https://huggingface.co/mosaicml/mpt-7b-instruct): a model for short-form instruction following. Built by finetuning MPT-7B on a [dataset](https://huggingface.co/datasets/mosaicml/dolly_hhrlhf) we also release, derived from the [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and the [Anthropic Helpful and Harmless (HH-RLHF)](https://huggingface.co/datasets/Anthropic/hh-rlhf) datasets. 
* License: _CC-By-SA-3.0_ * [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-instruct) * [MPT-7B-Chat](https://huggingface.co/mosaicml/mpt-7b-chat): a chatbot-like model for dialogue generation. Built by finetuning MPT-7B on the [ShareGPT-Vicuna](https://huggingface.co/datasets/jeffwan/sharegpt_vicuna), [HC3](https://huggingface.co/datasets/Hello-SimpleAI/HC3), [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca), [HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf), and [Evol-Instruct](https://huggingface.co/datasets/victor123/evol_instruct_70k) datasets. * License: _CC-By-NC-SA-4.0_ * [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-chat) ## Model Date May 5, 2023 ## Model License Apache-2.0 ## Documentation * [Blog post: Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs](https://www.mosaicml.com/blog/mpt-7b) * [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/) * Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)! ## How to Use This model is best used with the MosaicML [llm-foundry repository](https://github.com/mosaicml/llm-foundry) for training and finetuning. ```python import transformers model = transformers.AutoModelForCausalLM.from_pretrained( 'mosaicml/mpt-7b', trust_remote_code=True ) ``` Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method. This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package. `MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more. To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model with `attn_impl='triton'` and move the model to `bfloat16`: ```python config = transformers.AutoConfig.from_pretrained( 'mosaicml/mpt-7b', trust_remote_code=True ) config.attn_config['attn_impl'] = 'triton' model = transformers.AutoModelForCausalLM.from_pretrained( 'mosaicml/mpt-7b', config=config, torch_dtype=torch.bfloat16, trust_remote_code=True ) model.to(device='cuda:0') ``` Although the model was trained with a sequence length of 2048, ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example: ```python config = transformers.AutoConfig.from_pretrained( 'mosaicml/mpt-7b', trust_remote_code=True ) config.update({"max_seq_len": 4096}) model = transformers.AutoModelForCausalLM.from_pretrained( 'mosaicml/mpt-7b', config=config, trust_remote_code=True ) ``` This model was trained with the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer. ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b") ``` ## Model Description The architecture is a modification of a standard decoder-only transformer. 
The model has been modified from a standard transformer in the following ways: * It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf) * It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings * It does not use biases | Hyperparameter | Value | |----------------|-------| |n_parameters | 6.7B | |n_layers | 32 | | n_heads | 32 | | d_model | 4096 | | vocab size | 50432 | | sequence length | 2048 | ## Training Data ### Streaming Datasets Data was formatted using the MosaicML [StreamingDataset](https://github.com/mosaicml/streaming) library to host our data in object storage and efficiently stream it to our compute cluster during training. StreamingDataset obviates the need to download the whole dataset before starting training, and allows instant resumption of training from any point in the dataset. ### Data Mix The model was trained for 1T tokens (with batch size 1760 and sequence length 2048). It was trained on the following data mix: | Data Source | Number of Tokens in Source | Proportion | Effective Number of Tokens | Epochs | |-------------|----------------------------|------------|----------------------------|--------| | mC4 3.1.0 - English | 417.99 B | 0.33 | 330 B | 0.14 | | C4 - English - SemDedup 80% | 100.42 B | 0.299 | 299 B | 2.98 | | RedPajama - CommonCrawl | 878.45 B | 0.1 | 100 B | 0.11 | | The Stack - Selected Languages | 463.78 B | 0.1 | 100 B | 0.22 | | RedPajama - Wikipedia - En | 4.87 B | 0.04 | 40 B | 8.21 | | The Stack - Markdown | 107.07 B | 0.035 | 35 B | 0.33 | | S2ORC | 48.85 B | 0.033 | 33 B | 0.68 | | RedPajama - Books | 26.02 B | 0.03 | 30B | 1.15 | | RedPajama - arXiv | 28.10 B | 0.019 | 19 B | 0.68 | | RedPajama - StackExchange | 20.54 B | 0.014 | 14 B |0.68 | Samples for each batch were selected from one of the datasets with the probability specified above. The examples were shuffled within each dataset, and each example was constructed from as many sequences from that dataset as were necessary to fill the 2048 sequence length. The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer. This BPE tokenizer has a number of desirable characteristics, most of which are relevant for tokenizing code: (1) It was trained on a diverse mix of data that includes code (The Pile) (2) It applies consistent space delimitation, unlike the GPT2 tokenizer which tokenizes inconsistently depending on the presence of prefix spaces (3) It contains tokens for repeated space characters, which allows superior compression of text with large amounts of repeated space characters. The model vocabulary size of 50432 was set to be a multiple of 128 (as in [MEGATRON-LM](https://arxiv.org/abs/1909.08053)), model flop utilization (MFU) increased by up to four percentage points. ### Training Configuration This model was trained on 440 A100-40GBs for about 9.5 days using the [MosaicML Platform](https://www.mosaicml.com/platform). The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the [LION](https://arxiv.org/abs/2302.06675) optimizer. ## Limitations and Biases _The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_ MPT-7B (Base) is **not** intended for deployment without finetuning. It should not be used for human-facing interactions without further guardrails and user consent. 
MPT-7B can produce factually incorrect output, and should not be relied on to produce factually accurate information. MPT-7B was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs. ## MosaicML Platform If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b). ## Disclaimer The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes. ## Citation Please cite this model using the following format: ``` @online{MosaicML2023Introducing, author = {MosaicML NLP Team}, title = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs}, year = {2023}, url = {www.mosaicml.com/blog/mpt-7b}, note = {Accessed: 2023-03-28}, % change this date urldate = {2023-03-28} % change this date } ```
neverLife/nllb-200-distilled-600M-ja-zh
neverLife
2023-05-30T10:05:39Z
121
7
transformers
[ "transformers", "pytorch", "m2m_100", "text2text-generation", "translation", "ja", "zh", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-05-15T12:22:15Z
--- language: - ja - zh metrics: - bleu pipeline_tag: translation --- Results after 1 epoch. ## Results The model achieves the following results on the evaluation set: - Loss: 1.3042 - Bleu: 55.834 - Gen Len: 17.2465 ## Usage demo ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM model_path = "neverLife/nllb-200-distilled-600M-ja-zh" model = AutoModelForSeq2SeqLM.from_pretrained(model_path) ja = "ぜんぜん田舎に来た気がしないんだが……。" tokenizer = AutoTokenizer.from_pretrained(model_path, src_lang="jpn_Jpan", tgt_lang="zho_Hans") input_ids = tokenizer.encode(ja, max_length=128, padding=True, return_tensors='pt') outputs = model.generate(input_ids, num_beams=4, max_new_tokens=128) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ## Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu117 - Datasets 2.11.0 - Tokenizers 0.13.3
amjadfqs/swin-base-patch4-window7-224-in22k-finetuned-brain-tumor-final_13
amjadfqs
2023-05-30T10:00:27Z
211
1
transformers
[ "transformers", "pytorch", "tensorboard", "swin", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-05-29T18:34:37Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy - precision model-index: - name: swin-base-patch4-window7-224-in22k-finetuned-brain-tumor-final_13 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.9783974862529458 - name: Precision type: precision value: 0.9787264477445259 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-base-patch4-window7-224-in22k-finetuned-brain-tumor-final_13 This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224-in22k](https://huggingface.co/microsoft/swin-base-patch4-window7-224-in22k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0796 - Accuracy: 0.9784 - F1 Score: 0.9786 - Precision: 0.9787 - Sensitivity: 0.9790 - Specificity: 0.9946 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 100 - eval_batch_size: 100 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 400 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Score | Precision | Sensitivity | Specificity | |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:---------:|:-----------:|:-----------:| | 1.2276 | 0.99 | 19 | 0.5721 | 0.7891 | 0.7955 | 0.8401 | 0.7933 | 0.9454 | | 0.3873 | 1.97 | 38 | 0.2399 | 0.9195 | 0.9195 | 0.9207 | 0.9224 | 0.9796 | | 0.1287 | 2.96 | 57 | 0.2204 | 0.9230 | 0.9237 | 0.9275 | 0.9261 | 0.9806 | | 0.0882 | 4.0 | 77 | 0.1026 | 0.9647 | 0.9649 | 0.9647 | 0.9656 | 0.9911 | | 0.0605 | 4.99 | 96 | 0.0898 | 0.9678 | 0.9683 | 0.9686 | 0.9685 | 0.9919 | | 0.0439 | 5.97 | 115 | 0.0853 | 0.9741 | 0.9746 | 0.9748 | 0.9747 | 0.9935 | | 0.0275 | 6.96 | 134 | 0.0941 | 0.9721 | 0.9724 | 0.9730 | 0.9730 | 0.9930 | | 0.0186 | 8.0 | 154 | 0.0803 | 0.9764 | 0.9767 | 0.9770 | 0.9773 | 0.9941 | | 0.0165 | 8.99 | 173 | 0.0740 | 0.9780 | 0.9782 | 0.9782 | 0.9786 | 0.9945 | | 0.0106 | 9.87 | 190 | 0.0796 | 0.9784 | 0.9786 | 0.9787 | 0.9790 | 0.9946 | ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
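For inference, the fine-tuned checkpoint should work with the standard image-classification pipeline; the sketch below uses a placeholder image path:

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="amjadfqs/swin-base-patch4-window7-224-in22k-finetuned-brain-tumor-final_13",
)

# "scan.png" is a placeholder path to a brain MRI slice.
for prediction in classifier("scan.png", top_k=4):
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```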
benjamin/gpt2-wechsel-french
benjamin
2023-05-30T09:55:38Z
143
3
transformers
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "fr", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: fr license: mit --- # gpt2-wechsel-french Model trained with WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models. See the code here: https://github.com/CPJKU/wechsel And the paper here: https://aclanthology.org/2022.naacl-main.293/ ## Performance ### RoBERTa | Model | NLI Score | NER Score | Avg Score | |---|---|---|---| | `roberta-base-wechsel-french` | **82.43** | **90.88** | **86.65** | | `camembert-base` | 80.88 | 90.26 | 85.57 | | Model | NLI Score | NER Score | Avg Score | |---|---|---|---| | `roberta-base-wechsel-german` | **81.79** | **89.72** | **85.76** | | `deepset/gbert-base` | 78.64 | 89.46 | 84.05 | | Model | NLI Score | NER Score | Avg Score | |---|---|---|---| | `roberta-base-wechsel-chinese` | **78.32** | 80.55 | **79.44** | | `bert-base-chinese` | 76.55 | **82.05** | 79.30 | | Model | NLI Score | NER Score | Avg Score | |---|---|---|---| | `roberta-base-wechsel-swahili` | **75.05** | **87.39** | **81.22** | | `xlm-roberta-base` | 69.18 | 87.37 | 78.28 | ### GPT2 | Model | PPL | |---|---| | `gpt2-wechsel-french` | **19.71** | | `gpt2` (retrained from scratch) | 20.47 | | Model | PPL | |---|---| | `gpt2-wechsel-german` | **26.8** | | `gpt2` (retrained from scratch) | 27.63 | | Model | PPL | |---|---| | `gpt2-wechsel-chinese` | **51.97** | | `gpt2` (retrained from scratch) | 52.98 | | Model | PPL | |---|---| | `gpt2-wechsel-swahili` | **10.14** | | `gpt2` (retrained from scratch) | 10.58 | See our paper for details. ## Citation Please cite WECHSEL as ``` @inproceedings{minixhofer-etal-2022-wechsel, title = "{WECHSEL}: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models", author = "Minixhofer, Benjamin and Paischer, Fabian and Rekabsaz, Navid", booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", month = jul, year = "2022", address = "Seattle, United States", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.naacl-main.293", pages = "3992--4006", abstract = "Large pretrained language models (LMs) have become the central building block of many NLP applications. Training these models requires ever more computational resources and most of the existing models are trained on English text only. It is exceedingly expensive to train these models in other languages. To alleviate this problem, we introduce a novel method {--} called WECHSEL {--} to efficiently and effectively transfer pretrained LMs to new languages. WECHSEL can be applied to any model which uses subword-based tokenization and learns an embedding for each subword. The tokenizer of the source model (in English) is replaced with a tokenizer in the target language and token embeddings are initialized such that they are semantically similar to the English tokens by utilizing multilingual static word embeddings covering English and the target language. We use WECHSEL to transfer the English RoBERTa and GPT-2 models to four languages (French, German, Chinese and Swahili). We also study the benefits of our method on very low-resource languages. WECHSEL improves over proposed methods for cross-lingual parameter transfer and outperforms models of comparable size trained from scratch with up to 64x less training effort. Our method makes training large language models for new languages more accessible and less damaging to the environment. 
We make our code and models publicly available.", } ```
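The card reports perplexity but shows no usage snippet; a minimal generation sketch (the French prompt is only an illustration, not taken from the card) could be:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="benjamin/gpt2-wechsel-french")
# Example prompt chosen purely for illustration.
print(generator("Paris est", max_length=50, do_sample=True)[0]["generated_text"])
```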
benjamin/gpt2-wechsel-swahili
benjamin
2023-05-30T09:55:30Z
119
0
transformers
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "sw", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: sw license: mit --- # gpt2-wechsel-swahili Model trained with WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models. See the code here: https://github.com/CPJKU/wechsel And the paper here: https://aclanthology.org/2022.naacl-main.293/ ## Performance ### RoBERTa | Model | NLI Score | NER Score | Avg Score | |---|---|---|---| | `roberta-base-wechsel-french` | **82.43** | **90.88** | **86.65** | | `camembert-base` | 80.88 | 90.26 | 85.57 | | Model | NLI Score | NER Score | Avg Score | |---|---|---|---| | `roberta-base-wechsel-german` | **81.79** | **89.72** | **85.76** | | `deepset/gbert-base` | 78.64 | 89.46 | 84.05 | | Model | NLI Score | NER Score | Avg Score | |---|---|---|---| | `roberta-base-wechsel-chinese` | **78.32** | 80.55 | **79.44** | | `bert-base-chinese` | 76.55 | **82.05** | 79.30 | | Model | NLI Score | NER Score | Avg Score | |---|---|---|---| | `roberta-base-wechsel-swahili` | **75.05** | **87.39** | **81.22** | | `xlm-roberta-base` | 69.18 | 87.37 | 78.28 | ### GPT2 | Model | PPL | |---|---| | `gpt2-wechsel-french` | **19.71** | | `gpt2` (retrained from scratch) | 20.47 | | Model | PPL | |---|---| | `gpt2-wechsel-german` | **26.8** | | `gpt2` (retrained from scratch) | 27.63 | | Model | PPL | |---|---| | `gpt2-wechsel-chinese` | **51.97** | | `gpt2` (retrained from scratch) | 52.98 | | Model | PPL | |---|---| | `gpt2-wechsel-swahili` | **10.14** | | `gpt2` (retrained from scratch) | 10.58 | See our paper for details. ## Citation Please cite WECHSEL as ``` @inproceedings{minixhofer-etal-2022-wechsel, title = "{WECHSEL}: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models", author = "Minixhofer, Benjamin and Paischer, Fabian and Rekabsaz, Navid", booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", month = jul, year = "2022", address = "Seattle, United States", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.naacl-main.293", pages = "3992--4006", abstract = "Large pretrained language models (LMs) have become the central building block of many NLP applications. Training these models requires ever more computational resources and most of the existing models are trained on English text only. It is exceedingly expensive to train these models in other languages. To alleviate this problem, we introduce a novel method {--} called WECHSEL {--} to efficiently and effectively transfer pretrained LMs to new languages. WECHSEL can be applied to any model which uses subword-based tokenization and learns an embedding for each subword. The tokenizer of the source model (in English) is replaced with a tokenizer in the target language and token embeddings are initialized such that they are semantically similar to the English tokens by utilizing multilingual static word embeddings covering English and the target language. We use WECHSEL to transfer the English RoBERTa and GPT-2 models to four languages (French, German, Chinese and Swahili). We also study the benefits of our method on very low-resource languages. WECHSEL improves over proposed methods for cross-lingual parameter transfer and outperforms models of comparable size trained from scratch with up to 64x less training effort. Our method makes training large language models for new languages more accessible and less damaging to the environment. 
We make our code and models publicly available.", } ```
benjamin/gerpt2
benjamin
2023-05-30T09:54:59Z
304
5
transformers
[ "transformers", "pytorch", "tf", "jax", "safetensors", "gpt2", "text-generation", "de", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: de widget: - text: "In einer schockierenden Entdeckung fanden Wissenschaftler eine Herde Einhörner, die in einem abgelegenen, zuvor unerforschten Tal in den Anden lebten." license: mit --- # GerPT2 German large and small versions of GPT2: - https://huggingface.co/benjamin/gerpt2 - https://huggingface.co/benjamin/gerpt2-large See the [GPT2 model card](https://huggingface.co/gpt2) for considerations on limitations and bias. See the [GPT2 documentation](https://huggingface.co/transformers/model_doc/gpt2.html) for details on GPT2. ## Comparison to [dbmdz/german-gpt2](https://huggingface.co/dbmdz/german-gpt2) I evaluated both GerPT2-large and the other German GPT2, [dbmdz/german-gpt2](https://huggingface.co/dbmdz/german-gpt2) on the [CC-100](http://data.statmt.org/cc-100/) dataset and on the German Wikipedia: | | CC-100 (PPL) | Wikipedia (PPL) | |-------------------|--------------|-----------------| | dbmdz/german-gpt2 | 49.47 | 62.92 | | GerPT2 | 24.78 | 35.33 | | GerPT2-large | __16.08__ | __23.26__ | | | | | See the script `evaluate.py` in the [GerPT2 Github repository](https://github.com/bminixhofer/gerpt2) for the code. ## Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline tokenizer = AutoTokenizer.from_pretrained("benjamin/gerpt2-large") model = AutoModelForCausalLM.from_pretrained("benjamin/gerpt2-large") prompt = "<your prompt>" pipe = pipeline("text-generation", model=model, tokenizer=tokenizer) print(pipe(prompt)[0]["generated_text"]) ``` Also, two tricks might improve the generated text: ```python output = model.generate( # during training an EOS token was used to mark the beginning of each text # so it can help to insert it at the start torch.tensor( [tokenizer.eos_token_id] + tokenizer.encode(prompt) ).unsqueeze(0), do_sample=True, # try setting bad_words_ids=[[0]] to disallow generating an EOS token, without this the model is # prone to ending generation early because a significant number of texts from the training corpus # is quite short bad_words_ids=[[0]], max_length=max_length, )[0] print(tokenizer.decode(output)) ``` ## Training details GerPT2-large is trained on the entire German data from the [CC-100 Corpus](http://data.statmt.org/cc-100/) and weights were initialized from the [English GPT2 model](https://huggingface.co/gpt2-large). GerPT2-large was trained with: - a batch size of 256 - using OneCycle learning rate with a maximum of 5e-3 - with AdamW with a weight decay of 0.01 - for 2 epochs Training took roughly 12 days on 8 TPUv3 cores. To train GerPT2-large, follow these steps. Scripts are located in the [Github repository](https://github.com/bminixhofer/gerpt2): 0. Download and unzip training data from http://data.statmt.org/cc-100/. 1. Train a tokenizer using `prepare/train_tokenizer.py`. As training data for the tokenizer I used a random subset of 5% of the CC-100 data. 2. (optionally) generate a German input embedding matrix with `prepare/generate_aligned_wte.py`. This uses a neat trick to semantically map tokens from the English tokenizer to tokens from the German tokenizer using aligned word embeddings. E. 
g.: ``` ĠMinde -> Ġleast Ġjed -> Ġwhatsoever flughafen -> Air vermittlung -> employment teilung -> ignment ĠInterpretation -> Ġinterpretation Ġimport -> Ġimported hansa -> irl genehmigungen -> exempt ĠAuflist -> Ġlists Ġverschwunden -> Ġdisappeared ĠFlyers -> ĠFlyers Kanal -> Channel Ġlehr -> Ġteachers Ġnahelie -> Ġconvenient gener -> Generally mitarbeiter -> staff ``` This helps a lot on a trial run I did, although I wasn't able to do a full comparison due to budget and time constraints. To use this WTE matrix it can be passed via the `wte_path` to the training script. Credit to [this blogpost](https://medium.com/@pierre_guillou/faster-than-training-from-scratch-fine-tuning-the-english-gpt-2-in-any-language-with-hugging-f2ec05c98787) for the idea of initializing GPT2 from English weights. 3. Tokenize the corpus using `prepare/tokenize_text.py`. This generates files for train and validation tokens in JSON Lines format. 4. Run the training script `train.py`! `run.sh` shows how this was executed for the full run with config `configs/tpu_large.json`. ## License GerPT2 is licensed under the MIT License. ## Citing Please cite GerPT2 as follows: ``` @misc{Minixhofer_GerPT2_German_large_2020, author = {Minixhofer, Benjamin}, doi = {10.5281/zenodo.5509984}, month = {12}, title = {{GerPT2: German large and small versions of GPT2}}, url = {https://github.com/bminixhofer/gerpt2}, year = {2020} } ``` ## Acknowledgements Thanks to [Hugging Face](https://huggingface.co) for awesome tools and infrastructure. Huge thanks to [Artus Krohn-Grimberghe](https://twitter.com/artuskg) at [LYTiQ](https://www.lytiq.de/) for making this possible by sponsoring the resources used for training.
benjamin/gerpt2-large
benjamin
2023-05-30T09:54:02Z
511
9
transformers
[ "transformers", "pytorch", "jax", "safetensors", "gpt2", "text-generation", "de", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: de widget: - text: "In einer schockierenden Entdeckung fanden Wissenschaftler eine Herde Einhörner, die in einem abgelegenen, zuvor unerforschten Tal in den Anden lebten." license: mit --- # GerPT2 German large and small versions of GPT2: - https://huggingface.co/benjamin/gerpt2 - https://huggingface.co/benjamin/gerpt2-large See the [GPT2 model card](https://huggingface.co/gpt2) for considerations on limitations and bias. See the [GPT2 documentation](https://huggingface.co/transformers/model_doc/gpt2.html) for details on GPT2. ## Comparison to [dbmdz/german-gpt2](https://huggingface.co/dbmdz/german-gpt2) I evaluated both GerPT2-large and the other German GPT2, [dbmdz/german-gpt2](https://huggingface.co/dbmdz/german-gpt2) on the [CC-100](http://data.statmt.org/cc-100/) dataset and on the German Wikipedia: | | CC-100 (PPL) | Wikipedia (PPL) | |-------------------|--------------|-----------------| | dbmdz/german-gpt2 | 49.47 | 62.92 | | GerPT2 | 24.78 | 35.33 | | GerPT2-large | __16.08__ | __23.26__ | | | | | See the script `evaluate.py` in the [GerPT2 Github repository](https://github.com/bminixhofer/gerpt2) for the code. ## Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline tokenizer = AutoTokenizer.from_pretrained("benjamin/gerpt2-large") model = AutoModelForCausalLM.from_pretrained("benjamin/gerpt2-large") prompt = "<your prompt>" pipe = pipeline("text-generation", model=model, tokenizer=tokenizer) print(pipe(prompt)[0]["generated_text"]) ``` Also, two tricks might improve the generated text: ```python output = model.generate( # during training an EOS token was used to mark the beginning of each text # so it can help to insert it at the start torch.tensor( [tokenizer.eos_token_id] + tokenizer.encode(prompt) ).unsqueeze(0), do_sample=True, # try setting bad_words_ids=[[0]] to disallow generating an EOS token, without this the model is # prone to ending generation early because a significant number of texts from the training corpus # is quite short bad_words_ids=[[0]], max_length=max_length, )[0] print(tokenizer.decode(output)) ``` ## Training details GerPT2-large is trained on the entire German data from the [CC-100 Corpus](http://data.statmt.org/cc-100/) and weights were initialized from the [English GPT2 model](https://huggingface.co/gpt2-large). GerPT2-large was trained with: - a batch size of 256 - using OneCycle learning rate with a maximum of 5e-3 - with AdamW with a weight decay of 0.01 - for 2 epochs Training took roughly 12 days on 8 TPUv3 cores. To train GerPT2-large, follow these steps. Scripts are located in the [Github repository](https://github.com/bminixhofer/gerpt2): 0. Download and unzip training data from http://data.statmt.org/cc-100/. 1. Train a tokenizer using `prepare/train_tokenizer.py`. As training data for the tokenizer I used a random subset of 5% of the CC-100 data. 2. (optionally) generate a German input embedding matrix with `prepare/generate_aligned_wte.py`. This uses a neat trick to semantically map tokens from the English tokenizer to tokens from the German tokenizer using aligned word embeddings. E. 
g.: ``` ĠMinde -> Ġleast Ġjed -> Ġwhatsoever flughafen -> Air vermittlung -> employment teilung -> ignment ĠInterpretation -> Ġinterpretation Ġimport -> Ġimported hansa -> irl genehmigungen -> exempt ĠAuflist -> Ġlists Ġverschwunden -> Ġdisappeared ĠFlyers -> ĠFlyers Kanal -> Channel Ġlehr -> Ġteachers Ġnahelie -> Ġconvenient gener -> Generally mitarbeiter -> staff ``` This helps a lot on a trial run I did, although I wasn't able to do a full comparison due to budget and time constraints. To use this WTE matrix it can be passed via the `wte_path` to the training script. Credit to [this blogpost](https://medium.com/@pierre_guillou/faster-than-training-from-scratch-fine-tuning-the-english-gpt-2-in-any-language-with-hugging-f2ec05c98787) for the idea of initializing GPT2 from English weights. 3. Tokenize the corpus using `prepare/tokenize_text.py`. This generates files for train and validation tokens in JSON Lines format. 4. Run the training script `train.py`! `run.sh` shows how this was executed for the full run with config `configs/tpu_large.json`. ## License GerPT2 is licensed under the MIT License. ## Citing Please cite GerPT2 as follows: ``` @misc{Minixhofer_GerPT2_German_large_2020, author = {Minixhofer, Benjamin}, doi = {10.5281/zenodo.5509984}, month = {12}, title = {{GerPT2: German large and small versions of GPT2}}, url = {https://github.com/bminixhofer/gerpt2}, year = {2020} } ``` ## Acknowledgements Thanks to [Hugging Face](https://huggingface.co) for awesome tools and infrastructure. Huge thanks to [Artus Krohn-Grimberghe](https://twitter.com/artuskg) at [LYTiQ](https://www.lytiq.de/) for making this possible by sponsoring the resources used for training.
vumichien/ppo-Huggy
vumichien
2023-05-30T09:39:14Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-05-30T09:39:06Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Step 1: Find your model_id: vumichien/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
emresvd/u155
emresvd
2023-05-30T09:36:18Z
0
0
fastai
[ "fastai", "region:us" ]
null
2023-05-30T09:36:11Z
--- tags: - fastai --- # Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
Uxinnn/ppo-LunarLander-v2
Uxinnn
2023-05-30T09:27:52Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-05-30T09:27:32Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 254.88 +/- 22.30 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
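The usage section above is left as a TODO; a minimal loading sketch might look like the snippet below, assuming the checkpoint follows the usual SB3 naming convention (the filename `ppo-LunarLander-v2.zip` is an assumption, so check the repository files):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename is assumed from the common SB3 convention; it is not stated in the card.
checkpoint = load_from_hub(repo_id="Uxinnn/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
print(model.policy)
```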
chidac/my_awesome_model
chidac
2023-05-30T09:11:24Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-30T08:48:56Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb model-index: - name: my_awesome_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
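Since the card gives no inference example, a minimal sentiment-classification sketch (the review text is invented for illustration; label names follow whatever mapping was used during fine-tuning) might be:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="chidac/my_awesome_model")
print(classifier("This movie was an absolute delight from start to finish."))
```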
danieliser/Reinforce-Pixelcopter-PLE-v0-1
danieliser
2023-05-30T09:00:57Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-05-30T08:51:13Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-PLE-v0-1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 89.10 +/- 51.48 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
rlanday/Reinforce-CartPole-v1
rlanday
2023-05-30T08:58:47Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-05-30T08:58:36Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 496.80 +/- 9.60 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
Akira10/pegasus-samsum
Akira10
2023-05-30T08:42:16Z
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "pegasus", "text2text-generation", "generated_from_trainer", "dataset:samsum", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-05-30T08:00:38Z
--- tags: - generated_from_trainer datasets: - samsum model-index: - name: pegasus-samsum results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pegasus-samsum This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset. It achieves the following results on the evaluation set: - Loss: 1.4828 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.7041 | 0.54 | 500 | 1.4828 | ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
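No usage example is included in the card; a minimal dialogue-summarization sketch (the sample dialogue below is invented for illustration) could look like:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Akira10/pegasus-samsum")
dialogue = (
    "Anna: Are we still on for lunch tomorrow?\n"
    "Ben: Yes, 12:30 at the usual place.\n"
    "Anna: Great, see you there!"
)
print(summarizer(dialogue, max_length=40)[0]["summary_text"])
```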
JFoz/dog-cat-pose
JFoz
2023-05-30T08:35:24Z
22
2
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "image-to-image", "controlnet", "jax-diffusers-event", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
image-to-image
2023-04-21T15:34:20Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 tags: - stable-diffusion - stable-diffusion-diffusers - image-to-image - diffusers - controlnet - jax-diffusers-event inference: true library_name: diffusers --- # controlnet- JFoz/dog-cat-pose Simple controlnet model made as part of the HF JaX/Diffusers community sprint. These are controlnet weights trained on runwayml/stable-diffusion-v1-5 with pose conditioning generated using the animalpose model of OpenPifPaf. Some example images can be found in the following prompt: a tortoiseshell cat is sitting on a cushion ![images_0)](./images_0.png) prompt: a yellow dog standing on a lawn ![images_1)](./images_1.png) Whilst not the dataset used for this model, a smaller dataset with the same format for conditioning images can be found at https://huggingface.co/datasets/JFoz/dog-poses-controlnet-dataset The dataset was generated using the code at https://github.com/jfozard/animalpose/tree/f1be80ed29886a1314054b87f2a8944ea98997ac # Model Card for dog-cat-pose This is an ControlNet model which allows users to control the pose of a dog or cat. Poses were extracted from images using the animalpose model of OpenPifPaf https://openpifpaf.github.io/intro.html . Skeleton colouring is as shown in the dataset. See also https://huggingface.co/JFoz/dog-pose # Model Details ## Model Description <!-- Provide a longer summary of what this model is/does. --> This is an ControlNet model which allows users to control the pose of a dog or cat. Poses were extracted from images using the animalpose model of OpenPifPaf https://openpifpaf.github.io/intro.html. Skeleton colouring is as shown in the dataset. See also https://huggingface.co/JFoz/dog-pose - **Developed by:** John Fozard - **Model type:** Conditional image generation - **Language(s) (NLP):** en - **License:** openrail - **Parent Model:** https://huggingface.co/runwayml/stable-diffusion-v1-5 - **Resources for more information:** - [GitHub Repo](https://github.com/jfozard/animalpose/tree/f1be80ed29886a1314054b87f2a8944ea98997ac) # Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ## Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> <!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." --> Supply a suitable, potentially incomplete pose along with a relevant text prompt ## Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> <!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." --> Generating images of non-animals. We advise retaining the stable diffusion safety filter when using this model. # Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> The model is trained on a relatively small dataset, and may be overfit to those images. ## Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Maintain careful supervision of model inputs and outputs. 
# Training Details ## Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> Trained on a subset of Laion-5B using clip retrieval with the prompts "a photo of a (dog/cat) (standing/walking)" ## Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> ### Preprocessing Images were rescaled to 512 along their short edge and centrally cropped. The OpenPifPaf pose-detection model was used to extract poses, which were used to generate conditioning images. ## Compute Infrastructure TPUv4i ### Software Flax stable diffusion controlnet pipeline # Model Card Authors [optional] <!-- This section provides another layer of transparency and accountability. Whose views is this model card representing? How many voices were included in its construction? Etc. --> John Fozard
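The card describes the Flax ControlNet pipeline used during the sprint but gives no inference code; a PyTorch sketch is shown below, assuming the repository also carries diffusers-format weights loadable with `ControlNetModel.from_pretrained`, and using a hypothetical conditioning-image path:

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Hypothetical conditioning image: a pose skeleton rendered with the dataset's colour scheme.
pose = load_image("dog_pose_skeleton.png")

controlnet = ControlNetModel.from_pretrained("JFoz/dog-cat-pose", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

image = pipe("a yellow dog standing on a lawn", image=pose, num_inference_steps=30).images[0]
image.save("dog.png")
```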
MakiPan/controlnet-encoded-hands-130k
MakiPan
2023-05-30T08:35:18Z
30
12
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "controlnet", "jax-diffusers-event", "image-to-image", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
image-to-image
2023-05-04T12:54:09Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 tags: - stable-diffusion - stable-diffusion-diffusers - diffusers - controlnet - jax-diffusers-event - image-to-image inference: true --- # controlnet- MakiPan/controlnet-encoded-hands-20230504_125403 These are controlnet weights trained on runwayml/stable-diffusion-v1-5 with a new type of conditioning. You can find some example images below. prompt: a man in a colorful shirt giving a peace sign in front of a rallying crowd ![images_0)](./images_0.png) prompt: a police officer signaling someone to stop in a park ![images_1)](./images_1.png)
SAMControlNet/sd-controlnet-sam-seg
SAMControlNet
2023-05-30T08:35:01Z
26
1
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "image-to-image", "controlnet", "jax-diffusers-event", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
image-to-image
2023-04-30T17:12:03Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 tags: - stable-diffusion - stable-diffusion-diffusers - image-to-image - diffusers - controlnet - jax-diffusers-event inference: true --- # controlnet- SAMControlNet/sd-controlnet-sam-seg These are controlnet weights trained on runwayml/stable-diffusion-v1-5 with a new type of conditioning. You can find some example images below. prompt: a dolphin jumping out of the water ![images_0)](./images_0.png) prompt: a antelope standing in a field with birds ![images_1)](./images_1.png)
Yelinz/LunarLander-v2-ppo-cleanrl
Yelinz
2023-05-30T08:35:00Z
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2023-05-29T16:33:47Z
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -11.64 +/- 62.75 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'ppo' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'LunarLander-v2' 'total_timesteps': 500000 'learning_rate': 0.00025 'num_envs': 4 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'Yelinz/LunarLander-v2-ppo-cleanrl' 'batch_size': 512 'minibatch_size': 128} ```
y59/discordgagging-350m
y59
2023-05-30T08:25:29Z
115
0
transformers
[ "transformers", "pytorch", "opt", "text-generation", "license:creativeml-openrail-m", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-05-30T08:22:49Z
--- license: creativeml-openrail-m ---
NickThe1/Taxi-v3
NickThe1
2023-05-30T07:57:03Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-05-30T07:57:00Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="NickThe1/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
NickThe1/q-FrozenLake-v1-4x4-noSlippery
NickThe1
2023-05-30T07:52:40Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-05-30T07:52:37Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="NickThe1/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Laegrinna/Tokyolagii
Laegrinna
2023-05-30T07:42:27Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-30T07:31:12Z
--- license: creativeml-openrail-m ---
big91987/111222
big91987
2023-05-30T07:40:42Z
0
0
allennlp
[ "allennlp", "chemistry", "zero-shot-classification", "zh", "dataset:detection-datasets/coco", "license:openrail", "region:us" ]
zero-shot-classification
2023-05-30T07:36:06Z
--- license: openrail datasets: - detection-datasets/coco language: - zh metrics: - accuracy library_name: allennlp pipeline_tag: zero-shot-classification tags: - chemistry ---
MSG3/setfit_model
MSG3
2023-05-30T07:35:41Z
3
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2023-05-30T07:34:21Z
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # MSG3/setfit_model This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("MSG3/setfit_model") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
SumDeed/sku_output
SumDeed
2023-05-30T07:35:21Z
0
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-05-29T06:49:26Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 instance_prompt: a golden oreo mini pack tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA DreamBooth - SumDeed/sku_output These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were trained on a golden oreo mini pack using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png) LoRA for the text encoder was enabled: False.
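The card does not show how to apply the weights; with diffusers of this era the LoRA attention processors could be attached to the base pipeline roughly as follows (a sketch under the assumption that the repository stores the standard LoRA weights produced by the DreamBooth LoRA training script):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.unet.load_attn_procs("SumDeed/sku_output")  # attach the LoRA adaption weights (format assumed)
image = pipe("a golden oreo mini pack", num_inference_steps=30).images[0]
image.save("sku.png")
```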
vnykr/Reinforce-CartPole-v1
vnykr
2023-05-30T07:18:22Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-05-30T07:18:12Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
a-grishman/bert-base-banking77-pt2
a-grishman
2023-05-30T07:17:48Z
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:banking77", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-29T20:14:10Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - banking77 metrics: - f1 model-index: - name: bert-base-banking77-pt2 results: - task: name: Text Classification type: text-classification dataset: name: banking77 type: banking77 config: default split: test args: default metrics: - name: F1 type: f1 value: 0.9368591300797698 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-banking77-pt2 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the banking77 dataset. It achieves the following results on the evaluation set: - Loss: 0.2758 - F1: 0.9369 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.7116 | 1.0 | 1251 | 0.5905 | 0.8722 | | 0.2675 | 2.0 | 2502 | 0.3136 | 0.9229 | | 0.16 | 3.0 | 3753 | 0.2758 | 0.9369 | ### Framework versions - Transformers 4.27.1 - Pytorch 2.0.1+cu117 - Datasets 2.9.0 - Tokenizers 0.13.3
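A minimal inference sketch for the banking-intent classifier (the customer query is invented for illustration) might be:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="a-grishman/bert-base-banking77-pt2")
print(classifier("I still have not received my new card, when will it arrive?"))
```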
ebisuke/liz-nojaloli-ja
ebisuke
2023-05-30T07:01:20Z
11
0
transformers
[ "transformers", "pytorch", "gpt_neox", "text-generation", "ja", "dataset:ebisuke/liz-nojaloli-ja-ds", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-05-23T07:59:52Z
--- language: - ja datasets: - ebisuke/liz-nojaloli-ja-ds --- # ebisuke/liz-nojaloli-ja ## License [MIT License](https://opensource.org/licenses/MIT) Uses [rinna/japanese-gpt-neox-3.6b](https://huggingface.co/rinna/japanese-gpt-neox-3.6b) as the base model. ## Description A nojaloli-flavoured chat model, fine-tuned from [rinna/japanese-gpt-neox-3.6b](https://huggingface.co/rinna/japanese-gpt-neox-3.6b). Created as a hobby project and for the developer's personal study. __Because this model is still under development, it may be updated as the dataset is updated.__ ## Datasets Only the following dataset was used for fine-tuning: [ebisuke/liz-nojaloli-ja-ds](https://huggingface.co/datasets/ebisuke/liz-nojaloli-ja-ds) ## Usage Wrap the user's input in "`相手は言いました。「(内容)」\n`" (where (内容) is the message content). The model generates the continuation after "`あなたは言いました。「`". Since generation may continue beyond that point, truncate at the "`」`" character if necessary. Note that the characteristic tone tends to slip with long inputs. ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("ebisuke/liz-nojaloli-ja", use_fast=False) model = AutoModelForCausalLM.from_pretrained("ebisuke/liz-nojaloli-ja", load_in_8bit=True, device_map='auto') text = "相手は言いました。「眠いにゃ・・・」 \nあなたは言いました。「" token_ids = tokenizer.encode(text, add_special_tokens=False, return_tensors="pt") with torch.no_grad(): output_ids = model.generate( input_ids=token_ids.to(model.device), max_new_tokens=1000, do_sample=True, temperature=0.7, pad_token_id=tokenizer.pad_token_id, bos_token_id=tokenizer.bos_token_id, eos_token_id=tokenizer.eos_token_id, ) output = tokenizer.decode(output_ids.tolist()[0]) print(output) ``` ## Plan - Try RLHF and similar approaches. → 23/05/30: tried with a very small dataset - Considering whether to align the prompt format with existing chat-model formats. - No plans to switch to an instruction model, since the developer prefers one that does not follow instructions much and does not know much.
MAGAer13/mplug-owl-bloomz-7b-multilingual
MAGAer13
2023-05-30T07:00:38Z
46
10
transformers
[ "transformers", "pytorch", "mplug-owl", "image-to-text", "en", "zh", "fr", "ja", "multilingual", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-to-text
2023-05-30T05:36:50Z
--- license: apache-2.0 language: - en - zh - fr - ja - multilingual pipeline_tag: image-to-text tags: - mplug-owl --- # Usage ## Get the latest codebase from Github ```Bash git clone https://github.com/X-PLUG/mPLUG-Owl.git ``` ## Model initialization ```Python import torch from transformers import AutoTokenizer from mplug_owl.modeling_mplug_owl import MplugOwlForConditionalGeneration from mplug_owl.processing_mplug_owl import MplugOwlImageProcessor, MplugOwlProcessor pretrained_ckpt = 'MAGAer13/mplug-owl-bloomz-7b-multilingual' model = MplugOwlForConditionalGeneration.from_pretrained( pretrained_ckpt, torch_dtype=torch.bfloat16, ) image_processor = MplugOwlImageProcessor.from_pretrained(pretrained_ckpt) tokenizer = AutoTokenizer.from_pretrained(pretrained_ckpt) processor = MplugOwlProcessor(image_processor, tokenizer) ``` ## Model inference Prepare model inputs. ```Python # We use a human/AI template to organize the context as a multi-turn conversation. # <image> denotes an image placeholder. prompts = [ '''The following is a conversation between a curious human and AI assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. Human: <image> Human: Explain why this meme is funny. AI: '''] # The image paths should be placed in the image_list and kept in the same order as in the prompts. # We support urls, local file paths, and base64 string. You can customise the pre-processing of images by modifying the mplug_owl.modeling_mplug_owl.ImageProcessor image_list = ['https://xxx.com/image.jpg'] ``` Get response. ```Python # generate kwargs (the same in transformers) can be passed in the do_generate() generate_kwargs = { 'do_sample': True, 'top_k': 5, 'max_length': 512 } from PIL import Image images = [Image.open(_) for _ in image_list] inputs = processor(text=prompts, images=images, return_tensors='pt') inputs = {k: v.bfloat16() if v.dtype == torch.float else v for k, v in inputs.items()} inputs = {k: v.to(model.device) for k, v in inputs.items()} with torch.no_grad(): res = model.generate(**inputs, **generate_kwargs) sentence = tokenizer.decode(res.tolist()[0], skip_special_tokens=True) print(sentence) ```
Akira10/xlm-roberta-base-finetuned-panx-it
Akira10
2023-05-30T06:47:19Z
114
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-05-30T06:44:58Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-it results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme config: PAN-X.it split: validation args: PAN-X.it metrics: - name: F1 type: f1 value: 0.8332647179909428 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-it This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2442 - F1: 0.8333 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.8366 | 1.0 | 70 | 0.3126 | 0.7444 | | 0.2814 | 2.0 | 140 | 0.2561 | 0.8094 | | 0.1843 | 3.0 | 210 | 0.2442 | 0.8333 | ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
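The card lists only training metrics; a minimal named-entity-recognition sketch (the Italian sentence is an invented example) could be:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Akira10/xlm-roberta-base-finetuned-panx-it",
    aggregation_strategy="simple",
)
print(ner("Il Colosseo si trova a Roma."))
```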
Akira10/xlm-roberta-base-finetuned-panx-fr
Akira10
2023-05-30T06:44:47Z
101
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-05-30T06:41:56Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-fr results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme config: PAN-X.fr split: validation args: PAN-X.fr metrics: - name: F1 type: f1 value: 0.8423885618166527 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2707 - F1: 0.8424 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.5862 | 1.0 | 191 | 0.3257 | 0.7841 | | 0.2586 | 2.0 | 382 | 0.2732 | 0.8262 | | 0.1714 | 3.0 | 573 | 0.2707 | 0.8424 | ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
amittian/setfit_active_service_multi_label_version_0_0_2
amittian
2023-05-30T06:38:01Z
3
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2023-05-30T06:37:10Z
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # amittian/setfit_active_service_multi_label_version_0_0_2 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("amittian/setfit_active_service_multi_label_version_0_0_2") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
fredymad/Financial_laxo_2e-5_16_2
fredymad
2023-05-30T06:33:21Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-30T06:26:57Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: Financial_laxo_2e-5_16_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Financial_laxo_2e-5_16_2 This model is a fine-tuned version of [fredymad/Financial_estricto_2e-5_16_2](https://huggingface.co/fredymad/Financial_estricto_2e-5_16_2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3601 - Accuracy: 0.8762 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 400 | 0.3033 | 0.8743 | | 0.3393 | 2.0 | 800 | 0.3601 | 0.8762 | ### Framework versions - Transformers 4.29.0 - Pytorch 1.13.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
xmj2002/gpt2_tang_poetry
xmj2002
2023-05-30T06:31:12Z
107
1
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "zh", "dataset:xmj2002/tang_poems", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-05-30T05:11:49Z
--- license: apache-2.0 datasets: - xmj2002/tang_poems language: - zh --- The pretrained model used as the base is [uer/gpt2-chinese-cluecorpussmall](https://huggingface.co/uer/gpt2-chinese-cluecorpussmall). ## Usage ```python from transformers import AutoModelForCausalLM from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("xmj2002/gpt2_tang_poetry") model = AutoModelForCausalLM.from_pretrained("xmj2002/gpt2_tang_poetry") text = "白居易《远方》" inputs = tokenizer(text, return_tensors="pt").input_ids outputs = model.generate(inputs, max_new_tokens=100, do_sample=True, top_k=100, top_p=0.95) tokenizer.decode(outputs[0], skip_special_tokens=True) ```
Akira10/xlm-roberta-base-finetuned-panx-de
Akira10
2023-05-30T06:28:26Z
100
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-05-30T06:24:03Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme config: PAN-X.de split: validation args: PAN-X.de metrics: - name: F1 type: f1 value: 0.8609120891618334 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1400 - F1: 0.8609 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2581 | 1.0 | 525 | 0.1584 | 0.8233 | | 0.1252 | 2.0 | 1050 | 0.1384 | 0.8491 | | 0.0811 | 3.0 | 1575 | 0.1400 | 0.8609 | ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
Sovenok-Hacker/nanoalpaca-3b
Sovenok-Hacker
2023-05-30T05:59:54Z
0
6
null
[ "question-answering", "en", "dataset:databricks/databricks-dolly-15k", "license:gpl-3.0", "region:us" ]
question-answering
2023-05-29T09:45:13Z
--- license: gpl-3.0 datasets: - databricks/databricks-dolly-15k language: - en pipeline_tag: question-answering --- A minimal Alpaca-LoRA adapter trained on the [databricks/databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) dataset and based on [OpenLLaMA-3B-600BT](https://huggingface.co/openlm-research/open_llama_3b_600bt_preview). A pre-trained LoRA adapter and a [Colab Jupyter notebook](https://colab.research.google.com/#fileId=https://huggingface.co/Sovenok-Hacker/openalpaca-3b/blob/main/finetune.ipynb) for fine-tuning are provided (about 3 hours for 1 epoch on a T4).
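A minimal loading sketch, assuming the adapter is in PEFT format and the base model is the OpenLLaMA checkpoint linked above, might be:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# The base model id comes from the card's description; the loading details are assumptions.
base_id = "openlm-research/open_llama_3b_600bt_preview"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(model, "Sovenok-Hacker/nanoalpaca-3b")
```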