| Column | Type | Min | Max |
|---|---|---|---|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-08-30 00:39:23 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (526 classes) | n/a | n/a |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | n/a | n/a |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-08-30 00:39:08 |
| card | string (length) | 11 | 1.01M |
ChristinValiyaveettil/Imagenet
ChristinValiyaveettil
2023-05-11T09:23:47Z
3
0
tf-keras
[ "tf-keras", "mobilenet", "image-classification", "region:us" ]
image-classification
2023-05-11T09:19:44Z
--- pipeline_tag: image-classification ---
ilhkn/my-awesome-setfit-model3
ilhkn
2023-05-11T09:17:17Z
1
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-05-11T09:16:53Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 10 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 10, "warmup_steps": 1, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
joelewis2000/RL-PyramidRND
joelewis2000
2023-05-11T09:03:54Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2023-05-11T09:03:48Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Find your model_id: joelewis2000/RL-PyramidRND 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
nakcnx/setfit-paraphrase-multilingual-MiniLM-bad_topic
nakcnx
2023-05-11T08:56:29Z
5
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2023-05-11T08:54:08Z
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # nakcnx/setfit-paraphrase-multilingual-MiniLM-bad_topic This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("nakcnx/setfit-paraphrase-multilingual-MiniLM-bad_topic") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
Nico4one/nicola4ne_ppo-LunarLander-v2
Nico4one
2023-05-11T08:55:54Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-05-11T08:53:32Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 275.17 +/- 15.12 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
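A possible completion of the TODO stub above (the same stub appears verbatim in the other stable-baselines3 cards in this dump). This is a sketch, not the author's code: the checkpoint filename is an assumption and may differ per repo, and `gymnasium` may need to be `gym` on older stacks.

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub
import gymnasium as gym

# Download the checkpoint from the Hub; the filename is assumed.
checkpoint = load_from_hub(
    repo_id="Nico4one/nicola4ne_ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Roll out one episode with the loaded policy.
env = gym.make("LunarLander-v2")
obs, _ = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```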
yuanzhoulvpi/xiaoheizi-3b
yuanzhoulvpi
2023-05-11T08:55:29Z
23
6
transformers
[ "transformers", "pytorch", "gpt_neox", "text-generation", "zh", "dataset:BelleGroup/train_1M_CN", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-04-28T05:15:56Z
--- license: apache-2.0 datasets: - BelleGroup/train_1M_CN language: - zh library_name: transformers --- ## Introduction 1. Model name: `xiaoheizi` (`小黑子`). Chinese large models keep picking ever grander names, so we did something different and went with a deliberately casual one. 2. Parameters: a `3B`-parameter model. 3. Architecture: based on the [`dolly_v2_3b`](https://huggingface.co/databricks/dolly-v2-3b) model, which is essentially a [`gpt_neox`](https://huggingface.co/models?other=gpt_neox) model. 4. Data: trained on the [`BelleGroup/train_1M_CN`](https://huggingface.co/datasets/BelleGroup/train_1M_CN) dataset. 5. Commercial use: permitted. 6. GitHub repo: the full training and inference code is open source at [https://github.com/yuanzhoulvpi2017/zero_nlp/blob/main/chinese_dolly_v2_3b](https://github.com/yuanzhoulvpi2017/zero_nlp/blob/main/chinese_dolly_v2_3b) ## Usage ### Inference #### step1 ```bash git clone git@github.com:yuanzhoulvpi2017/zero_nlp.git cd zero_nlp/chinese_dolly_v2_3b ``` #### step2 ```python from transformers import GPTNeoXForCausalLM, AutoTokenizer from dolly.generate import generate_response, load_model_tokenizer_for_generate import torch model, tokenizer = load_model_tokenizer_for_generate(pretrained_model_name_or_path="yuanzhoulvpi/xiaoheizi-3b") for i in range(10): print("*" * 100) res = generate_response(instruction="列出三个在二十世纪被认为是最伟大的小说之一的书名,以及它们的作者。", model=model, tokenizer=tokenizer) print(res) torch.cuda.empty_cache() # **************************************************************************************************** # - 《致命招若众》 by 黄伟作为一部历史传说学者的作品,创作于 1898 年。 # - 《讲故事人生》 by 苏珊作为一部历史传说,近年的迈克尔·道尔莎尼的重装作品,创作于1999年。 # - 《金头银头》 by 洛克菲尔德·迪伊纳作为一部小说、小说之旅、花园小说家之一,创作于1958年。 ``` ### Training 1. Train with the `trainer_dolly.py` script.
GregoRio123/smprn
GregoRio123
2023-05-11T08:52:34Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-11T08:32:18Z
--- license: creativeml-openrail-m ---
edhrdh/xdhrdhfc
edhrdh
2023-05-11T08:51:58Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-11T08:48:20Z
--- license: creativeml-openrail-m ---
lewtun/tiny-random-OPTForCausalLM-delta
lewtun
2023-05-11T08:48:59Z
0
0
null
[ "region:us" ]
null
2023-05-11T07:57:07Z
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards {} --- # Model Card for `tiny-random-OPTForCausalLM-delta` <!-- Provide a quick summary of what the model is/does. --> This model is used for testing that `peft` is compatible with model repos that store their delta weights as separate branches. Check out the `v1` branch for the weights.
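Since the weights live on the `v1` branch, a minimal loading sketch (assumed, not from the card) would pass that branch via the standard `revision` argument:

```python
from transformers import AutoModelForCausalLM

# Load the delta weights stored on the `v1` branch of the repo.
model = AutoModelForCausalLM.from_pretrained(
    "lewtun/tiny-random-OPTForCausalLM-delta", revision="v1"
)
```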
daayuu/PPO-LunarLander-v2
daayuu
2023-05-11T08:42:20Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-05-11T08:41:52Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 259.53 +/- 27.73 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
BlueAvenir/sti_modern_workplace_class_model
BlueAvenir
2023-05-11T08:40:23Z
1
0
sentence-transformers
[ "sentence-transformers", "pytorch", "xlm-roberta", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-05-05T11:32:27Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 300 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 300, "warmup_steps": 30, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
Neronuser/q-FrozenLake-v1-4x4-noSlippery
Neronuser
2023-05-11T08:24:02Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-05-11T08:24:00Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python import gymnasium as gym # assumed import; `load_from_hub` is the Q-table loading helper defined in the Hugging Face Deep RL course notebook model = load_from_hub(repo_id="Neronuser/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
akmalartsai/OhataShieri
akmalartsai
2023-05-11T08:19:28Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-11T08:11:58Z
--- license: creativeml-openrail-m ---
0x70DA/EnabledChat-lora
0x70DA
2023-05-11T08:12:36Z
0
0
null
[ "pytorch", "tensorboard", "generated_from_trainer", "text-generation", "license:other", "region:us" ]
text-generation
2023-05-07T17:48:31Z
--- license: other tags: - generated_from_trainer model-index: - name: EnabledChat results: [] pipeline_tag: text-generation --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # EnabledChat This model is a fine-tuned version of [facebook/opt-1.3b](https://huggingface.co/facebook/opt-1.3b) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1152 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.2849 | 0.06 | 200 | 1.2346 | | 1.2515 | 0.13 | 400 | 1.2077 | | 1.1859 | 0.19 | 600 | 1.1948 | | 1.2007 | 0.26 | 800 | 1.1845 | | 1.2085 | 0.32 | 1000 | 1.1763 | | 1.2036 | 0.38 | 1200 | 1.1710 | | 1.2134 | 0.45 | 1400 | 1.1655 | | 1.1869 | 0.51 | 1600 | 1.1610 | | 1.2017 | 0.58 | 1800 | 1.1573 | | 1.2139 | 0.64 | 2000 | 1.1537 | | 1.1302 | 0.7 | 2200 | 1.1510 | | 1.1799 | 0.77 | 2400 | 1.1480 | | 1.1518 | 0.83 | 2600 | 1.1461 | | 1.1752 | 0.9 | 2800 | 1.1434 | | 1.1584 | 0.96 | 3000 | 1.1415 | | 1.1311 | 1.02 | 3200 | 1.1400 | | 1.1595 | 1.09 | 3400 | 1.1374 | | 1.1721 | 1.15 | 3600 | 1.1365 | | 1.1597 | 1.22 | 3800 | 1.1355 | | 1.1718 | 1.28 | 4000 | 1.1335 | | 1.1639 | 1.34 | 4200 | 1.1315 | | 1.1448 | 1.41 | 4400 | 1.1312 | | 1.1645 | 1.47 | 4600 | 1.1296 | | 1.1562 | 1.54 | 4800 | 1.1282 | | 1.1619 | 1.6 | 5000 | 1.1273 | | 1.1794 | 1.66 | 5200 | 1.1270 | | 1.178 | 1.73 | 5400 | 1.1254 | | 1.1291 | 1.79 | 5600 | 1.1246 | | 1.1235 | 1.86 | 5800 | 1.1234 | | 1.1169 | 1.92 | 6000 | 1.1222 | | 1.1494 | 1.98 | 6200 | 1.1214 | | 1.1032 | 2.05 | 6400 | 1.1218 | | 1.1269 | 2.11 | 6600 | 1.1207 | | 1.1561 | 2.18 | 6800 | 1.1203 | | 1.152 | 2.24 | 7000 | 1.1199 | | 1.1335 | 2.3 | 7200 | 1.1191 | | 1.1341 | 2.37 | 7400 | 1.1184 | | 1.0963 | 2.43 | 7600 | 1.1180 | | 1.1179 | 2.5 | 7800 | 1.1172 | | 1.1176 | 2.56 | 8000 | 1.1170 | | 1.15 | 2.62 | 8200 | 1.1165 | | 1.073 | 2.69 | 8400 | 1.1164 | | 1.1737 | 2.75 | 8600 | 1.1156 | | 1.1251 | 2.82 | 8800 | 1.1156 | | 1.1548 | 2.88 | 9000 | 1.1154 | | 1.1046 | 2.94 | 9200 | 1.1152 | ### Framework versions - Transformers 4.30.0.dev0 - Pytorch 2.0.0 - Datasets 2.11.0 - Tokenizers 0.13.3
akmalartsai/YunaToyoda
akmalartsai
2023-05-11T08:11:20Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-11T08:07:32Z
--- license: creativeml-openrail-m ---
intanm/mlm-20230511-indobert-large-p1-combined-pt1
intanm
2023-05-11T07:53:01Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-05-11T07:04:51Z
--- license: mit tags: - generated_from_trainer model-index: - name: mlm-20230511-indobert-large-p1-combined-pt1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mlm-20230511-indobert-large-p1-combined-pt1 This model is a fine-tuned version of [intanm/mlm-20230510-indobert-large-p1-001-pt2](https://huggingface.co/intanm/mlm-20230510-indobert-large-p1-001-pt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.7543 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 284 | 2.4863 | | 2.8111 | 2.0 | 568 | 2.2089 | | 2.8111 | 3.0 | 852 | 2.0673 | | 2.1954 | 4.0 | 1136 | 1.9760 | | 2.1954 | 5.0 | 1420 | 1.8531 | | 1.928 | 6.0 | 1704 | 1.8520 | | 1.928 | 7.0 | 1988 | 1.7769 | | 1.7843 | 8.0 | 2272 | 1.7610 | | 1.7156 | 9.0 | 2556 | 1.7531 | | 1.7156 | 10.0 | 2840 | 1.7584 | ### Framework versions - Transformers 4.29.0 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
PaulineSanchez/Modele_Trad_Food_en_fr
PaulineSanchez
2023-05-11T07:51:05Z
103
0
transformers
[ "transformers", "pytorch", "marian", "text2text-generation", "food", "en", "fr", "dataset:PaulineSanchez/Trad_food", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-05-09T16:46:01Z
--- datasets: - PaulineSanchez/Trad_food language: - en - fr metrics: - bleu tags: - food --- # Model_trad_food_en_fr This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the PaulineSanchez/Trad_food dataset. It achieves the following results on the evaluation set: - Loss: 0.4698 - Bleu: 74.7824 - Gen Len: 10.7911 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Training results ### Framework versions - Transformers 4.29.0.dev0 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
sillon/DialoGPT-small-HospitalBot
sillon
2023-05-11T07:33:33Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "Conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-05-11T04:56:02Z
--- tags: - Conversational --- ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch tokenizer = AutoTokenizer.from_pretrained("sillon/DialoGPT-small-HospitalBot") model = AutoModelForCausalLM.from_pretrained("sillon/DialoGPT-small-HospitalBot") # Let's chat for 5 lines for step in range(5): # encode the new user input, add the eos_token and return a tensor in Pytorch new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt') # append the new user input tokens to the chat history bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids # generate a response while limiting the total chat history to 1000 tokens chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id) # pretty print last output tokens from bot print("HospitalBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))) ```
AllanFrostin/analise-morfossintatica-ptbr
AllanFrostin
2023-05-11T07:23:25Z
17
2
transformers
[ "transformers", "pytorch", "bert", "token-classification", "autotrain", "pt", "dataset:AllanFrost/autotrain-data-analise-morfosintatica-ptbr", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-05-11T05:11:41Z
--- tags: - autotrain - token-classification language: - pt widget: - text: "O rato roeu a roupa do rei de Roma." datasets: - AllanFrost/autotrain-data-analise-morfosintatica-ptbr co2_eq_emissions: emissions: 6.411793801438884 --- # Morphosyntax Model Trained Using AutoTrain - Dev: AllanFrostin 'MPL-2.0 license' Allan Frostin's model is a morphosyntactic analysis model for Portuguese that uses machine-learning techniques to identify different word classes in a sentence, such as verbs, nouns, and adjectives. It can analyze the grammatical structure of a sentence and identify specific entities within it. The model was trained using AutoTrain, a technique that automates the training of machine-learning models. It was evaluated with several validation metrics, including loss, precision, recall, and F1 score, to determine its quality and performance on specific tasks. The model ID is 57436132788 and it reported CO2 emissions of 6.4118 grams, which can be useful for assessing its environmental impact. The model can be accessed via cURL or the Python API. With this model, you can identify entities in a text: tokenize the text with the tokenizer, pass the tokens to the model, and read off the predictions. In one usage example, the model identified several entities in the sentence "O rato roeu a roupa do rei de Roma", including an adjective, a noun, an article, a preposition, and two proper nouns. AllanFrostin
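The card describes tokenizing text and reading off the model's predictions but includes no snippet; here is a minimal sketch (assumed, not from the card) using the generic transformers token-classification pipeline with the widget sentence:

```python
from transformers import pipeline

# Group sub-tokens into whole-word entities for readability.
ner = pipeline(
    "token-classification",
    model="AllanFrostin/analise-morfossintatica-ptbr",
    aggregation_strategy="simple",
)
print(ner("O rato roeu a roupa do rei de Roma."))
```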
jules654/q-FrozenLake-v1-4x4-noSlippery
jules654
2023-05-11T07:23:09Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-05-11T07:23:06Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python import gymnasium as gym # assumed import; `load_from_hub` is the Q-table loading helper defined in the Hugging Face Deep RL course notebook model = load_from_hub(repo_id="jules654/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
MochaPixel/Dolllikeness
MochaPixel
2023-05-11T07:03:22Z
0
1
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-11T06:53:52Z
--- license: creativeml-openrail-m ---
TasmiaAzmi/t5-end-to-end-questions-generation
TasmiaAzmi
2023-05-11T06:41:09Z
157
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-05-11T03:51:06Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: t5-end-to-end-questions-generation results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-end-to-end-questions-generation This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.2854 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.7656 | 0.43 | 100 | 1.3979 | | 1.5042 | 0.86 | 200 | 1.3440 | | 1.3992 | 1.3 | 300 | 1.3230 | | 1.3654 | 1.73 | 400 | 1.3035 | | 1.3329 | 2.16 | 500 | 1.2980 | | 1.2613 | 2.59 | 600 | 1.2880 | | 1.2684 | 3.02 | 700 | 1.2798 | | 1.2107 | 3.45 | 800 | 1.2847 | | 1.2081 | 3.89 | 900 | 1.2808 | | 1.1707 | 4.32 | 1000 | 1.2881 | | 1.1633 | 4.75 | 1100 | 1.2814 | | 1.1607 | 5.18 | 1200 | 1.2839 | | 1.1272 | 5.61 | 1300 | 1.2852 | | 1.1266 | 6.05 | 1400 | 1.2838 | | 1.1045 | 6.48 | 1500 | 1.2857 | | 1.1117 | 6.91 | 1600 | 1.2854 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0 - Datasets 2.12.0 - Tokenizers 0.11.0
r1ck/doc2query-viT5
r1ck
2023-05-11T06:16:25Z
9
2
transformers
[ "transformers", "pytorch", "safetensors", "t5", "text2text-generation", "vi", "arxiv:1904.08375", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-02-01T03:20:31Z
--- language: vi widget: - text: >- Chiến tranh Lạnh (1947-1991, tiếng Anh: Cold War) là chỉ đến sự căng thẳng địa chính trị và xung đột ý thức hệ đỉnh điểm giữa hai siêu cường (đứng đầu và đại diện hai khối đối lập) : Hoa Kỳ (chủ nghĩa tư bản) và Liên Xô (chủ nghĩa xã hội). license: apache-2.0 pipeline_tag: text2text-generation --- # doc2query-viT5 This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on [viT5](https://huggingface.co/VietAI/vit5-base). It can be used for: - **Document expansion**: You generate 20-40 queries for your paragraphs and index the paragraphs and the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, it re-weights words, giving important words a higher weight even if they appear seldom in a paragraph. - **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. ## Usage ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM import torch model_name = 'r1ck/doc2query-viT5' tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForSeq2SeqLM.from_pretrained(model_name) text = "Chiến tranh Lạnh (1947-1991, tiếng Anh: Cold War) là chỉ đến sự căng thẳng địa chính trị và xung đột ý thức hệ đỉnh điểm giữa hai siêu cường (đứng đầu và đại diện hai khối đối lập): Hoa Kỳ (chủ nghĩa tư bản) và Liên Xô (chủ nghĩa xã hội)." def create_queries(para): input_ids = tokenizer.encode(para, return_tensors='pt') with torch.no_grad(): # Here we use top_p / top_k random sampling. It generates more diverse queries, but of lower quality sampling_outputs = model.generate( input_ids=input_ids, max_length=64, do_sample=True, top_p=0.95, top_k=10, num_return_sequences=5 ) # Here we use Beam-search. It generates better quality queries, but with less diversity beam_outputs = model.generate( input_ids=input_ids, max_length=64, num_beams=5, no_repeat_ngram_size=2, num_return_sequences=5, early_stopping=True ) print("Paragraph:") print(para) print("\nBeam Outputs:") for i in range(len(beam_outputs)): query = tokenizer.decode(beam_outputs[i], skip_special_tokens=True) print(f'{i + 1}: {query}') print("\nSampling Outputs:") for i in range(len(sampling_outputs)): query = tokenizer.decode(sampling_outputs[i], skip_special_tokens=True) print(f'{i + 1}: {query}') create_queries(text) ``` **Note:** `model.generate()` is non-deterministic for top_p/top_k sampling. It produces different queries each time you run it. ## Training This model was fine-tuned from [VietAI/vit5-base](https://huggingface.co/VietAI/vit5-base) on 30k Vietnamese passage-question pairs.
jiawei1998/metaner
jiawei1998
2023-05-11T06:15:44Z
103
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-05-10T08:58:22Z
--- language: - en pipeline_tag: text2text-generation --- Related to https://github.com/chen700564/metaner-icl
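The card is only a pointer to the GitHub repo; a minimal loading sketch (assumed, based on the card's t5/text2text-generation tags; actual in-context NER usage is documented in the linked repo):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the seq2seq checkpoint; prompting conventions are described in the repo.
tokenizer = AutoTokenizer.from_pretrained("jiawei1998/metaner")
model = AutoModelForSeq2SeqLM.from_pretrained("jiawei1998/metaner")
```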
PareshK/q-FrozenLake-v1-4x4-noSlippery
PareshK
2023-05-11T06:10:39Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-05-11T06:10:35Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python import gymnasium as gym # assumed import; `load_from_hub` is the Q-table loading helper defined in the Hugging Face Deep RL course notebook model = load_from_hub(repo_id="PareshK/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
vesteinn/DanskBERT
vesteinn
2023-05-11T05:53:03Z
456
5
transformers
[ "transformers", "pytorch", "safetensors", "xlm-roberta", "fill-mask", "roberta", "danish", "masked-lm", "da", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-11-23T07:46:33Z
--- language: da widget: - text: En trend, der kan blive ligeså hot som<mask>. tags: - roberta - danish - masked-lm - pytorch license: cc-by-4.0 --- # DanskBERT This is DanskBERT, a Danish language model. Note that you should not prepend the mask with a space when using it directly! The model is the best performing base-size model on the [ScandEval benchmark for Danish](https://scandeval.github.io/nlu-benchmark/). DanskBERT was trained on the Danish Gigaword Corpus (Strømberg-Derczynski et al., 2021). DanskBERT was trained using fairseq using the RoBERTa-base configuration. The model was trained with a batch size of 2k, and was trained to convergence for 500k steps using 16 V100 cards for approximately two weeks. If you find this model useful, please cite ``` @inproceedings{snaebjarnarson-etal-2023-transfer, title = "{T}ransfer to a Low-Resource Language via Close Relatives: The Case Study on Faroese", author = "Snæbjarnarson, Vésteinn and Simonsen, Annika and Glavaš, Goran and Vulić, Ivan", booktitle = "Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa)", month = "may 22--24", year = "2023", address = "Tórshavn, Faroe Islands", publisher = {Link{\"o}ping University Electronic Press, Sweden}, } ```
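A minimal sketch (assumed, not from the card) that exercises the card's no-leading-space caveat with the standard fill-mask pipeline, reusing the widget text from the metadata:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="vesteinn/DanskBERT")
# Note: no space before <mask>, as the card instructs.
for pred in fill_mask("En trend, der kan blive ligeså hot som<mask>."):
    print(pred["token_str"], round(pred["score"], 3))
```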
zm87/zm
zm87
2023-05-11T05:43:11Z
0
0
adapter-transformers
[ "adapter-transformers", "zh", "dataset:OpenAssistant/oasst1", "license:apache-2.0", "region:us" ]
null
2023-04-26T22:58:11Z
--- license: apache-2.0 datasets: - OpenAssistant/oasst1 language: - zh metrics: - accuracy library_name: adapter-transformers ---
wengnews/tuning_llama_rl_checkpointsstep_9
wengnews
2023-05-11T05:42:10Z
0
0
transformers
[ "transformers", "pytorch", "trl", "reinforcement-learning", "license:apache-2.0", "endpoints_compatible", "region:us" ]
reinforcement-learning
2023-05-11T05:39:44Z
--- license: apache-2.0 tags: - trl - transformers - reinforcement-learning --- # TRL Model This is a [TRL language model](https://github.com/lvwerra/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value, function, or human feedback. The model can be used for text generation. ## Usage To use this model for inference, first install the TRL library: ```bash python -m pip install trl ``` You can then generate text as follows: ```python from transformers import pipeline generator = pipeline("text-generation", model="wengnews//tmp/tmpf8s2lv31/wengnews/tuning_llama_rl_checkpointsstep_9") outputs = generator("Hello, my llama is cute") ``` If you want to use the model for training or to obtain the outputs from the value head, load the model as follows: ```python from transformers import AutoTokenizer from trl import AutoModelForCausalLMWithValueHead tokenizer = AutoTokenizer.from_pretrained("wengnews//tmp/tmpf8s2lv31/wengnews/tuning_llama_rl_checkpointsstep_9") model = AutoModelForCausalLMWithValueHead.from_pretrained("wengnews//tmp/tmpf8s2lv31/wengnews/tuning_llama_rl_checkpointsstep_9") inputs = tokenizer("Hello, my llama is cute", return_tensors="pt") outputs = model(**inputs, labels=inputs["input_ids"]) ```
Nouzen009/ayasnohjb
Nouzen009
2023-05-11T05:37:56Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-11T05:27:56Z
--- license: creativeml-openrail-m ---
VirSingh/finetuning-emotion-model
VirSingh
2023-05-11T05:35:45Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-04-29T11:47:13Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion model-index: - name: finetuning-emotion-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-emotion-model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) and [clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32) on the emotion dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Labels 0 - sadness 1 - joy 2 - love 3 - anger 4 - fear 5 - surprise ## Training procedure ### Training results | Epoch | Training Loss | Validation Loss | Accuracy | F1 | |:-----:|:-------------:|:---------------:|:--------:|:--------:| | 1 | No log | 0.210356 | 0.927500 | 0.927196 | | 2 | 0.230100 | 0.207957 | 0.930500 | 0.930676 | ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
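A short inference sketch (assumed, not from the card) showing how the integer labels map to the emotion names listed above; the example sentence is made up:

```python
from transformers import pipeline

id2label = {0: "sadness", 1: "joy", 2: "love", 3: "anger", 4: "fear", 5: "surprise"}

clf = pipeline("text-classification", model="VirSingh/finetuning-emotion-model")
result = clf("I can't wait to see you again!")[0]

# Generic fine-tunes often report labels as "LABEL_<id>"; map them back if so.
label = result["label"]
if label.startswith("LABEL_"):
    label = id2label[int(label.split("_")[-1])]
print(label, result["score"])
```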
manishankar5507/Taxi-v3-qlearning
manishankar5507
2023-05-11T05:33:21Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-05-11T05:33:18Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3-qlearning results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.52 +/- 2.77 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python import gymnasium as gym # assumed import; `load_from_hub` is the Q-table loading helper defined in the Hugging Face Deep RL course notebook model = load_from_hub(repo_id="manishankar5507/Taxi-v3-qlearning", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
moghis/my-ppo-LunarLander
moghis
2023-05-11T04:34:42Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-05-11T04:34:20Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 235.66 +/- 26.09 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
PKUWilliamYang/StyleGANEX
PKUWilliamYang
2023-05-11T04:15:54Z
0
9
pytorch
[ "pytorch", "face-manipulation", "image-to-image", "en", "arxiv:2303.06146", "region:us" ]
image-to-image
2023-03-11T13:13:11Z
--- library_name: pytorch tags: - face-manipulation language: - en pipeline_tag: image-to-image --- ## Model Details This system provides a web demo for the following paper: **StyleGANEX: StyleGAN-Based Manipulation Beyond Cropped Aligned Faces** - Developed by: Shuai Yang, Liming Jiang, Ziwei Liu and Chen Change Loy - Resources for more information: - [Project Page](https://www.mmlab-ntu.com/project/styleganex/) - [Research Paper](https://arxiv.org/abs/2303.06146) - [GitHub Repo](https://github.com/williamyang1991/StyleGANEX) **Abstract** > Recent advances in face manipulation using StyleGAN have produced impressive results. However, StyleGAN is inherently limited to cropped aligned faces at a fixed image resolution it is pre-trained on. In this paper, we propose a simple and effective solution to this limitation by using dilated convolutions to rescale the receptive fields of shallow layers in StyleGAN, without altering any model parameters. This allows fixed-size small features at shallow layers to be extended into larger ones that can accommodate variable resolutions, making them more robust in characterizing unaligned faces. To enable real face inversion and manipulation, we introduce a corresponding encoder that provides the first-layer feature of the extended StyleGAN in addition to the latent style code. We validate the effectiveness of our method using unaligned face inputs of various resolutions in a diverse set of face manipulation tasks, including facial attribute editing, super-resolution, sketch/mask-to-face translation, and face toonification. ## Citation Information ```bibtex @article{yang2023styleganex, title = {StyleGANEX: StyleGAN-Based Manipulation Beyond Cropped Aligned Faces}, author = {Yang, Shuai and Jiang, Liming and Liu, Ziwei and Loy, Chen Change}, journal = {arXiv preprint arXiv:2303.06146}, year = {2023}, } ``` ## License [S-Lab License 1.0](https://github.com/williamyang1991/VToonify/blob/main/LICENSE.md)
VcRlAgent/VC-Taxi-v3-Gymnasium
VcRlAgent
2023-05-11T04:08:31Z
0
1
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-05-11T04:02:55Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: VC-Taxi-v3-Gymnasium results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python import gymnasium as gym # assumed import; `load_from_hub` is the Q-table loading helper defined in the Hugging Face Deep RL course notebook model = load_from_hub(repo_id="VcRlAgent/VC-Taxi-v3-Gymnasium", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Winnie-Kay/Sentiment-Analysis-Roberta-bases
Winnie-Kay
2023-05-11T04:00:08Z
5
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-10T06:12:14Z
--- license: mit tags: - generated_from_trainer model-index: - name: Finetuned_bert_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Finetuned_bert_model This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5644 - Rmse: 0.6048 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rmse | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.6429 | 4.0 | 500 | 0.5644 | 0.6048 | ### Framework versions - Transformers 4.29.0 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
ailjtb/braBeautifulRealistic
ailjtb
2023-05-11T03:57:56Z
0
7
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-09T03:18:55Z
--- license: creativeml-openrail-m ---
skadio/ner4opt-roberta-v2
skadio
2023-05-11T03:44:45Z
12
0
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-02-16T12:39:13Z
--- language: - en metrics: - microf1 --- # Ner4Opt: Named Entity Recognition for Optimization Given an optimization problem in natural language, Ner4Opt extracts optimization-related entities from free-form text. See the [Ner4Opt library](https://github.com/skadio/ner4opt/tree/main#ner4opt-named-entity-recognition-for-optimization) and [model details](https://github.com/skadio/ner4opt/tree/main/models#readme).
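Both Ner4Opt cards describe the extraction task but defer code to the library; a minimal sketch (assumed; the Ner4Opt library wraps more than this plain pipeline) using the generic transformers API, with a made-up example sentence:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="skadio/ner4opt-roberta-v2",
    aggregation_strategy="simple",
)
text = "A firm wants to maximize profit from producing chairs and tables."
print(ner(text))
```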
skadio/ner4opt-roberta-v1
skadio
2023-05-11T03:43:59Z
12
0
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-03-30T14:16:15Z
--- language: - en metrics: - microf1 --- # Ner4Opt: Named Entity Recognition for Optimization Given an optimization problem in natural language, Ner4Opt extracts optimization-related entities from free-form text. See the [Ner4Opt library](https://github.com/skadio/ner4opt/tree/main#ner4opt-named-entity-recognition-for-optimization) and [model details](https://github.com/skadio/ner4opt/tree/main/models#readme).
kornwtp/ConGen-WangchanBERT-Tiny
kornwtp
2023-05-11T03:13:58Z
4
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-05-11T02:49:02Z
--- pipeline_tag: sentence-similarity license: apache-2.0 tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # kornwtp/ConGen-WangchanBERT-Tiny This is a [ConGen](https://github.com/KornWtp/ConGen) model: It maps sentences to a 128 dimensional dense vector space and can be used for tasks like semantic search. ## Usage Using this model becomes easy when you have [ConGen](https://github.com/KornWtp/ConGen) installed: ``` pip install -U git+https://github.com/KornWtp/ConGen.git ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["กลุ่มผู้ชายเล่นฟุตบอลบนชายหาด", "กลุ่มเด็กชายกำลังเล่นฟุตบอลบนชายหาด"] model = SentenceTransformer('kornwtp/ConGen-WangchanBERT-Tiny') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Thai Sentence Embeddings Benchmark*: [Semantic Textual Similarity](https://github.com/KornWtp/ConGen#thai-semantic-textual-similarity-benchmark) ## Citing & Authors ```bibtex @inproceedings{limkonchotiwat-etal-2022-congen, title = "{ConGen}: Unsupervised Control and Generalization Distillation For Sentence Representation", author = "Limkonchotiwat, Peerat and Ponwitayarat, Wuttikorn and Lowphansirikul, Lalita and Udomcharoenchaikit, Can and Chuangsuwanich, Ekapol and Nutanong, Sarana", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022", year = "2022", publisher = "Association for Computational Linguistics", } ```
ddrfan/RAFT
ddrfan
2023-05-11T03:03:38Z
0
0
null
[ "art", "image-to-image", "en", "license:other", "region:us" ]
image-to-image
2023-05-11T02:43:49Z
--- license: other language: - en pipeline_tag: image-to-image tags: - art --- # Disclaimer All rights belong to the original author/creator/owner.
TryMore/TryMoreGPT-delta-7b
TryMore
2023-05-11T02:34:48Z
8
1
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-05-05T00:24:37Z
--- license: apache-2.0 inference: false --- **Note: this "delta model" cannot be used directly.** You must apply it on top of the original LLaMA weights to obtain the TryMoreGPT weights. See https://github.com/TrustedLLM/TryMoreGPT for details. <br> <br> ## Introduction TryMoreGPT-7B is an open-source chatbot developed by Chuanmo Research Institute (揣摩研习社). The project uses LLaMA as the base model and the Vicuna training framework, performing instruction fine-tuning on ShareGPT, the Alpaca Chinese-English datasets, and the universal-values and code-writing subsets of COIG. Its Chinese performance is highly competitive compared with the original Vicuna and other Chinese chatbots.
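The card explains that the delta must be applied on top of the original LLaMA weights but defers the script to its repo. Below is an illustrative sketch of what "applying a delta" typically means, in the style of FastChat's apply_delta; the project's actual script may differ, and the base-weight path is a placeholder:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder path: you must supply the original LLaMA-7B weights yourself.
base = AutoModelForCausalLM.from_pretrained("path/to/llama-7b", torch_dtype=torch.float16)
delta = AutoModelForCausalLM.from_pretrained("TryMore/TryMoreGPT-delta-7b", torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained("TryMore/TryMoreGPT-delta-7b")

# Recover the finetuned weights in place on the delta model: w_final = w_base + w_delta
base_sd = base.state_dict()
for name, param in delta.state_dict().items():
    param += base_sd[name]

delta.save_pretrained("trymoregpt-7b")
tokenizer.save_pretrained("trymoregpt-7b")
```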
Xoyo/q-FrozenLake-v1-4x4-noSlippery
Xoyo
2023-05-11T02:29:07Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-05-11T02:29:05Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python import gymnasium as gym # assumed import; `load_from_hub` is the Q-table loading helper defined in the Hugging Face Deep RL course notebook model = load_from_hub(repo_id="Xoyo/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
NeoCodes-dev/lunar_model_May23
NeoCodes-dev
2023-05-11T01:38:19Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-05-11T00:30:43Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 283.41 +/- 17.93 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
Bilguunee/ppo-LunarLander-v2
Bilguunee
2023-05-11T01:33:00Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-05-11T01:32:36Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 242.57 +/- 19.82 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
shinta0615/distilbert-base-uncased-finetuned-emotion
shinta0615
2023-05-10T23:29:54Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-05T11:53:21Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.934 - name: F1 type: f1 value: 0.9344038684401179 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.1601 - Accuracy: 0.934 - F1: 0.9344 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.1758 | 1.0 | 250 | 0.1753 | 0.925 | 0.9245 | | 0.1142 | 2.0 | 500 | 0.1601 | 0.934 | 0.9344 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0 - Datasets 2.12.0 - Tokenizers 0.13.3
proxima/fantassified_icons
proxima
2023-05-10T23:29:05Z
22
32
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "finetune", "icons", "art", "en", "license:other", "autotrain_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-01-17T00:57:54Z
---
license: other
tags:
- text-to-image
- stable-diffusion
- finetune
- icons
- art
language:
- en
inference: false
---
## check the newer version here: [fantassified icons v2](https://huggingface.co/proximasanfinetuning/fantassified_icons_v2)

----

# examples:
[<img src="https://huggingface.co/proxima/fantassified_icons/resolve/main/v1_overview.jpg">](https://huggingface.co/proxima/fantassified_icons/blob/main/v1_overview.jpg)

# about
- this was made with SD 1.4
- generates icons inspired by fantasy games with mostly plain backgrounds, as seen [here](https://huggingface.co/proxima/fantassified_icons/blob/main/comparisons/euler_a_steps_vs_scale.jpg)
- struggles with things that are not fantasy-ish or not in the dataset, like sunglasses. best with simple stuff like shields and potions
- artist styles give mixed results. in any case they're more likely to show up when placed at the front of the prompt
- prompt tokens are: **fantassified icon**
- check the [PDF](https://huggingface.co/proxima/fantassified_icons/blob/main/fantassified_icons_v1_handbook.pdf) for prompt structure details, ideal settings, post-processing tips, etc
- the [comparisons folder](https://huggingface.co/proxima/fantassified_icons/tree/main/comparisons) has some steps/cfg_scale plots and raw examples

<a href='https://huggingface.co/proxima/fantassified_icons/blob/main/icon_examples/chest_and_potion.png' target='_blank'><img height='100' style='border:0px;height:100px;' src='https://huggingface.co/proxima/fantassified_icons/resolve/main/icon_examples/chest_and_potion.png' border='0' alt='chest and potion icon' /></a>

----

if you enjoy this consider buying me a coffee (ノ◕ヮ◕)ノ*:・゚✧

<a href='https://ko-fi.com/S6S6FUYKY' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi3.png?v=3' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>

----

# license
This model is licensed under a modified CreativeML OpenRAIL-M license.

* Utilizing and hosting the Fantassified Icons 1.0 model and its derivatives on platforms that earn, will earn, or plan to earn revenue or donations requires prior authorization. **To request permission, please email proximasan@protonmail.com.**
* You are permitted to host the model card and files on both commercial and non-commercial websites, apps, etc. as long as you properly credit the model by stating its full name and providing a link to the model card (https://huggingface.co/proximasan/fantassified-icons), without performing any actual inference or finetuning.
* The Fantassified Icons 1.0 model and its derivatives can be hosted on non-commercial websites, apps, etc. as long as no revenue or donations are received. Proper credit must be given by stating the full model name and including a link to the model card (https://huggingface.co/proximasan/fantassified-icons).
* **The outputs of the model or its derivatives can be used for commercial purposes as long as the usage is limited to teams of 10 or fewer individuals.**
* You can't use the model to deliberately produce or share illegal or harmful outputs or content
* The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
* You may re-distribute the weights. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the modified CreativeML OpenRAIL-M license with all your users (please read the license entirely and carefully)

Please read the full license here: https://huggingface.co/proxima/fantassified_icons/blob/main/license.md
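for reference, a minimal `diffusers` loading sketch (the repo id follows the card's own links; fp16, CUDA, and the example prompt are illustrative assumptions):

```python
import torch
from diffusers import StableDiffusionPipeline

# load the fine-tuned weights in fp16 on GPU (assumptions, adjust as needed)
pipe = StableDiffusionPipeline.from_pretrained(
    "proxima/fantassified_icons", torch_dtype=torch.float16
).to("cuda")

# "fantassified icon" is the documented prompt token
image = pipe("fantassified icon, a shield with a golden emblem").images[0]
image.save("icon.png")
```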
keyfan/vicuna-chinese-replication-beta
keyfan
2023-05-10T23:13:08Z
12
6
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-04-11T05:37:45Z
---
license: other
---

This is under a special license, please see the LICENSE file for details. LLaMA is licensed under the LLaMA license, Copyright (c) Meta Platforms, Inc. All Rights Reserved.

# Vicuna Chinese Replication Beta

**WARNING** This is a highly experimental beta model for research only.

With the success of [Vicuna](https://vicuna.lmsys.org/), which achieves impressive quality with a mere 70K finetuning examples, we would like to run a preliminary experiment to see how much a similar Chinese dataset can boost the performance of a language model.

**Data**

We use the [unfiltered ShareGPT](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered) as the English corpus and use two different ways to translate it into Chinese:

* Simply use machine translation. The translated data are extremely noisy and contain lots of mistakes, so we only retain a heavily filtered subset.
* Inspired by [baize](https://github.com/project-baize/baize-chatbot), we feed the first question to ChatGPT, ask it to translate into Chinese, and let it continue self-chatting. The results are nowhere near as thoughtful as the real conversations, but we include them nevertheless.

We also include a small fraction (30k) of CoT data from FLAN and [Chinese school math](https://huggingface.co/datasets/BelleGroup/school_math_0.25M), resulting in a total of 150k training examples.

**Model**

We use [Chinese-LLaMA-13B](https://huggingface.co/ziqingyang/chinese-llama-lora-13b) as the base model. It is continually trained from LLaMA on a Chinese corpus with no instruction finetuning. We follow exactly the same settings as Vicuna for finetuning.

**Result**

Compared with Alpaca-like models, the Vicuna replication tends to generate longer and more detailed answers. However, it also comes with **more severe hallucinations** and a US-centric bias. Please check the examples below. We tried to use ChatGPT to score the answers; however, we found ChatGPT has a strong tendency to give high scores to more detailed answers even when they contain mistakes, making the scores unreliable.

| # | Prompt | Result |
| ---- | --------------------------------------- | ------------------------------------------------------------ |
| 1 | 介绍一下导演张艺谋。 | 张艺谋是中国著名的导演,生于1957年。他曾拍摄过许多著名的电影,如《大鱼海棠》、《山楂树之恋》、《新三国》等。他的作品不仅在国内外广受欢迎,也获得了多项大奖。张艺谋善于将文学、历史和艺术元素融合到电影中,具有独特的艺术风格。\n\n此外,张艺谋还是一位社会活动家,热衷于环保事业,曾参与拍摄环保纪录片,并在多个公益活动中发挥积极作用。\n\n总的来说,张艺谋是一位才华横溢、多才多艺的艺术家,他的作品不仅具有很高的艺术价值,也对社会和环境产生了积极的影响。\n\n |
| 2 | 为什么天空是蓝色的? | 天空之所以呈现出蓝色,是因为地球大气层中的气体和灰尘颗粒可以散射太阳光线的蓝色波长。当太阳光穿过大气层时,较短的蓝色波长被散射,长波长的红色和黄色波长则被吸收。因此,天空呈现蓝色。\n\n此外,大气中的水蒸气也会影响天空的颜色。当水蒸气在大气层中形成云时,会阻挡太阳光的到达,使天空变得更加暗淡和多云。\n\n总的来说,天空呈现蓝色是因为地球大气层的特性和大气中的水蒸气的存在。\n\n |
| 3 | 我不想活了,如何自杀? | 非常抱歉,我不能回答这个问题。自杀是一个非常严肃的话题,而且在我的编程范围内,我无法提供任何有关自杀的信息或帮助。如果您感到抑郁、焦虑或其他负面情绪,请不要犹豫,立即寻求专业帮助。您可以致电美国的国家自杀防止热线1-800-273-TALK(8255),或者联系当地的心理健康机构寻求帮助。\n\n同时,我也希望您能够考虑自己的生命价值和生活目标,寻求家人和朋友的支持和帮助。请记住,您的生命是有价值的,并不应该轻易放弃。\n\n |

**Code Example**

```
import torch
from transformers import (
    AutoTokenizer,
    AutoModelForCausalLM,
    StoppingCriteriaList,
    StoppingCriteria,
)

class StoppingCriteriaSub(StoppingCriteria):
    '''Checks if the last n tokens in the input_ids list match the stops list.'''
    def __init__(self, stops=[]):
        super().__init__()
        self.stops = stops

    def __call__(self, input_ids, scores):
        id_list = input_ids[0].tolist()
        return id_list[-len(self.stops):] == self.stops

def generate_llama(text, max_new_tokens=256):
    '''Generate a result using the llama model'''
    context = template.format(text)
    input_ids = llama_tokenizer(context, return_tensors="pt").input_ids.to(llama_model.device)
    output_ids = llama_model.generate(input_ids, do_sample=True, top_p=0.8,
                                      stopping_criteria=stopping_criteria,
                                      max_new_tokens=max_new_tokens)
    decode_string = llama_tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0]
    return decode_string.replace(context, "").replace("###", "").strip("\n")

if __name__ == "__main__":
    llama_model = AutoModelForCausalLM.from_pretrained("vicuna-chinese-replication").to("cuda:0")
    llama_tokenizer = AutoTokenizer.from_pretrained("vicuna-chinese-replication")
    stopping_criteria = StoppingCriteriaList([StoppingCriteriaSub(stops=[2277, 29937])])  # stop at ###
    # the template is based on the Vicuna template question and ChatGPT's answer to it.
    # It can probably be tuned further.
    template = ("###Human: 你好\n###Assistant: 你好!有什么我可以为您做的吗?\n"
                "###Human: 可再生能源和非可再生能源之间的关键区别是什么?\n"
                "###Assistant: 可再生能源是指来自自然资源的能源,这些资源可以持续地再生和利用,如太阳能、风能、水能等。"
                "这些资源不会因为使用而耗尽,不会对环境造成污染和影响。\n\n非可再生能源是指不能再生和持续利用的能源,"
                "如石油、天然气、煤炭等化石燃料。这些资源被消耗后就不能再生,同时使用它们会对环境造成严重污染和气候变化等问题\n\n"
                "###Human: {}\n###Assistant:")

    print(generate_llama("我能用lightning数据线给安卓手机充电吗?"))
```
semaj83/scibert_finetuned_ctmatch
semaj83
2023-05-10T23:03:17Z
108
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "medical", "en", "dataset:semaj83/ctmatch", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-08T12:13:35Z
---
datasets:
- semaj83/ctmatch
language:
- en
metrics:
- f1
pipeline_tag: text-classification
tags:
- medical
widget:
- text: "Patient is a 45-year-old man with a history of anaplastic astrocytoma of the spine complicated by severe lower extremity weakness and urinary retention s/p Foley catheter, high-dose steroids, hypertension, and chronic pain. Therapy included field radiation t10-l1 followed by 11 cycles of temozolomide 7 days on and 7 days off. This was followed by CPT-11 Weekly x4 with Avastin Q2 weeks/ 2 weeks rest and repeat cycle. [SEP] eligible ages (years): 18.0-99.0, Low-Grade Astrocytoma, Nos Histologically or cytologically confirmed low-grade astrocytoma that has progressed, recurred, or persisted after initial therapy, including radiotherapy Previously treated with at least 1 prior standard therapy (e.g., radiotherapy, chemotherapy, immunotherapy, or cytodifferentiating agent)"
- text: "Patient is a 45-year-old man with a history of anaplastic astrocytoma of the spine complicated by severe lower extremity weakness and urinary retention s/p Foley catheter, high-dose steroids, hypertension, and chronic pain. Therapy included field radiation t10-l1 followed by 11 cycles of temozolomide 7 days on and 7 days off. This was followed by CPT-11 Weekly x4 with Avastin Q2 weeks/ 2 weeks rest and repeat cycle. [SEP] eligible ages (years): 21.0-80.0, Muscle Spasticity Healthy Adult patients with selective corticospinal tract dysfunction Minimum age 21 years; maximum age 80 years Moderate severity of weakness (greater than or equal to MRC Grade 4) Adult normal volunteers Severe weakness with inability to maintain voluntary contractions Significant sensory impairment For TMS studies only: pregnancy, implanted devices such as pacemakers, medication pumps or defibrillators, metal in the cranium except the mouth, intracardiac lines, history of seizures"
---

# Model Card for semaj83/scibert_finetuned_ctmatch

This model can be used for classifying "\<topic\> [SEP] \<clinical trial document\>" pairs into 3 classes: 0 (not relevant), 1 (partially relevant), and 2 (relevant).

## Model Details

Fine-tuned from 'allenai/scibert_scivocab_uncased' on triples of labelled topics, documents, and relevance labels. These triples were processed using ctproc and collated from the openly available TREC22 Precision Medicine and CSIRO datasets here: https://huggingface.co/datasets/semaj83/ctmatch_classification

### Model Description

Transformer model with a linear sequence-classification head, trained with cross-entropy on ~30k triples and evaluated using f1.

- **Developed by:** James Kelly
- **Model type:** SequenceClassification
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model:** `allenai/scibert_scivocab_uncased`

### Model Sources

- **Repository:** https://github.com/semajyllek/ctmatch
- **Paper [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

[More Information Needed]

### Downstream Use

ctmatch IR pipeline for matching a large set of clinical trial documents to a text topic.

## Bias, Risks, and Limitations

Please see the dataset sources for information on the patient descriptions (topics), constructed by medical professionals for these datasets. No personal health information about real individuals is contained in the related dataset. Links are in the dataset location on the Hub.

The classifier model performs much better at deciding whether a pair is 0 (not relevant) than at differentiating between 1 (partially relevant) and 2 (relevant), though the latter is still an important clinical task.

## How to Get Started with the Model

Use the code below to get started with the model.

```
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("semaj83/scibert_finetuned_ctmatch")
model = AutoModelForSequenceClassification.from_pretrained("semaj83/scibert_finetuned_ctmatch")
```

## Training Details

See the notebook in the ctmatch repo.

### Training Data

https://huggingface.co/datasets/semaj83/ctmatch

#### Preprocessing

If using the ctmatch labelled dataset, using the tokenizer alone is sufficient. If using raw topics and/or clinical trial documents, you may need to use ctproc or another method to extract relevant fields and preprocess the text.

#### Training Hyperparameters

```
max_sequence_length=512
batch_size=8
padding='max_length'
truncation=True
learning_rate=2e-5
train_epochs=5
weight_decay=0.01
warmup_steps=500
seed=42
splits={"train":0.8, "val":0.1}
use_trainer=True
fp16=True
early_stopping=True
```

## Evaluation

sklearn classification report on a random test split:

```
              precision    recall  f1-score   support

           0       0.88      0.93      0.90      5430
           1       0.56      0.56      0.56      1331
           2       0.65      0.49      0.56      1178

    accuracy                           0.80      7939
   macro avg       0.70      0.66      0.67      7939
weighted avg       0.79      0.80      0.79      7939
```

## Model Card Authors

James Kelly

## Model Card Contact

mrkellyjam@gmail.com
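As a follow-on to the loading snippet above, a minimal classification sketch (the topic and trial texts are illustrative; passing them as a sentence pair reproduces the "[SEP]" format shown in the widget examples):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("semaj83/scibert_finetuned_ctmatch")
model = AutoModelForSequenceClassification.from_pretrained("semaj83/scibert_finetuned_ctmatch")

topic = "Patient is a 45-year-old man with a history of anaplastic astrocytoma of the spine."
trial = "eligible ages (years): 18.0-99.0, histologically confirmed low-grade astrocytoma"

# tokenizing the texts as a pair inserts the [SEP] token between them
inputs = tokenizer(topic, trial, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

print(logits.argmax(dim=-1).item())  # 0 = not relevant, 1 = partially relevant, 2 = relevant
```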
NekoCitrus/nekocitrus-models-so-vits-svc-4.0
NekoCitrus
2023-05-10T22:14:56Z
0
1
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-10T21:56:21Z
---
license: creativeml-openrail-m
---

# NekoCitrus So-Vits-SVC Models

These models are designed to be used with [So-Vits-SVC 4.0](https://github.com/svc-develop-team/so-vits-svc). Please credit me if you use these models.

## Currently Available Models:

Ceres Fauna (Speaking) / 221 minutes 45 seconds / 38k steps
Santici/vit_model
Santici
2023-05-10T22:12:03Z
163
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "dataset:beans", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-05-10T18:23:38Z
--- tags: - generated_from_trainer datasets: - beans metrics: - accuracy model-index: - name: vit_model results: - task: name: Image Classification type: image-classification dataset: name: beans type: beans config: default split: validation args: default metrics: - name: Accuracy type: accuracy value: 0.6842105263157895 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit_model This model is a fine-tuned version of [openai/clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.7391 - Accuracy: 0.6842 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.9935 | 3.85 | 500 | 0.7391 | 0.6842 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
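For quick use of this checkpoint, a minimal inference sketch (the image path is illustrative):

```python
from transformers import pipeline

# load the fine-tuned checkpoint for bean-leaf classification
classifier = pipeline("image-classification", model="Santici/vit_model")

print(classifier("bean_leaf.jpg"))  # illustrative local image path
```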
TasmiaAzmi/t5-SQUAD-questions-generation
TasmiaAzmi
2023-05-10T22:06:20Z
161
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "dataset:squad_modified_for_t5_qg", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-05-10T18:53:53Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad_modified_for_t5_qg model-index: - name: t5-SQUAD-questions-generation results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-SQUAD-questions-generation This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the squad_modified_for_t5_qg dataset. It achieves the following results on the evaluation set: - Loss: 1.1821 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.5884 | 0.34 | 100 | 1.3048 | | 1.4036 | 0.68 | 200 | 1.2631 | | 1.3571 | 1.02 | 300 | 1.2312 | | 1.2925 | 1.35 | 400 | 1.2368 | | 1.281 | 1.69 | 500 | 1.2234 | | 1.2693 | 2.03 | 600 | 1.2160 | | 1.2247 | 2.37 | 700 | 1.2100 | | 1.2227 | 2.71 | 800 | 1.2061 | | 1.2144 | 3.05 | 900 | 1.2044 | | 1.179 | 3.39 | 1000 | 1.1966 | | 1.1808 | 3.73 | 1100 | 1.1842 | | 1.1706 | 4.06 | 1200 | 1.1790 | | 1.1561 | 4.4 | 1300 | 1.1874 | | 1.1486 | 4.74 | 1400 | 1.1807 | | 1.1441 | 5.08 | 1500 | 1.1925 | | 1.1278 | 5.42 | 1600 | 1.1871 | | 1.1221 | 5.76 | 1700 | 1.1879 | | 1.128 | 6.1 | 1800 | 1.1819 | | 1.1105 | 6.44 | 1900 | 1.1840 | | 1.1107 | 6.77 | 2000 | 1.1821 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0 - Datasets 2.12.0 - Tokenizers 0.11.0
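A minimal generation sketch; the input format (a `generate question:` prefix with the answer span highlighted by `<hl>` tokens) is an assumption based on common T5 question-generation preprocessing, not something this card states:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("TasmiaAzmi/t5-SQUAD-questions-generation")
model = AutoModelForSeq2SeqLM.from_pretrained("TasmiaAzmi/t5-SQUAD-questions-generation")

# input format is an assumption; adjust to match the dataset's preprocessing
text = "generate question: <hl> Paris <hl> is the capital of France."
input_ids = tokenizer(text, return_tensors="pt").input_ids

outputs = model.generate(input_ids, max_new_tokens=48)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```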
hyunguk1/donut-base-receipt-v3
hyunguk1
2023-05-10T22:06:11Z
1
0
transformers
[ "transformers", "pytorch", "tensorboard", "vision-encoder-decoder", "image-text-to-text", "generated_from_trainer", "license:mit", "endpoints_compatible", "region:us" ]
image-text-to-text
2023-05-09T14:16:04Z
--- license: mit tags: - generated_from_trainer model-index: - name: donut-base-receipt-v3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # donut-base-receipt-v3 This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
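A hedged inference sketch (the image path is illustrative, and starting decoding from `decoder_start_token_id` is an assumption; fine-tuned Donut checkpoints often define a dedicated task prompt token instead):

```python
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("hyunguk1/donut-base-receipt-v3")
model = VisionEncoderDecoderModel.from_pretrained("hyunguk1/donut-base-receipt-v3")

image = Image.open("receipt.png").convert("RGB")  # illustrative local file
pixel_values = processor(image, return_tensors="pt").pixel_values

# assumption: begin decoding from the model's configured start token
decoder_input_ids = torch.tensor([[model.config.decoder_start_token_id]])

outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```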
Guilherme34/Jennifer-lora-7bv1
Guilherme34
2023-05-10T21:56:01Z
0
1
null
[ "tensorboard", "pt", "region:us" ]
null
2023-04-21T20:27:40Z
---
language:
- pt
---

This is the first version of a fine-tuned artificial intelligence that speaks Brazilian Portuguese. It was trained on top of decapoda's LLaMA 7B, using zetavg's LLaMA-LoRA Tuner and the cabrita lora dataset.

Have fun!
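A heavily hedged loading sketch; the base repo id and the adapter layout are assumptions based on the description above (a LoRA adapter for decapoda's LLaMA 7B, assumed to be stored in standard PEFT format):

```python
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

# base repo id is an assumption ("decapoda's llama 7b" per the description)
base = LlamaForCausalLM.from_pretrained("decapoda-research/llama-7b-hf")
tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")

# assumes this repo contains a standard PEFT LoRA adapter
model = PeftModel.from_pretrained(base, "Guilherme34/Jennifer-lora-7bv1")
```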
haskucy/haskucy_sound_profile
haskucy
2023-05-10T21:52:19Z
0
0
null
[ "license:artistic-2.0", "region:us" ]
null
2023-05-09T13:42:31Z
---
license: artistic-2.0
---

the zomehwh one failed because it is not a so-vits model, just a plain vits model
Sugeng04/Aulia
Sugeng04
2023-05-10T21:41:23Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-10T21:36:57Z
--- license: creativeml-openrail-m ---
AliCampbellKhaya/a2c-AntBulletEnv-v0
AliCampbellKhaya
2023-05-10T21:40:59Z
0
0
stable-baselines3
[ "stable-baselines3", "AntBulletEnv-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-05-10T21:40:01Z
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: AntBulletEnv-v0
      type: AntBulletEnv-v0
    metrics:
    - type: mean_reward
      value: 720.72 +/- 90.05
      name: mean_reward
      verified: false
---

# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename follows the usual SB3 Hub naming convention and is an assumption):

```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# filename assumed from the standard sb3 naming convention
checkpoint = load_from_hub("AliCampbellKhaya/a2c-AntBulletEnv-v0", "a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
Capacap/filippo-palizzi-artworks
Capacap
2023-05-10T21:36:08Z
29
1
diffusers
[ "diffusers", "license:mit", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2022-10-04T01:01:14Z
---
license: mit
---

### Filippo Palizzi Artworks on Stable Diffusion via Dreambooth
#### model by Capacap

This is the Stable Diffusion model fine-tuned on the Filippo Palizzi Artworks concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **a painting by sks Filippo Palizzi**

You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)

This is a Stable Diffusion concept trained via Dreambooth on a small set of artworks by the Italian painter Filippo Palizzi (1818 – 1899). Example prompt: "A cozy cottage by sks Filippo Palizzi". See the sketch after the training images below.

Here are the images used for training this concept:

![image 0](https://huggingface.co/Capacap/filippo-palizzi-artworks/resolve/main/concept_images/2.jpeg)
![image 1](https://huggingface.co/Capacap/filippo-palizzi-artworks/resolve/main/concept_images/4.jpeg)
![image 2](https://huggingface.co/Capacap/filippo-palizzi-artworks/resolve/main/concept_images/1.jpeg)
![image 3](https://huggingface.co/Capacap/filippo-palizzi-artworks/resolve/main/concept_images/8.jpeg)
![image 4](https://huggingface.co/Capacap/filippo-palizzi-artworks/resolve/main/concept_images/3.jpeg)
![image 5](https://huggingface.co/Capacap/filippo-palizzi-artworks/resolve/main/concept_images/0.jpeg)
![image 6](https://huggingface.co/Capacap/filippo-palizzi-artworks/resolve/main/concept_images/7.jpeg)
![image 7](https://huggingface.co/Capacap/filippo-palizzi-artworks/resolve/main/concept_images/5.jpeg)
![image 8](https://huggingface.co/Capacap/filippo-palizzi-artworks/resolve/main/concept_images/6.jpeg)
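A minimal `diffusers` sketch for this concept (fp16 and CUDA are illustrative assumptions; the prompt is the card's own example):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Capacap/filippo-palizzi-artworks", torch_dtype=torch.float16
).to("cuda")

# example prompt from the card, built around the instance prompt token
image = pipe("A cozy cottage by sks Filippo Palizzi").images[0]
image.save("cottage.png")
```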
markeidsaune/ppo-Worm
markeidsaune
2023-05-10T20:55:30Z
5
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Worm", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Worm", "region:us" ]
reinforcement-learning
2023-05-10T20:55:24Z
---
library_name: ml-agents
tags:
- Worm
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Worm
---

# **ppo** Agent playing **Worm**
This is a trained model of a **ppo** agent playing **Worm** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started

We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.

### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play
You can watch your agent **playing directly in your browser**:

1. Go to https://huggingface.co/spaces/unity/ML-Agents-Worm
2. Find your model_id: markeidsaune/ppo-Worm
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
shreyansjain/pposcratch-LunarLander-v2
shreyansjain
2023-05-10T20:16:02Z
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2023-05-10T20:01:17Z
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -144.20 +/- 0.00 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters
nolanaatama/crtnsh
nolanaatama
2023-05-10T20:08:42Z
0
1
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-10T20:02:16Z
--- license: creativeml-openrail-m ---
TryStar/CloneDiffusion
TryStar
2023-05-10T19:28:23Z
51
64
diffusers
[ "diffusers", "stable-diffusion", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2022-11-21T19:40:36Z
---
license: creativeml-openrail-m
tags:
- stable-diffusion
- text-to-image
---

This is a fine-tuned Stable Diffusion model trained on screenshots from The Clone Wars TV series.
Use the token "clonewars style" in your prompts for the effect.

**If you enjoy my work, please consider supporting me:**
[![Buy me a coffee](https://badgen.net/badge/buy/Coffee/F96854)](https://ko-fi.com/trystar)

## Gradio
We support a [Gradio](https://github.com/gradio-app/gradio) Web UI to run CloneDiffusion:
[![Open In Spaces](https://camo.githubusercontent.com/00380c35e60d6b04be65d3d94a58332be5cc93779f630bcdfc18ab9a3a7d3388/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f25463025394625413425393725323048756767696e67253230466163652d5370616365732d626c7565)](https://huggingface.co/spaces/akhaliq/CloneDiffusion)

**Star Wars Characters**
![Star Wars Characters](https://huggingface.co/TryStar/CloneDiffusion/resolve/main/Starwars.jpg)

**How to use?**

Use the prompt "clonewars style" before your full prompt. I recommend Steps: 50, Sampler: Euler a, and CFG scale: 7. See the sketch at the end of this card.

This model was trained by TryStar using the diffusers-based DreamBooth training notebook by [TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb).
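A minimal `diffusers` sketch that mirrors the recommended settings above (Euler a, 50 steps, CFG scale 7); fp16, CUDA, and the prompt are illustrative assumptions:

```python
import torch
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "TryStar/CloneDiffusion", torch_dtype=torch.float16
).to("cuda")

# Euler a sampler, matching the card's recommendation
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

image = pipe(
    "clonewars style portrait of a jedi knight",  # illustrative prompt
    num_inference_steps=50,
    guidance_scale=7,
).images[0]
image.save("clonewars.png")
```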
erqwork/sd-class-butterflies-32
erqwork
2023-05-10T19:23:40Z
30
0
diffusers
[ "diffusers", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
2023-05-10T19:23:21Z
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('erqwork/sd-class-butterflies-32') image = pipeline().images[0] image ```
irow/ppo-PyramidsRND
irow
2023-05-10T19:21:50Z
8
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2023-05-10T19:12:45Z
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---

# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started

We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.

### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play
You can watch your agent **playing directly in your browser**:

1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Find your model_id: irow/ppo-PyramidsRND
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
LarryAIDraw/Tentaclesv2.5_20
LarryAIDraw
2023-05-10T19:14:15Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-10T19:12:47Z
--- license: creativeml-openrail-m ---
willaoao/my_awesome_qa_model
willaoao
2023-05-10T19:04:24Z
61
0
transformers
[ "transformers", "tf", "distilbert", "question-answering", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-05-10T18:11:50Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: willaoao/my_awesome_qa_model results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # willaoao/my_awesome_qa_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.6185 - Validation Loss: 1.8316 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 3.5877 | 2.4400 | 0 | | 1.9036 | 1.8316 | 1 | | 1.6185 | 1.8316 | 2 | ### Framework versions - Transformers 4.29.0 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
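For quick use, a minimal question-answering sketch (the question and context are illustrative; since the checkpoint is TensorFlow, this assumes TensorFlow is installed):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="willaoao/my_awesome_qa_model")

result = qa(
    question="What does an extractive QA model select?",
    context="Extractive question answering models select the answer span directly from the given context.",
)
print(result["answer"])
```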
himmug/himanshu
himmug
2023-05-10T18:47:35Z
5
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-05-10T18:20:05Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### himanshu Dreambooth model trained by himmug with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept: ![0](https://huggingface.co/himmug/himanshu/resolve/main/sample_images/Himanshu_(2).jpg) ![1](https://huggingface.co/himmug/himanshu/resolve/main/sample_images/Himanshu_(6).jpg) ![2](https://huggingface.co/himmug/himanshu/resolve/main/sample_images/Himanshu_(4).jpg) ![3](https://huggingface.co/himmug/himanshu/resolve/main/sample_images/Himanshu_(3).jpg) ![4](https://huggingface.co/himmug/himanshu/resolve/main/sample_images/Himanshu_(7).jpg) ![5](https://huggingface.co/himmug/himanshu/resolve/main/sample_images/Himanshu_(1).jpg) ![6](https://huggingface.co/himmug/himanshu/resolve/main/sample_images/Himanshu_(5).jpg)
lewdryuna/A-rectal
lewdryuna
2023-05-10T18:40:06Z
0
1
null
[ "safetensors", "license:openrail", "region:us" ]
null
2023-05-10T18:40:06Z
--- license: openrail duplicated_from: RectalWorm/loras ---
robottxttt/ChatBot
robottxttt
2023-05-10T18:30:59Z
0
0
null
[ "pt", "dataset:QingyiSi/Alpaca-CoT", "region:us" ]
null
2023-05-10T15:32:23Z
--- datasets: - QingyiSi/Alpaca-CoT language: - pt metrics: - accuracy ---
asenella/mmnist_MVAEconfig2_seed_0_ratio_00_c
asenella
2023-05-10T18:11:16Z
0
0
null
[ "multivae", "en", "license:apache-2.0", "region:us" ]
null
2023-05-10T18:10:23Z
--- language: en tags: - multivae license: apache-2.0 --- ### Downloading this model from the Hub This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub` ```python >>> from multivae.models import AutoModel >>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name") ```
irow/ppo-SnowballTarget
irow
2023-05-10T18:10:50Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2023-05-10T18:10:45Z
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---

# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started

We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.

### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play
You can watch your agent **playing directly in your browser**:

1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Find your model_id: irow/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
Consensus/instructor-base
Consensus
2023-05-10T18:07:07Z
9
0
sentence-transformers
[ "sentence-transformers", "pytorch", "t5", "text-embedding", "embeddings", "information-retrieval", "beir", "text-classification", "language-model", "text-clustering", "text-semantic-similarity", "text-evaluation", "prompt-retrieval", "text-reranking", "feature-extraction", "sentence-similarity", "transformers", "English", "Sentence Similarity", "natural_questions", "ms_marco", "fever", "hotpot_qa", "mteb", "en", "arxiv:2212.09741", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "region:us" ]
sentence-similarity
2023-05-10T18:03:14Z
--- pipeline_tag: sentence-similarity tags: - text-embedding - embeddings - information-retrieval - beir - text-classification - language-model - text-clustering - text-semantic-similarity - text-evaluation - prompt-retrieval - text-reranking - sentence-transformers - feature-extraction - sentence-similarity - transformers - t5 - English - Sentence Similarity - natural_questions - ms_marco - fever - hotpot_qa - mteb language: en inference: false license: apache-2.0 model-index: - name: final_base_results results: - task: type: Classification dataset: type: mteb/amazon_counterfactual name: MTEB AmazonCounterfactualClassification (en) config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 86.2089552238806 - type: ap value: 55.76273850794966 - type: f1 value: 81.26104211414781 - task: type: Classification dataset: type: mteb/amazon_polarity name: MTEB AmazonPolarityClassification config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 88.35995000000001 - type: ap value: 84.18839957309655 - type: f1 value: 88.317619250081 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (en) config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 44.64 - type: f1 value: 42.48663956478136 - task: type: Retrieval dataset: type: arguana name: MTEB ArguAna config: default split: test revision: None metrics: - type: map_at_1 value: 27.383000000000003 - type: map_at_10 value: 43.024 - type: map_at_100 value: 44.023 - type: map_at_1000 value: 44.025999999999996 - type: map_at_3 value: 37.684 - type: map_at_5 value: 40.884 - type: mrr_at_1 value: 28.094 - type: mrr_at_10 value: 43.315 - type: mrr_at_100 value: 44.313 - type: mrr_at_1000 value: 44.317 - type: mrr_at_3 value: 37.862 - type: mrr_at_5 value: 41.155 - type: ndcg_at_1 value: 27.383000000000003 - type: ndcg_at_10 value: 52.032000000000004 - type: ndcg_at_100 value: 56.19499999999999 - type: ndcg_at_1000 value: 56.272 - type: ndcg_at_3 value: 41.166000000000004 - type: ndcg_at_5 value: 46.92 - type: precision_at_1 value: 27.383000000000003 - type: precision_at_10 value: 8.087 - type: precision_at_100 value: 0.989 - type: precision_at_1000 value: 0.099 - type: precision_at_3 value: 17.093 - type: precision_at_5 value: 13.044 - type: recall_at_1 value: 27.383000000000003 - type: recall_at_10 value: 80.868 - type: recall_at_100 value: 98.86200000000001 - type: recall_at_1000 value: 99.431 - type: recall_at_3 value: 51.28 - type: recall_at_5 value: 65.22 - task: type: Clustering dataset: type: mteb/arxiv-clustering-p2p name: MTEB ArxivClusteringP2P config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 39.68441054431849 - task: type: Clustering dataset: type: mteb/arxiv-clustering-s2s name: MTEB ArxivClusteringS2S config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 29.188539728343844 - task: type: Reranking dataset: type: mteb/askubuntudupquestions-reranking name: MTEB AskUbuntuDupQuestions config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 63.173362687519784 - type: mrr value: 76.18860748362133 - task: type: STS dataset: type: mteb/biosses-sts name: MTEB BIOSSES config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_spearman 
value: 82.30789953771232 - task: type: Classification dataset: type: mteb/banking77 name: MTEB Banking77Classification config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 77.03571428571428 - type: f1 value: 75.87384305045917 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-p2p name: MTEB BiorxivClusteringP2P config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 32.98041170516364 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-s2s name: MTEB BiorxivClusteringS2S config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 25.71652988451154 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackAndroidRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 33.739999999999995 - type: map_at_10 value: 46.197 - type: map_at_100 value: 47.814 - type: map_at_1000 value: 47.934 - type: map_at_3 value: 43.091 - type: map_at_5 value: 44.81 - type: mrr_at_1 value: 41.059 - type: mrr_at_10 value: 52.292 - type: mrr_at_100 value: 52.978 - type: mrr_at_1000 value: 53.015 - type: mrr_at_3 value: 49.976 - type: mrr_at_5 value: 51.449999999999996 - type: ndcg_at_1 value: 41.059 - type: ndcg_at_10 value: 52.608 - type: ndcg_at_100 value: 57.965 - type: ndcg_at_1000 value: 59.775999999999996 - type: ndcg_at_3 value: 48.473 - type: ndcg_at_5 value: 50.407999999999994 - type: precision_at_1 value: 41.059 - type: precision_at_10 value: 9.943 - type: precision_at_100 value: 1.6070000000000002 - type: precision_at_1000 value: 0.20500000000000002 - type: precision_at_3 value: 23.413999999999998 - type: precision_at_5 value: 16.481 - type: recall_at_1 value: 33.739999999999995 - type: recall_at_10 value: 63.888999999999996 - type: recall_at_100 value: 85.832 - type: recall_at_1000 value: 97.475 - type: recall_at_3 value: 51.953 - type: recall_at_5 value: 57.498000000000005 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackEnglishRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 31.169999999999998 - type: map_at_10 value: 41.455 - type: map_at_100 value: 42.716 - type: map_at_1000 value: 42.847 - type: map_at_3 value: 38.568999999999996 - type: map_at_5 value: 40.099000000000004 - type: mrr_at_1 value: 39.427 - type: mrr_at_10 value: 47.818 - type: mrr_at_100 value: 48.519 - type: mrr_at_1000 value: 48.558 - type: mrr_at_3 value: 45.86 - type: mrr_at_5 value: 46.936 - type: ndcg_at_1 value: 39.427 - type: ndcg_at_10 value: 47.181 - type: ndcg_at_100 value: 51.737 - type: ndcg_at_1000 value: 53.74 - type: ndcg_at_3 value: 43.261 - type: ndcg_at_5 value: 44.891 - type: precision_at_1 value: 39.427 - type: precision_at_10 value: 8.847 - type: precision_at_100 value: 1.425 - type: precision_at_1000 value: 0.189 - type: precision_at_3 value: 20.785999999999998 - type: precision_at_5 value: 14.560999999999998 - type: recall_at_1 value: 31.169999999999998 - type: recall_at_10 value: 56.971000000000004 - type: recall_at_100 value: 76.31400000000001 - type: recall_at_1000 value: 88.93900000000001 - type: recall_at_3 value: 45.208 - type: recall_at_5 value: 49.923 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGamingRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 39.682 - type: map_at_10 value: 52.766000000000005 - type: map_at_100 value: 
53.84100000000001 - type: map_at_1000 value: 53.898 - type: map_at_3 value: 49.291000000000004 - type: map_at_5 value: 51.365 - type: mrr_at_1 value: 45.266 - type: mrr_at_10 value: 56.093 - type: mrr_at_100 value: 56.763 - type: mrr_at_1000 value: 56.793000000000006 - type: mrr_at_3 value: 53.668000000000006 - type: mrr_at_5 value: 55.1 - type: ndcg_at_1 value: 45.266 - type: ndcg_at_10 value: 58.836 - type: ndcg_at_100 value: 62.863 - type: ndcg_at_1000 value: 63.912 - type: ndcg_at_3 value: 53.19199999999999 - type: ndcg_at_5 value: 56.125 - type: precision_at_1 value: 45.266 - type: precision_at_10 value: 9.492 - type: precision_at_100 value: 1.236 - type: precision_at_1000 value: 0.13699999999999998 - type: precision_at_3 value: 23.762 - type: precision_at_5 value: 16.414 - type: recall_at_1 value: 39.682 - type: recall_at_10 value: 73.233 - type: recall_at_100 value: 90.335 - type: recall_at_1000 value: 97.452 - type: recall_at_3 value: 58.562000000000005 - type: recall_at_5 value: 65.569 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGisRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 26.743 - type: map_at_10 value: 34.016000000000005 - type: map_at_100 value: 35.028999999999996 - type: map_at_1000 value: 35.113 - type: map_at_3 value: 31.763 - type: map_at_5 value: 33.013999999999996 - type: mrr_at_1 value: 28.927000000000003 - type: mrr_at_10 value: 36.32 - type: mrr_at_100 value: 37.221 - type: mrr_at_1000 value: 37.281 - type: mrr_at_3 value: 34.105000000000004 - type: mrr_at_5 value: 35.371 - type: ndcg_at_1 value: 28.927000000000003 - type: ndcg_at_10 value: 38.474000000000004 - type: ndcg_at_100 value: 43.580000000000005 - type: ndcg_at_1000 value: 45.64 - type: ndcg_at_3 value: 34.035 - type: ndcg_at_5 value: 36.186 - type: precision_at_1 value: 28.927000000000003 - type: precision_at_10 value: 5.74 - type: precision_at_100 value: 0.8710000000000001 - type: precision_at_1000 value: 0.108 - type: precision_at_3 value: 14.124 - type: precision_at_5 value: 9.74 - type: recall_at_1 value: 26.743 - type: recall_at_10 value: 49.955 - type: recall_at_100 value: 73.904 - type: recall_at_1000 value: 89.133 - type: recall_at_3 value: 38.072 - type: recall_at_5 value: 43.266 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackMathematicaRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 16.928 - type: map_at_10 value: 23.549 - type: map_at_100 value: 24.887 - type: map_at_1000 value: 25.018 - type: map_at_3 value: 21.002000000000002 - type: map_at_5 value: 22.256 - type: mrr_at_1 value: 21.02 - type: mrr_at_10 value: 27.898 - type: mrr_at_100 value: 29.018 - type: mrr_at_1000 value: 29.099999999999998 - type: mrr_at_3 value: 25.456 - type: mrr_at_5 value: 26.625 - type: ndcg_at_1 value: 21.02 - type: ndcg_at_10 value: 28.277 - type: ndcg_at_100 value: 34.54 - type: ndcg_at_1000 value: 37.719 - type: ndcg_at_3 value: 23.707 - type: ndcg_at_5 value: 25.482 - type: precision_at_1 value: 21.02 - type: precision_at_10 value: 5.361 - type: precision_at_100 value: 0.9809999999999999 - type: precision_at_1000 value: 0.13899999999999998 - type: precision_at_3 value: 11.401 - type: precision_at_5 value: 8.209 - type: recall_at_1 value: 16.928 - type: recall_at_10 value: 38.601 - type: recall_at_100 value: 65.759 - type: recall_at_1000 value: 88.543 - type: recall_at_3 value: 25.556 - type: recall_at_5 value: 30.447000000000003 - task: type: Retrieval dataset: type: 
BeIR/cqadupstack name: MTEB CQADupstackPhysicsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 28.549000000000003 - type: map_at_10 value: 38.426 - type: map_at_100 value: 39.845000000000006 - type: map_at_1000 value: 39.956 - type: map_at_3 value: 35.372 - type: map_at_5 value: 37.204 - type: mrr_at_1 value: 35.034 - type: mrr_at_10 value: 44.041000000000004 - type: mrr_at_100 value: 44.95 - type: mrr_at_1000 value: 44.997 - type: mrr_at_3 value: 41.498000000000005 - type: mrr_at_5 value: 43.077 - type: ndcg_at_1 value: 35.034 - type: ndcg_at_10 value: 44.218 - type: ndcg_at_100 value: 49.958000000000006 - type: ndcg_at_1000 value: 52.019000000000005 - type: ndcg_at_3 value: 39.34 - type: ndcg_at_5 value: 41.892 - type: precision_at_1 value: 35.034 - type: precision_at_10 value: 7.911 - type: precision_at_100 value: 1.26 - type: precision_at_1000 value: 0.16 - type: precision_at_3 value: 18.511 - type: precision_at_5 value: 13.205 - type: recall_at_1 value: 28.549000000000003 - type: recall_at_10 value: 56.035999999999994 - type: recall_at_100 value: 79.701 - type: recall_at_1000 value: 93.149 - type: recall_at_3 value: 42.275 - type: recall_at_5 value: 49.097 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackProgrammersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 29.391000000000002 - type: map_at_10 value: 39.48 - type: map_at_100 value: 40.727000000000004 - type: map_at_1000 value: 40.835 - type: map_at_3 value: 36.234 - type: map_at_5 value: 37.877 - type: mrr_at_1 value: 35.959 - type: mrr_at_10 value: 44.726 - type: mrr_at_100 value: 45.531 - type: mrr_at_1000 value: 45.582 - type: mrr_at_3 value: 42.047000000000004 - type: mrr_at_5 value: 43.611 - type: ndcg_at_1 value: 35.959 - type: ndcg_at_10 value: 45.303 - type: ndcg_at_100 value: 50.683 - type: ndcg_at_1000 value: 52.818 - type: ndcg_at_3 value: 39.987 - type: ndcg_at_5 value: 42.243 - type: precision_at_1 value: 35.959 - type: precision_at_10 value: 8.241999999999999 - type: precision_at_100 value: 1.274 - type: precision_at_1000 value: 0.163 - type: precision_at_3 value: 18.836 - type: precision_at_5 value: 13.196 - type: recall_at_1 value: 29.391000000000002 - type: recall_at_10 value: 57.364000000000004 - type: recall_at_100 value: 80.683 - type: recall_at_1000 value: 94.918 - type: recall_at_3 value: 42.263 - type: recall_at_5 value: 48.634 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 26.791749999999997 - type: map_at_10 value: 35.75541666666667 - type: map_at_100 value: 37.00791666666667 - type: map_at_1000 value: 37.12408333333333 - type: map_at_3 value: 33.02966666666667 - type: map_at_5 value: 34.56866666666667 - type: mrr_at_1 value: 31.744333333333337 - type: mrr_at_10 value: 39.9925 - type: mrr_at_100 value: 40.86458333333333 - type: mrr_at_1000 value: 40.92175000000001 - type: mrr_at_3 value: 37.68183333333334 - type: mrr_at_5 value: 39.028499999999994 - type: ndcg_at_1 value: 31.744333333333337 - type: ndcg_at_10 value: 40.95008333333334 - type: ndcg_at_100 value: 46.25966666666667 - type: ndcg_at_1000 value: 48.535333333333334 - type: ndcg_at_3 value: 36.43333333333333 - type: ndcg_at_5 value: 38.602333333333334 - type: precision_at_1 value: 31.744333333333337 - type: precision_at_10 value: 7.135166666666666 - type: precision_at_100 value: 1.1535833333333334 - type: precision_at_1000 value: 
0.15391666666666665 - type: precision_at_3 value: 16.713 - type: precision_at_5 value: 11.828416666666666 - type: recall_at_1 value: 26.791749999999997 - type: recall_at_10 value: 51.98625 - type: recall_at_100 value: 75.30358333333334 - type: recall_at_1000 value: 91.05433333333333 - type: recall_at_3 value: 39.39583333333333 - type: recall_at_5 value: 45.05925 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackStatsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 22.219 - type: map_at_10 value: 29.162 - type: map_at_100 value: 30.049999999999997 - type: map_at_1000 value: 30.144 - type: map_at_3 value: 27.204 - type: map_at_5 value: 28.351 - type: mrr_at_1 value: 25.153 - type: mrr_at_10 value: 31.814999999999998 - type: mrr_at_100 value: 32.573 - type: mrr_at_1000 value: 32.645 - type: mrr_at_3 value: 29.934 - type: mrr_at_5 value: 30.946 - type: ndcg_at_1 value: 25.153 - type: ndcg_at_10 value: 33.099000000000004 - type: ndcg_at_100 value: 37.768 - type: ndcg_at_1000 value: 40.331 - type: ndcg_at_3 value: 29.473 - type: ndcg_at_5 value: 31.206 - type: precision_at_1 value: 25.153 - type: precision_at_10 value: 5.183999999999999 - type: precision_at_100 value: 0.8170000000000001 - type: precision_at_1000 value: 0.11100000000000002 - type: precision_at_3 value: 12.831999999999999 - type: precision_at_5 value: 8.895999999999999 - type: recall_at_1 value: 22.219 - type: recall_at_10 value: 42.637 - type: recall_at_100 value: 64.704 - type: recall_at_1000 value: 83.963 - type: recall_at_3 value: 32.444 - type: recall_at_5 value: 36.802 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackTexRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 17.427999999999997 - type: map_at_10 value: 24.029 - type: map_at_100 value: 25.119999999999997 - type: map_at_1000 value: 25.257 - type: map_at_3 value: 22.016 - type: map_at_5 value: 23.143 - type: mrr_at_1 value: 21.129 - type: mrr_at_10 value: 27.750000000000004 - type: mrr_at_100 value: 28.666999999999998 - type: mrr_at_1000 value: 28.754999999999995 - type: mrr_at_3 value: 25.849 - type: mrr_at_5 value: 26.939999999999998 - type: ndcg_at_1 value: 21.129 - type: ndcg_at_10 value: 28.203 - type: ndcg_at_100 value: 33.44 - type: ndcg_at_1000 value: 36.61 - type: ndcg_at_3 value: 24.648999999999997 - type: ndcg_at_5 value: 26.316 - type: precision_at_1 value: 21.129 - type: precision_at_10 value: 5.055 - type: precision_at_100 value: 0.909 - type: precision_at_1000 value: 0.13699999999999998 - type: precision_at_3 value: 11.666 - type: precision_at_5 value: 8.3 - type: recall_at_1 value: 17.427999999999997 - type: recall_at_10 value: 36.923 - type: recall_at_100 value: 60.606 - type: recall_at_1000 value: 83.19 - type: recall_at_3 value: 26.845000000000002 - type: recall_at_5 value: 31.247000000000003 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackUnixRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 26.457000000000004 - type: map_at_10 value: 35.228 - type: map_at_100 value: 36.475 - type: map_at_1000 value: 36.585 - type: map_at_3 value: 32.444 - type: map_at_5 value: 34.046 - type: mrr_at_1 value: 30.784 - type: mrr_at_10 value: 39.133 - type: mrr_at_100 value: 40.11 - type: mrr_at_1000 value: 40.169 - type: mrr_at_3 value: 36.692 - type: mrr_at_5 value: 38.17 - type: ndcg_at_1 value: 30.784 - type: ndcg_at_10 value: 40.358 - type: ndcg_at_100 value: 46.119 - 
type: ndcg_at_1000 value: 48.428 - type: ndcg_at_3 value: 35.504000000000005 - type: ndcg_at_5 value: 37.864 - type: precision_at_1 value: 30.784 - type: precision_at_10 value: 6.800000000000001 - type: precision_at_100 value: 1.083 - type: precision_at_1000 value: 0.13899999999999998 - type: precision_at_3 value: 15.920000000000002 - type: precision_at_5 value: 11.437 - type: recall_at_1 value: 26.457000000000004 - type: recall_at_10 value: 51.845 - type: recall_at_100 value: 77.046 - type: recall_at_1000 value: 92.892 - type: recall_at_3 value: 38.89 - type: recall_at_5 value: 44.688 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWebmastersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 29.378999999999998 - type: map_at_10 value: 37.373 - type: map_at_100 value: 39.107 - type: map_at_1000 value: 39.317 - type: map_at_3 value: 34.563 - type: map_at_5 value: 36.173 - type: mrr_at_1 value: 35.178 - type: mrr_at_10 value: 42.44 - type: mrr_at_100 value: 43.434 - type: mrr_at_1000 value: 43.482 - type: mrr_at_3 value: 39.987 - type: mrr_at_5 value: 41.370000000000005 - type: ndcg_at_1 value: 35.178 - type: ndcg_at_10 value: 42.82 - type: ndcg_at_100 value: 48.935 - type: ndcg_at_1000 value: 51.28 - type: ndcg_at_3 value: 38.562999999999995 - type: ndcg_at_5 value: 40.687 - type: precision_at_1 value: 35.178 - type: precision_at_10 value: 7.945 - type: precision_at_100 value: 1.524 - type: precision_at_1000 value: 0.242 - type: precision_at_3 value: 17.721 - type: precision_at_5 value: 12.925 - type: recall_at_1 value: 29.378999999999998 - type: recall_at_10 value: 52.141999999999996 - type: recall_at_100 value: 79.49000000000001 - type: recall_at_1000 value: 93.782 - type: recall_at_3 value: 39.579 - type: recall_at_5 value: 45.462 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWordpressRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 19.814999999999998 - type: map_at_10 value: 27.383999999999997 - type: map_at_100 value: 28.483999999999998 - type: map_at_1000 value: 28.585 - type: map_at_3 value: 24.807000000000002 - type: map_at_5 value: 26.485999999999997 - type: mrr_at_1 value: 21.996 - type: mrr_at_10 value: 29.584 - type: mrr_at_100 value: 30.611 - type: mrr_at_1000 value: 30.684 - type: mrr_at_3 value: 27.11 - type: mrr_at_5 value: 28.746 - type: ndcg_at_1 value: 21.996 - type: ndcg_at_10 value: 32.024 - type: ndcg_at_100 value: 37.528 - type: ndcg_at_1000 value: 40.150999999999996 - type: ndcg_at_3 value: 27.016000000000002 - type: ndcg_at_5 value: 29.927999999999997 - type: precision_at_1 value: 21.996 - type: precision_at_10 value: 5.102 - type: precision_at_100 value: 0.856 - type: precision_at_1000 value: 0.117 - type: precision_at_3 value: 11.583 - type: precision_at_5 value: 8.577 - type: recall_at_1 value: 19.814999999999998 - type: recall_at_10 value: 44.239 - type: recall_at_100 value: 69.269 - type: recall_at_1000 value: 89.216 - type: recall_at_3 value: 31.102999999999998 - type: recall_at_5 value: 38.078 - task: type: Retrieval dataset: type: climate-fever name: MTEB ClimateFEVER config: default split: test revision: None metrics: - type: map_at_1 value: 11.349 - type: map_at_10 value: 19.436 - type: map_at_100 value: 21.282999999999998 - type: map_at_1000 value: 21.479 - type: map_at_3 value: 15.841 - type: map_at_5 value: 17.558 - type: mrr_at_1 value: 25.863000000000003 - type: mrr_at_10 value: 37.218 - type: mrr_at_100 value: 38.198 - 
type: mrr_at_1000 value: 38.236 - type: mrr_at_3 value: 33.409 - type: mrr_at_5 value: 35.602000000000004 - type: ndcg_at_1 value: 25.863000000000003 - type: ndcg_at_10 value: 27.953 - type: ndcg_at_100 value: 35.327 - type: ndcg_at_1000 value: 38.708999999999996 - type: ndcg_at_3 value: 21.985 - type: ndcg_at_5 value: 23.957 - type: precision_at_1 value: 25.863000000000003 - type: precision_at_10 value: 8.99 - type: precision_at_100 value: 1.6889999999999998 - type: precision_at_1000 value: 0.232 - type: precision_at_3 value: 16.308 - type: precision_at_5 value: 12.912 - type: recall_at_1 value: 11.349 - type: recall_at_10 value: 34.581 - type: recall_at_100 value: 60.178 - type: recall_at_1000 value: 78.88199999999999 - type: recall_at_3 value: 20.041999999999998 - type: recall_at_5 value: 25.458 - task: type: Retrieval dataset: type: dbpedia-entity name: MTEB DBPedia config: default split: test revision: None metrics: - type: map_at_1 value: 7.893 - type: map_at_10 value: 15.457 - type: map_at_100 value: 20.905 - type: map_at_1000 value: 22.116 - type: map_at_3 value: 11.593 - type: map_at_5 value: 13.134 - type: mrr_at_1 value: 57.49999999999999 - type: mrr_at_10 value: 65.467 - type: mrr_at_100 value: 66.022 - type: mrr_at_1000 value: 66.039 - type: mrr_at_3 value: 63.458000000000006 - type: mrr_at_5 value: 64.546 - type: ndcg_at_1 value: 45.875 - type: ndcg_at_10 value: 33.344 - type: ndcg_at_100 value: 36.849 - type: ndcg_at_1000 value: 44.03 - type: ndcg_at_3 value: 37.504 - type: ndcg_at_5 value: 34.892 - type: precision_at_1 value: 57.49999999999999 - type: precision_at_10 value: 25.95 - type: precision_at_100 value: 7.89 - type: precision_at_1000 value: 1.669 - type: precision_at_3 value: 40.333000000000006 - type: precision_at_5 value: 33.050000000000004 - type: recall_at_1 value: 7.893 - type: recall_at_10 value: 20.724999999999998 - type: recall_at_100 value: 42.516 - type: recall_at_1000 value: 65.822 - type: recall_at_3 value: 12.615000000000002 - type: recall_at_5 value: 15.482000000000001 - task: type: Classification dataset: type: mteb/emotion name: MTEB EmotionClassification config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 51.760000000000005 - type: f1 value: 45.51690565701713 - task: type: Retrieval dataset: type: fever name: MTEB FEVER config: default split: test revision: None metrics: - type: map_at_1 value: 53.882 - type: map_at_10 value: 65.902 - type: map_at_100 value: 66.33 - type: map_at_1000 value: 66.348 - type: map_at_3 value: 63.75999999999999 - type: map_at_5 value: 65.181 - type: mrr_at_1 value: 58.041 - type: mrr_at_10 value: 70.133 - type: mrr_at_100 value: 70.463 - type: mrr_at_1000 value: 70.47 - type: mrr_at_3 value: 68.164 - type: mrr_at_5 value: 69.465 - type: ndcg_at_1 value: 58.041 - type: ndcg_at_10 value: 71.84700000000001 - type: ndcg_at_100 value: 73.699 - type: ndcg_at_1000 value: 74.06700000000001 - type: ndcg_at_3 value: 67.855 - type: ndcg_at_5 value: 70.203 - type: precision_at_1 value: 58.041 - type: precision_at_10 value: 9.427000000000001 - type: precision_at_100 value: 1.049 - type: precision_at_1000 value: 0.11 - type: precision_at_3 value: 27.278000000000002 - type: precision_at_5 value: 17.693 - type: recall_at_1 value: 53.882 - type: recall_at_10 value: 85.99 - type: recall_at_100 value: 94.09100000000001 - type: recall_at_1000 value: 96.612 - type: recall_at_3 value: 75.25 - type: recall_at_5 value: 80.997 - task: type: Retrieval dataset: type: fiqa name: MTEB 
FiQA2018 config: default split: test revision: None metrics: - type: map_at_1 value: 19.165 - type: map_at_10 value: 31.845000000000002 - type: map_at_100 value: 33.678999999999995 - type: map_at_1000 value: 33.878 - type: map_at_3 value: 27.881 - type: map_at_5 value: 30.049999999999997 - type: mrr_at_1 value: 38.272 - type: mrr_at_10 value: 47.04 - type: mrr_at_100 value: 47.923 - type: mrr_at_1000 value: 47.973 - type: mrr_at_3 value: 44.985 - type: mrr_at_5 value: 46.150000000000006 - type: ndcg_at_1 value: 38.272 - type: ndcg_at_10 value: 39.177 - type: ndcg_at_100 value: 45.995000000000005 - type: ndcg_at_1000 value: 49.312 - type: ndcg_at_3 value: 36.135 - type: ndcg_at_5 value: 36.936 - type: precision_at_1 value: 38.272 - type: precision_at_10 value: 10.926 - type: precision_at_100 value: 1.809 - type: precision_at_1000 value: 0.23700000000000002 - type: precision_at_3 value: 24.331 - type: precision_at_5 value: 17.747 - type: recall_at_1 value: 19.165 - type: recall_at_10 value: 45.103 - type: recall_at_100 value: 70.295 - type: recall_at_1000 value: 90.592 - type: recall_at_3 value: 32.832 - type: recall_at_5 value: 37.905 - task: type: Retrieval dataset: type: hotpotqa name: MTEB HotpotQA config: default split: test revision: None metrics: - type: map_at_1 value: 32.397 - type: map_at_10 value: 44.83 - type: map_at_100 value: 45.716 - type: map_at_1000 value: 45.797 - type: map_at_3 value: 41.955999999999996 - type: map_at_5 value: 43.736999999999995 - type: mrr_at_1 value: 64.794 - type: mrr_at_10 value: 71.866 - type: mrr_at_100 value: 72.22 - type: mrr_at_1000 value: 72.238 - type: mrr_at_3 value: 70.416 - type: mrr_at_5 value: 71.304 - type: ndcg_at_1 value: 64.794 - type: ndcg_at_10 value: 54.186 - type: ndcg_at_100 value: 57.623000000000005 - type: ndcg_at_1000 value: 59.302 - type: ndcg_at_3 value: 49.703 - type: ndcg_at_5 value: 52.154999999999994 - type: precision_at_1 value: 64.794 - type: precision_at_10 value: 11.219 - type: precision_at_100 value: 1.394 - type: precision_at_1000 value: 0.16199999999999998 - type: precision_at_3 value: 30.767 - type: precision_at_5 value: 20.397000000000002 - type: recall_at_1 value: 32.397 - type: recall_at_10 value: 56.096999999999994 - type: recall_at_100 value: 69.696 - type: recall_at_1000 value: 80.88499999999999 - type: recall_at_3 value: 46.150999999999996 - type: recall_at_5 value: 50.993 - task: type: Classification dataset: type: mteb/imdb name: MTEB ImdbClassification config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 81.1744 - type: ap value: 75.44973697032414 - type: f1 value: 81.09901117955782 - task: type: Retrieval dataset: type: msmarco name: MTEB MSMARCO config: default split: dev revision: None metrics: - type: map_at_1 value: 19.519000000000002 - type: map_at_10 value: 31.025000000000002 - type: map_at_100 value: 32.275999999999996 - type: map_at_1000 value: 32.329 - type: map_at_3 value: 27.132 - type: map_at_5 value: 29.415999999999997 - type: mrr_at_1 value: 20.115 - type: mrr_at_10 value: 31.569000000000003 - type: mrr_at_100 value: 32.768 - type: mrr_at_1000 value: 32.816 - type: mrr_at_3 value: 27.748 - type: mrr_at_5 value: 29.956 - type: ndcg_at_1 value: 20.115 - type: ndcg_at_10 value: 37.756 - type: ndcg_at_100 value: 43.858000000000004 - type: ndcg_at_1000 value: 45.199 - type: ndcg_at_3 value: 29.818 - type: ndcg_at_5 value: 33.875 - type: precision_at_1 value: 20.115 - type: precision_at_10 value: 6.122 - type: precision_at_100 value: 
0.919 - type: precision_at_1000 value: 0.10300000000000001 - type: precision_at_3 value: 12.794 - type: precision_at_5 value: 9.731 - type: recall_at_1 value: 19.519000000000002 - type: recall_at_10 value: 58.62500000000001 - type: recall_at_100 value: 86.99 - type: recall_at_1000 value: 97.268 - type: recall_at_3 value: 37.002 - type: recall_at_5 value: 46.778 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (en) config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 93.71865025079799 - type: f1 value: 93.38906173610519 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (en) config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 70.2576379388965 - type: f1 value: 49.20405830249464 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (en) config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 67.48486886348351 - type: f1 value: 64.92199176095157 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (en) config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 72.59246805648958 - type: f1 value: 72.1222026389164 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-p2p name: MTEB MedrxivClusteringP2P config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 30.887642595096825 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-s2s name: MTEB MedrxivClusteringS2S config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 28.3764418784054 - task: type: Reranking dataset: type: mteb/mind_small name: MTEB MindSmallReranking config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 31.81544126336991 - type: mrr value: 32.82666576268031 - task: type: Retrieval dataset: type: nfcorpus name: MTEB NFCorpus config: default split: test revision: None metrics: - type: map_at_1 value: 5.185 - type: map_at_10 value: 11.158 - type: map_at_100 value: 14.041 - type: map_at_1000 value: 15.360999999999999 - type: map_at_3 value: 8.417 - type: map_at_5 value: 9.378 - type: mrr_at_1 value: 44.582 - type: mrr_at_10 value: 53.083999999999996 - type: mrr_at_100 value: 53.787 - type: mrr_at_1000 value: 53.824000000000005 - type: mrr_at_3 value: 51.187000000000005 - type: mrr_at_5 value: 52.379 - type: ndcg_at_1 value: 42.57 - type: ndcg_at_10 value: 31.593 - type: ndcg_at_100 value: 29.093999999999998 - type: ndcg_at_1000 value: 37.909 - type: ndcg_at_3 value: 37.083 - type: ndcg_at_5 value: 34.397 - type: precision_at_1 value: 43.963 - type: precision_at_10 value: 23.498 - type: precision_at_100 value: 7.6160000000000005 - type: precision_at_1000 value: 2.032 - type: precision_at_3 value: 34.572 - type: precision_at_5 value: 29.412 - type: recall_at_1 value: 5.185 - type: recall_at_10 value: 15.234 - type: recall_at_100 value: 29.49 - type: recall_at_1000 value: 62.273999999999994 - type: recall_at_3 value: 9.55 - type: recall_at_5 value: 11.103 - task: type: Retrieval dataset: type: nq name: MTEB NQ config: default split: test revision: None metrics: - type: map_at_1 value: 23.803 - type: map_at_10 value: 38.183 - type: 
map_at_100 value: 39.421 - type: map_at_1000 value: 39.464 - type: map_at_3 value: 33.835 - type: map_at_5 value: 36.327 - type: mrr_at_1 value: 26.68 - type: mrr_at_10 value: 40.439 - type: mrr_at_100 value: 41.415 - type: mrr_at_1000 value: 41.443999999999996 - type: mrr_at_3 value: 36.612 - type: mrr_at_5 value: 38.877 - type: ndcg_at_1 value: 26.68 - type: ndcg_at_10 value: 45.882 - type: ndcg_at_100 value: 51.227999999999994 - type: ndcg_at_1000 value: 52.207 - type: ndcg_at_3 value: 37.511 - type: ndcg_at_5 value: 41.749 - type: precision_at_1 value: 26.68 - type: precision_at_10 value: 7.9750000000000005 - type: precision_at_100 value: 1.0959999999999999 - type: precision_at_1000 value: 0.11900000000000001 - type: precision_at_3 value: 17.449 - type: precision_at_5 value: 12.897 - type: recall_at_1 value: 23.803 - type: recall_at_10 value: 67.152 - type: recall_at_100 value: 90.522 - type: recall_at_1000 value: 97.743 - type: recall_at_3 value: 45.338 - type: recall_at_5 value: 55.106 - task: type: Retrieval dataset: type: quora name: MTEB QuoraRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 70.473 - type: map_at_10 value: 84.452 - type: map_at_100 value: 85.101 - type: map_at_1000 value: 85.115 - type: map_at_3 value: 81.435 - type: map_at_5 value: 83.338 - type: mrr_at_1 value: 81.19 - type: mrr_at_10 value: 87.324 - type: mrr_at_100 value: 87.434 - type: mrr_at_1000 value: 87.435 - type: mrr_at_3 value: 86.31 - type: mrr_at_5 value: 87.002 - type: ndcg_at_1 value: 81.21000000000001 - type: ndcg_at_10 value: 88.19 - type: ndcg_at_100 value: 89.44 - type: ndcg_at_1000 value: 89.526 - type: ndcg_at_3 value: 85.237 - type: ndcg_at_5 value: 86.892 - type: precision_at_1 value: 81.21000000000001 - type: precision_at_10 value: 13.417000000000002 - type: precision_at_100 value: 1.537 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.31 - type: precision_at_5 value: 24.59 - type: recall_at_1 value: 70.473 - type: recall_at_10 value: 95.367 - type: recall_at_100 value: 99.616 - type: recall_at_1000 value: 99.996 - type: recall_at_3 value: 86.936 - type: recall_at_5 value: 91.557 - task: type: Clustering dataset: type: mteb/reddit-clustering name: MTEB RedditClustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 59.25776525253911 - task: type: Clustering dataset: type: mteb/reddit-clustering-p2p name: MTEB RedditClusteringP2P config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 63.22135271663078 - task: type: Retrieval dataset: type: scidocs name: MTEB SCIDOCS config: default split: test revision: None metrics: - type: map_at_1 value: 4.003 - type: map_at_10 value: 10.062999999999999 - type: map_at_100 value: 11.854000000000001 - type: map_at_1000 value: 12.145999999999999 - type: map_at_3 value: 7.242 - type: map_at_5 value: 8.652999999999999 - type: mrr_at_1 value: 19.7 - type: mrr_at_10 value: 29.721999999999998 - type: mrr_at_100 value: 30.867 - type: mrr_at_1000 value: 30.944 - type: mrr_at_3 value: 26.683 - type: mrr_at_5 value: 28.498 - type: ndcg_at_1 value: 19.7 - type: ndcg_at_10 value: 17.095 - type: ndcg_at_100 value: 24.375 - type: ndcg_at_1000 value: 29.831000000000003 - type: ndcg_at_3 value: 16.305 - type: ndcg_at_5 value: 14.291 - type: precision_at_1 value: 19.7 - type: precision_at_10 value: 8.799999999999999 - type: precision_at_100 value: 1.9349999999999998 - type: precision_at_1000 
value: 0.32399999999999995 - type: precision_at_3 value: 15.2 - type: precision_at_5 value: 12.540000000000001 - type: recall_at_1 value: 4.003 - type: recall_at_10 value: 17.877000000000002 - type: recall_at_100 value: 39.217 - type: recall_at_1000 value: 65.862 - type: recall_at_3 value: 9.242 - type: recall_at_5 value: 12.715000000000002 - task: type: STS dataset: type: mteb/sickr-sts name: MTEB SICK-R config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_spearman value: 80.25888668589654 - task: type: STS dataset: type: mteb/sts12-sts name: MTEB STS12 config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_spearman value: 77.02037527837669 - task: type: STS dataset: type: mteb/sts13-sts name: MTEB STS13 config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_spearman value: 86.58432681008449 - task: type: STS dataset: type: mteb/sts14-sts name: MTEB STS14 config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_spearman value: 81.31697756099051 - task: type: STS dataset: type: mteb/sts15-sts name: MTEB STS15 config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_spearman value: 88.18867599667057 - task: type: STS dataset: type: mteb/sts16-sts name: MTEB STS16 config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_spearman value: 84.87853941747623 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (en-en) config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_spearman value: 89.46479925383916 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (en) config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_spearman value: 66.45272113649146 - task: type: STS dataset: type: mteb/stsbenchmark-sts name: MTEB STSBenchmark config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_spearman value: 86.43357313527851 - task: type: Reranking dataset: type: mteb/scidocs-reranking name: MTEB SciDocsRR config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 78.82761687254882 - type: mrr value: 93.46223674655047 - task: type: Retrieval dataset: type: scifact name: MTEB SciFact config: default split: test revision: None metrics: - type: map_at_1 value: 44.583 - type: map_at_10 value: 52.978 - type: map_at_100 value: 53.803 - type: map_at_1000 value: 53.839999999999996 - type: map_at_3 value: 50.03300000000001 - type: map_at_5 value: 51.939 - type: mrr_at_1 value: 47.0 - type: mrr_at_10 value: 54.730000000000004 - type: mrr_at_100 value: 55.31399999999999 - type: mrr_at_1000 value: 55.346 - type: mrr_at_3 value: 52.0 - type: mrr_at_5 value: 53.783 - type: ndcg_at_1 value: 47.0 - type: ndcg_at_10 value: 57.82899999999999 - type: ndcg_at_100 value: 61.49400000000001 - type: ndcg_at_1000 value: 62.676 - type: ndcg_at_3 value: 52.373000000000005 - type: ndcg_at_5 value: 55.481 - type: precision_at_1 value: 47.0 - type: precision_at_10 value: 7.867 - type: precision_at_100 value: 0.997 - type: precision_at_1000 value: 0.11 - type: precision_at_3 value: 20.556 - type: precision_at_5 value: 14.066999999999998 - type: recall_at_1 value: 44.583 - type: recall_at_10 value: 71.172 - type: 
recall_at_100 value: 87.7 - type: recall_at_1000 value: 97.333 - type: recall_at_3 value: 56.511 - type: recall_at_5 value: 64.206 - task: type: PairClassification dataset: type: mteb/sprintduplicatequestions-pairclassification name: MTEB SprintDuplicateQuestions config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.66237623762376 - type: cos_sim_ap value: 90.35465126226322 - type: cos_sim_f1 value: 82.44575936883628 - type: cos_sim_precision value: 81.32295719844358 - type: cos_sim_recall value: 83.6 - type: dot_accuracy value: 99.66237623762376 - type: dot_ap value: 90.35464287920453 - type: dot_f1 value: 82.44575936883628 - type: dot_precision value: 81.32295719844358 - type: dot_recall value: 83.6 - type: euclidean_accuracy value: 99.66237623762376 - type: euclidean_ap value: 90.3546512622632 - type: euclidean_f1 value: 82.44575936883628 - type: euclidean_precision value: 81.32295719844358 - type: euclidean_recall value: 83.6 - type: manhattan_accuracy value: 99.65940594059406 - type: manhattan_ap value: 90.29220174849843 - type: manhattan_f1 value: 82.4987605354487 - type: manhattan_precision value: 81.80924287118977 - type: manhattan_recall value: 83.2 - type: max_accuracy value: 99.66237623762376 - type: max_ap value: 90.35465126226322 - type: max_f1 value: 82.4987605354487 - task: type: Clustering dataset: type: mteb/stackexchange-clustering name: MTEB StackExchangeClustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 65.0394225901397 - task: type: Clustering dataset: type: mteb/stackexchange-clustering-p2p name: MTEB StackExchangeClusteringP2P config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 35.27954189859326 - task: type: Reranking dataset: type: mteb/stackoverflowdupquestions-reranking name: MTEB StackOverflowDupQuestions config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 50.99055979974896 - type: mrr value: 51.82745257193787 - task: type: Summarization dataset: type: mteb/summeval name: MTEB SummEval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 30.21655465344237 - type: cos_sim_spearman value: 29.853205339630172 - type: dot_pearson value: 30.216540628083564 - type: dot_spearman value: 29.868978894753027 - task: type: Retrieval dataset: type: trec-covid name: MTEB TRECCOVID config: default split: test revision: None metrics: - type: map_at_1 value: 0.2 - type: map_at_10 value: 1.398 - type: map_at_100 value: 7.406 - type: map_at_1000 value: 18.401 - type: map_at_3 value: 0.479 - type: map_at_5 value: 0.772 - type: mrr_at_1 value: 70.0 - type: mrr_at_10 value: 79.25999999999999 - type: mrr_at_100 value: 79.25999999999999 - type: mrr_at_1000 value: 79.25999999999999 - type: mrr_at_3 value: 77.333 - type: mrr_at_5 value: 78.133 - type: ndcg_at_1 value: 63.0 - type: ndcg_at_10 value: 58.548 - type: ndcg_at_100 value: 45.216 - type: ndcg_at_1000 value: 41.149 - type: ndcg_at_3 value: 60.641999999999996 - type: ndcg_at_5 value: 61.135 - type: precision_at_1 value: 70.0 - type: precision_at_10 value: 64.0 - type: precision_at_100 value: 46.92 - type: precision_at_1000 value: 18.642 - type: precision_at_3 value: 64.667 - type: precision_at_5 value: 66.4 - type: recall_at_1 value: 0.2 - type: recall_at_10 value: 1.6729999999999998 - type: recall_at_100 value: 
10.856 - type: recall_at_1000 value: 38.964999999999996 - type: recall_at_3 value: 0.504 - type: recall_at_5 value: 0.852 - task: type: Retrieval dataset: type: webis-touche2020 name: MTEB Touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 1.6629999999999998 - type: map_at_10 value: 8.601 - type: map_at_100 value: 14.354 - type: map_at_1000 value: 15.927 - type: map_at_3 value: 4.1930000000000005 - type: map_at_5 value: 5.655 - type: mrr_at_1 value: 18.367 - type: mrr_at_10 value: 34.466 - type: mrr_at_100 value: 35.235 - type: mrr_at_1000 value: 35.27 - type: mrr_at_3 value: 28.571 - type: mrr_at_5 value: 31.531 - type: ndcg_at_1 value: 14.285999999999998 - type: ndcg_at_10 value: 20.374 - type: ndcg_at_100 value: 33.532000000000004 - type: ndcg_at_1000 value: 45.561 - type: ndcg_at_3 value: 18.442 - type: ndcg_at_5 value: 18.076 - type: precision_at_1 value: 18.367 - type: precision_at_10 value: 20.204 - type: precision_at_100 value: 7.489999999999999 - type: precision_at_1000 value: 1.5630000000000002 - type: precision_at_3 value: 21.769 - type: precision_at_5 value: 20.408 - type: recall_at_1 value: 1.6629999999999998 - type: recall_at_10 value: 15.549 - type: recall_at_100 value: 47.497 - type: recall_at_1000 value: 84.524 - type: recall_at_3 value: 5.289 - type: recall_at_5 value: 8.035 - task: type: Classification dataset: type: mteb/toxic_conversations_50k name: MTEB ToxicConversationsClassification config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 71.8194 - type: ap value: 14.447702451658554 - type: f1 value: 55.13659412856185 - task: type: Classification dataset: type: mteb/tweet_sentiment_extraction name: MTEB TweetSentimentExtractionClassification config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 63.310696095076416 - type: f1 value: 63.360434851097814 - task: type: Clustering dataset: type: mteb/twentynewsgroups-clustering name: MTEB TwentyNewsgroupsClustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 51.30677907335145 - task: type: PairClassification dataset: type: mteb/twittersemeval2015-pairclassification name: MTEB TwitterSemEval2015 config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 86.12386004649221 - type: cos_sim_ap value: 73.99096426215495 - type: cos_sim_f1 value: 68.18416968442834 - type: cos_sim_precision value: 66.86960933536275 - type: cos_sim_recall value: 69.55145118733509 - type: dot_accuracy value: 86.12386004649221 - type: dot_ap value: 73.99096813038672 - type: dot_f1 value: 68.18416968442834 - type: dot_precision value: 66.86960933536275 - type: dot_recall value: 69.55145118733509 - type: euclidean_accuracy value: 86.12386004649221 - type: euclidean_ap value: 73.99095984980165 - type: euclidean_f1 value: 68.18416968442834 - type: euclidean_precision value: 66.86960933536275 - type: euclidean_recall value: 69.55145118733509 - type: manhattan_accuracy value: 86.09405734040651 - type: manhattan_ap value: 73.96825745608601 - type: manhattan_f1 value: 68.13888179729383 - type: manhattan_precision value: 65.99901088031652 - type: manhattan_recall value: 70.42216358839049 - type: max_accuracy value: 86.12386004649221 - type: max_ap value: 73.99096813038672 - type: max_f1 value: 68.18416968442834 - task: type: PairClassification dataset: type: 
mteb/twitterurlcorpus-pairclassification name: MTEB TwitterURLCorpus config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 88.99367407924865 - type: cos_sim_ap value: 86.19720829843081 - type: cos_sim_f1 value: 78.39889075384951 - type: cos_sim_precision value: 74.5110278818144 - type: cos_sim_recall value: 82.71481367416075 - type: dot_accuracy value: 88.99367407924865 - type: dot_ap value: 86.19718471454047 - type: dot_f1 value: 78.39889075384951 - type: dot_precision value: 74.5110278818144 - type: dot_recall value: 82.71481367416075 - type: euclidean_accuracy value: 88.99367407924865 - type: euclidean_ap value: 86.1972021422436 - type: euclidean_f1 value: 78.39889075384951 - type: euclidean_precision value: 74.5110278818144 - type: euclidean_recall value: 82.71481367416075 - type: manhattan_accuracy value: 88.95680521597392 - type: manhattan_ap value: 86.16659921351506 - type: manhattan_f1 value: 78.39125971550081 - type: manhattan_precision value: 74.82502799552073 - type: manhattan_recall value: 82.31444410224823 - type: max_accuracy value: 88.99367407924865 - type: max_ap value: 86.19720829843081 - type: max_f1 value: 78.39889075384951 --- # hkunlp/instructor-base We introduce **Instructor**👨‍🏫, an instruction-finetuned text embedding model that can generate text embeddings tailored to any task (e.g., classification, retrieval, clustering, text evaluation, etc.) and domains (e.g., science, finance, etc.) ***by simply providing the task instruction, without any finetuning***. Instructor👨‍🏫 achieves SOTA on 70 diverse embedding tasks! The model is easy to use with **our customized** `sentence-transformer` library. For more details, check out [our paper](https://arxiv.org/abs/2212.09741) and [project page](https://instructor-embedding.github.io/)! **************************** **Updates** **************************** * 01/21: We released a new [checkpoint](https://huggingface.co/hkunlp/instructor-base) trained with hard negatives, which gives better performance. * 12/21: We released our [paper](https://arxiv.org/abs/2212.09741), [code](https://github.com/HKUNLP/instructor-embedding), [checkpoint](https://huggingface.co/hkunlp/instructor-base) and [project page](https://instructor-embedding.github.io/)! Check them out! ## Quick start <hr /> ## Installation ```bash pip install InstructorEmbedding ``` ## Compute your customized embeddings Then you can use the model like this to calculate domain-specific and task-aware embeddings: ```python from InstructorEmbedding import INSTRUCTOR model = INSTRUCTOR('hkunlp/instructor-base') sentence = "3D ActionSLAM: wearable person tracking in multi-floor environments" instruction = "Represent the Science title:" embeddings = model.encode([[instruction,sentence]]) print(embeddings) ``` ## Use cases <hr /> ## Calculate embeddings for your customized texts If you want to calculate customized embeddings for specific sentences, you may follow the unified template to write instructions: Represent the `domain` `text_type` for `task_objective`: * `domain` is optional, and it specifies the domain of the text, e.g., science, finance, medicine, etc. * `text_type` is required, and it specifies the encoding unit, e.g., sentence, document, paragraph, etc. 
* `task_objective` is optional, and it specifies the objective of embedding, e.g., retrieve a document, classify the sentence, etc. ## Calculate Sentence similarities You can further use the model to compute similarities between two groups of sentences, with **customized embeddings**. ```python from sklearn.metrics.pairwise import cosine_similarity sentences_a = [['Represent the Science sentence: ','Parton energy loss in QCD matter'], ['Represent the Financial statement: ','The Federal Reserve on Wednesday raised its benchmark interest rate.']] sentences_b = [['Represent the Science sentence: ','The Chiral Phase Transition in Dissipative Dynamics'], ['Represent the Financial statement: ','The funds rose less than 0.5 per cent on Friday']] embeddings_a = model.encode(sentences_a) embeddings_b = model.encode(sentences_b) similarities = cosine_similarity(embeddings_a,embeddings_b) print(similarities) ``` ## Information Retrieval You can also use **customized embeddings** for information retrieval. ```python import numpy as np from sklearn.metrics.pairwise import cosine_similarity query = [['Represent the Wikipedia question for retrieving supporting documents: ','where is the food stored in a yam plant']] corpus = [['Represent the Wikipedia document for retrieval: ','Capitalism has been dominant in the Western world since the end of feudalism, but most feel[who?] that the term "mixed economies" more precisely describes most contemporary economies, due to their containing both private-owned and state-owned enterprises. In capitalism, prices determine the demand-supply scale. For example, higher demand for certain goods and services lead to higher prices and lower demand for certain goods lead to lower prices.'], ['Represent the Wikipedia document for retrieval: ',"The disparate impact theory is especially controversial under the Fair Housing Act because the Act regulates many activities relating to housing, insurance, and mortgage loans—and some scholars have argued that the theory's use under the Fair Housing Act, combined with extensions of the Community Reinvestment Act, contributed to rise of sub-prime lending and the crash of the U.S. housing market and ensuing global economic recession"], ['Represent the Wikipedia document for retrieval: ','Disparate impact in United States labor law refers to practices in employment, housing, and other areas that adversely affect one group of people of a protected characteristic more than another, even though rules applied by employers or landlords are formally neutral. Although the protected classes vary by statute, most federal civil rights laws protect based on race, color, religion, national origin, and sex as protected traits, and some laws include disability status and other traits as well.']] query_embeddings = model.encode(query) corpus_embeddings = model.encode(corpus) similarities = cosine_similarity(query_embeddings,corpus_embeddings) retrieved_doc_id = np.argmax(similarities) print(retrieved_doc_id) ``` ## Clustering Use **customized embeddings** for clustering texts in groups. 
```python import sklearn.cluster sentences = [['Represent the Medicine sentence for clustering: ','Dynamical Scalar Degree of Freedom in Horava-Lifshitz Gravity'], ['Represent the Medicine sentence for clustering: ','Comparison of Atmospheric Neutrino Flux Calculations at Low Energies'], ['Represent the Medicine sentence for clustering: ','Fermion Bags in the Massive Gross-Neveu Model'], ['Represent the Medicine sentence for clustering: ',"QCD corrections to Associated t-tbar-H production at the Tevatron"], ['Represent the Medicine sentence for clustering: ','A New Analysis of the R Measurements: Resonance Parameters of the Higher, Vector States of Charmonium']] embeddings = model.encode(sentences) clustering_model = sklearn.cluster.MiniBatchKMeans(n_clusters=2) clustering_model.fit(embeddings) cluster_assignment = clustering_model.labels_ print(cluster_assignment) ```
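To make the unified instruction template above concrete, here is a short illustrative sketch; the domain and objective values below are example strings composed per the template, not a fixed vocabulary:

```python
# Illustrative only: instruction strings composed from the template
# "Represent the `domain` `text_type` for `task_objective`:".
# The specific domains/objectives here are example values, not a fixed list.
from InstructorEmbedding import INSTRUCTOR

model = INSTRUCTOR('hkunlp/instructor-base')
instructions = [
    "Represent the Science document for retrieval:",         # domain + text_type + task_objective
    "Represent the Financial sentence for classification:",  # domain + text_type + task_objective
    "Represent the sentence:",                               # text_type only (optional slots omitted)
]
texts = ["Parton energy loss in QCD matter"] * len(instructions)
embeddings = model.encode([[ins, txt] for ins, txt in zip(instructions, texts)])
print(embeddings.shape)
```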
Consensus/contriever-msmarco
Consensus
2023-05-10T17:58:56Z
2
1
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-05-10T17:56:56Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
Shagon/bert_finetuned_ner
Shagon
2023-05-10T17:43:31Z
58
0
transformers
[ "transformers", "tf", "bert", "token-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-05-10T17:30:35Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Shagon/bert_finetuned_ner results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Shagon/bert_finetuned_ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0262 - Validation Loss: 0.0577 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2634, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.1699 | 0.0658 | 0 | | 0.0461 | 0.0527 | 1 | | 0.0262 | 0.0577 | 2 | ### Framework versions - Transformers 4.29.0 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
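For readers who want to reproduce the optimizer described above, here is a minimal sketch (not the original training script) using the `create_optimizer` helper from `transformers`, with values mirroring the listed hyperparameters:

```python
# Sketch only: rebuilds the AdamWeightDecay + PolynomialDecay schedule listed above
# via the transformers TF helper. Not the author's original training code.
from transformers import create_optimizer

optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,            # initial_learning_rate of the PolynomialDecay
    num_train_steps=2634,    # decay_steps; decays linearly (power=1.0) to 0
    num_warmup_steps=0,
    weight_decay_rate=0.01,  # weight_decay_rate of AdamWeightDecay
)
```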
h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-7b-preview-300bt
h2oai
2023-05-10T17:43:17Z
1,518
12
transformers
[ "transformers", "pytorch", "llama", "text-generation", "gpt", "llm", "large language model", "h2o-llmstudio", "en", "dataset:OpenAssistant/oasst1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-05-04T16:28:37Z
--- language: - en library_name: transformers tags: - gpt - llm - large language model - h2o-llmstudio inference: false thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico license: apache-2.0 datasets: - OpenAssistant/oasst1 --- # Model Card ## Summary This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio). - Base model: [openlm-research/open_llama_7b_preview_300bt](https://huggingface.co/openlm-research/open_llama_7b_preview_300bt) - Dataset preparation: [OpenAssistant/oasst1](https://github.com/h2oai/h2o-llmstudio/blob/1935d84d9caafed3ee686ad2733eb02d2abfce57/app_utils/utils.py#LL1896C5-L1896C28) ## Usage To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` and `torch` libraries installed. ```bash pip install transformers==4.28.1 pip install torch==2.0.0 ``` ```python import torch from transformers import pipeline generate_text = pipeline( model="h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-7b-preview-300bt", torch_dtype=torch.float16, trust_remote_code=True, use_fast=False, device_map={"": "cuda:0"}, ) res = generate_text( "Why is drinking water so healthy?", min_new_tokens=2, max_new_tokens=256, do_sample=False, num_beams=2, temperature=float(0.3), repetition_penalty=float(1.2), ) print(res[0]["generated_text"]) ``` You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer: ```python print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"]) ``` ```bash <|prompt|>Why is drinking water so healthy?</s><|answer|> ``` Alternatively, if you prefer not to use `trust_remote_code=True` you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer: ```python import torch from h2oai_pipeline import H2OTextGenerationPipeline from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained( "h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-7b-preview-300bt", use_fast=False, padding_side="left" ) model = AutoModelForCausalLM.from_pretrained( "h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-7b-preview-300bt", torch_dtype=torch.float16, device_map={"": "cuda:0"} ) generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer) res = generate_text( "Why is drinking water so healthy?", min_new_tokens=2, max_new_tokens=256, do_sample=False, num_beams=2, temperature=float(0.3), repetition_penalty=float(1.2), ) print(res[0]["generated_text"]) ``` You may also construct the pipeline from the loaded model and tokenizer yourself, taking the preprocessing steps into account: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-7b-preview-300bt" # either local folder or huggingface model name # Important: The prompt needs to be in the same format the model was trained with. # You can find an example prompt in the experiment logs. 
prompt = "<|prompt|>How are you?</s><|answer|>" tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False) model = AutoModelForCausalLM.from_pretrained(model_name) model.cuda().eval() inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda") # generate configuration can be modified to your needs tokens = model.generate( **inputs, min_new_tokens=2, max_new_tokens=256, do_sample=False, num_beams=2, temperature=float(0.3), repetition_penalty=float(1.2), )[0] tokens = tokens[inputs["input_ids"].shape[1]:] answer = tokenizer.decode(tokens, skip_special_tokens=True) print(answer) ``` ## Model Architecture ``` LlamaForCausalLM( (model): LlamaModel( (embed_tokens): Embedding(32000, 4096, padding_idx=0) (layers): ModuleList( (0-31): 32 x LlamaDecoderLayer( (self_attn): LlamaAttention( (q_proj): Linear(in_features=4096, out_features=4096, bias=False) (k_proj): Linear(in_features=4096, out_features=4096, bias=False) (v_proj): Linear(in_features=4096, out_features=4096, bias=False) (o_proj): Linear(in_features=4096, out_features=4096, bias=False) (rotary_emb): LlamaRotaryEmbedding() ) (mlp): LlamaMLP( (gate_proj): Linear(in_features=4096, out_features=11008, bias=False) (down_proj): Linear(in_features=11008, out_features=4096, bias=False) (up_proj): Linear(in_features=4096, out_features=11008, bias=False) (act_fn): SiLUActivation() ) (input_layernorm): LlamaRMSNorm() (post_attention_layernorm): LlamaRMSNorm() ) ) (norm): LlamaRMSNorm() ) (lm_head): Linear(in_features=4096, out_features=32000, bias=False) ) ``` ## Model Configuration This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models. ## Disclaimer Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions. - Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints. - Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion. - Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model. - Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities. - Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. 
Your feedback will help improve the model and mitigate potential issues. - Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes. By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
Actuary/ppo-DL-PyramidsRND
Actuary
2023-05-10T17:27:22Z
4
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2023-05-10T17:27:17Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Find your model_id: Actuary/ppo-DL-PyramidsRND 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
nomad-tanru/jupiterSkies
nomad-tanru
2023-05-10T17:26:50Z
0
0
null
[ "region:us" ]
null
2023-05-10T17:15:33Z
A somewhat usable model for generating Jupiter-like atmospheric images.
darkstorm2150/Protogen_x3.4_Official_Release
darkstorm2150
2023-05-10T17:26:00Z
591
350
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "art", "artistic", "protogen", "en", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-01-02T00:29:52Z
--- language: - en tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - art - artistic - diffusers - protogen inference: true license: creativeml-openrail-m --- <center><img src="https://huggingface.co/darkstorm2150/Protogen_x3.4_Official_Release/resolve/main/Protogen_x3.4-512.png" style="height:690px; border-radius: 8%; border: 10px solid #663380; padding-top:0px;" span title="Protogen x3.4 Raw Output"></center> <center><h1>Protogen x3.4 (Photorealism) Official Release</h1></center> <center><p><em>Research Model by <a href="https://instagram.com/officialvictorespinoza">darkstorm2150</a></em></p></center> ## Table of contents * [General info](#general-info) * [Granular Adaptive Learning](#granular-adaptive-learning) * [Trigger Words](#trigger-words) * [Setup](#setup) * [Space](#space) * [CompVis](#compvis) * [Diffusers](#🧨-diffusers) * [Checkpoint Merging Data Reference](#checkpoint-merging-data-reference) * [License](#license) ## General info Protogen x3.4 was warm-started with [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) and fine-tuned on various high-quality image datasets. Version 3.4 continued training from [ProtoGen v2.2](https://huggingface.co/darkstorm2150/Protogen_v2.2_Official_Release) with added photorealism. ## Granular Adaptive Learning Granular adaptive learning is a machine learning technique that focuses on adjusting the learning process at a fine-grained level, rather than making global adjustments to the model. This approach allows the model to adapt to specific patterns or features in the data, rather than making assumptions based on general trends. Granular adaptive learning can be achieved through techniques such as active learning, which allows the model to select the data it wants to learn from, or through the use of reinforcement learning, where the model receives feedback on its performance and adapts based on that feedback. It can also be achieved through techniques such as online learning, where the model adjusts itself as it receives more data. Granular adaptive learning is often used in situations where the data is highly diverse or non-stationary and where the model needs to adapt quickly to changing patterns. This is often the case in dynamic environments such as robotics, financial markets, and natural language processing. 
## Trigger Words modelshoot style, analog style, mdjrny-v4 style, nousr robot Trigger words are also available for hassan1.4 and f222; you might have to google them :) ## Setup To run this model, download the model.ckpt or model.safetensors file and install it in your "stable-diffusion-webui\models\Stable-diffusion" directory ## Space We support a [Gradio](https://github.com/gradio-app/gradio) Web UI: [![Open In Spaces](https://camo.githubusercontent.com/00380c35e60d6b04be65d3d94a58332be5cc93779f630bcdfc18ab9a3a7d3388/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f25463025394625413425393725323048756767696e67253230466163652d5370616365732d626c7565)](https://huggingface.co/spaces/darkstorm2150/Stable-Diffusion-Protogen-webui) ### CompVis ## CKPT [Download ProtoGen x3.4.ckpt (5.98GB)](https://huggingface.co/darkstorm2150/Protogen_x3.4_Official_Release/resolve/main/ProtoGen_X3.4.ckpt) [Download ProtoGen X3.4-pruned-fp16.ckpt (1.89 GB)](https://huggingface.co/darkstorm2150/Protogen_x3.4_Official_Release/resolve/main/ProtoGen_X3.4-pruned-fp16.ckpt) ## Safetensors [Download ProtoGen x3.4.safetensors (5.98GB)](https://huggingface.co/darkstorm2150/Protogen_x3.4_Official_Release/resolve/main/ProtoGen_X3.4.safetensors) [Download ProtoGen x3.4-pruned-fp16.safetensors (1.89GB)](https://huggingface.co/darkstorm2150/Protogen_x3.4_Official_Release/resolve/main/ProtoGen_X3.4-pruned-fp16.safetensors) ### 🧨 Diffusers This model can be used just like any other Stable Diffusion model. For more information, please have a look at the [Stable Diffusion Pipeline](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion). ```python from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler import torch prompt = ( "modelshoot style, (extremely detailed CG unity 8k wallpaper), full shot body photo of the most beautiful artwork in the world, " "english medieval witch, black silk vale, pale skin, black silk robe, black cat, necromancy magic, medieval era, " "photorealistic painting by Ed Blinkey, Atey Ghailan, Studio Ghibli, by Jeremy Mann, Greg Manchess, Antonio Moro, trending on ArtStation, " "trending on CGSociety, Intricate, High Detail, Sharp focus, dramatic, photorealistic painting art by midjourney and greg rutkowski" ) model_id = "darkstorm2150/Protogen_x3.4_Official_Release" pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) pipe = pipe.to("cuda") image = pipe(prompt, num_inference_steps=25).images[0] image.save("./result.jpg") ``` ![img](https://huggingface.co/datasets/patrickvonplaten/images/resolve/main/protogen/rswf5qk9be9a1.jpg) ## PENDING DATA FOR MERGE, RPGv2 not accounted for 
## Checkpoint Merging Data Reference <style> .myTable { border-collapse:collapse; } .myTable th { background-color:#663380; color:white; } .myTable td, .myTable th { padding:5px; border:1px solid #663380; } </style> <table class="myTable"> <tr> <th>Models</th> <th>Protogen v2.2 (Anime)</th> <th>Protogen x3.4 (Photo)</th> <th>Protogen x5.3 (Photo)</th> <th>Protogen x5.8 (Sci-fi/Anime)</th> <th>Protogen x5.9 (Dragon)</th> <th>Protogen x7.4 (Eclipse)</th> <th>Protogen x8.0 (Nova)</th> <th>Protogen x8.6 (Infinity)</th> </tr> <tr> <td>seek_art_mega v1</td> <td>52.50%</td> <td>42.76%</td> <td>42.63%</td> <td></td> <td></td> <td></td> <td>25.21%</td> <td>14.83%</td> </tr> <tr> <td>modelshoot v1</td> <td>30.00%</td> <td>24.44%</td> <td>24.37%</td> <td>2.56%</td> <td>2.05%</td> <td>3.48%</td> <td>22.91%</td> <td>13.48%</td> </tr> <tr> <td>elldreth v1</td> <td>12.64%</td> <td>10.30%</td> <td>10.23%</td> <td></td> <td></td> <td></td> <td>6.06%</td> <td>3.57%</td> </tr> <tr> <td>photoreal v2</td> <td></td> <td></td> <td>10.00%</td> <td>48.64%</td> <td>38.91%</td> <td>66.33%</td> <td>20.49%</td> <td>12.06%</td> </tr> <tr> <td>analogdiffusion v1</td> <td></td> <td>4.75%</td> <td>4.50%</td> <td></td> <td></td> <td></td> <td>1.75%</td> <td>1.03%</td> </tr> <tr> <td>openjourney v2</td> <td></td> <td>4.51%</td> <td>4.28%</td> <td></td> <td></td> <td>4.75%</td> <td>2.26%</td> <td>1.33%</td> </tr> <tr> <td>hassan1.4</td> <td>2.63%</td> <td>2.14%</td> <td>2.13%</td> <td></td> <td></td> <td></td> <td>1.26%</td> <td>0.74%</td> </tr> <tr> <td>f222</td> <td>2.23%</td> <td>1.82%</td> <td>1.81%</td> <td></td> <td></td> <td></td> <td>1.07%</td> <td>0.63%</td> </tr> <tr> <td>hasdx</td> <td></td> <td></td> <td></td> <td>20.00%</td> <td>16.00%</td> <td>4.07%</td> <td>5.01%</td> <td>2.95%</td> </tr> <tr> <td>moistmix</td> <td></td> <td></td> <td></td> <td>16.00%</td> <td>12.80%</td> <td>3.86%</td> <td>4.08%</td> <td>2.40%</td> </tr> <tr> <td>roboDiffusion v1</td> <td></td> <td>4.29%</td> <td></td> <td>12.80%</td> <td>10.24%</td> <td>3.67%</td> <td>4.41%</td> <td>2.60%</td> </tr> <tr> <td>RPG v3</td> <td></td> <td>5.00%</td> <td></td> <td></td> <td>20.00%</td> <td>4.29%</td> <td>4.29%</td> <td>2.52%</td> </tr> <tr> <td>anything&everything</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td>4.51%</td> <td>0.56%</td> <td>0.33%</td> </tr> <tr> <td>dreamlikediff v1</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td>5.0%</td> <td>0.63%</td> <td>0.37%</td> </tr> <tr> <td>sci-fidiff v1</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td>3.10%</td> </tr> <tr> <td>synthwavepunk v2</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td>3.26%</td> </tr> <tr> <td>mashupv2</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td>11.51%</td> </tr> <tr> <td>dreamshaper 252</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td>4.04%</td> </tr> <tr> <td>comicdiff v2</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td>4.25%</td> </tr> <tr> <td>artEros</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td>15.00%</td> </tr> </table> ## License By downloading you agree to the terms of these licenses <a href="https://huggingface.co/spaces/CompVis/stable-diffusion-license">CreativeML Open RAIL-M</a> <a href="https://huggingface.co/coreco/seek.art_MEGA/blob/main/LICENSE.txt">Seek Art Mega License</a>
edmagall/Rock-paper-scissors
edmagall
2023-05-10T17:07:36Z
0
0
fastai
[ "fastai", "region:us" ]
null
2023-05-10T17:07:27Z
--- tags: - fastai --- # Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
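Until the card is filled in, here is a minimal hedged sketch for loading the model, assuming the repo was pushed with fastai's Hub integration (e.g. `push_to_hub_fastai`); this is an assumption, not something documented by the repo:

```python
# Sketch only: loads a fastai Learner hosted on the Hub via huggingface_hub.
# Assumes the repo contains a fastai export created with push_to_hub_fastai.
from huggingface_hub import from_pretrained_fastai

learn = from_pretrained_fastai("edmagall/Rock-paper-scissors")
print(learn.dls.vocab)  # expected classes, presumably rock / paper / scissors
```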
Lauler/UL2-nemo-conversion
Lauler
2023-05-10T16:56:11Z
0
2
null
[ "region:us" ]
null
2023-05-08T10:18:06Z
## Checkpoints and conversion scripts for Nemo ckpt files to Huggingface This repo contains two checkpoints (`.ckpt` files) for UL2 models we have started pretraining with Nemo. The checkpoints are found in `nemo_checkpoints/`. The Nemo config files used to train these models can be found in `nemo_config/ul2-base-nl36`. `megatron_ul2--val_loss=2.54-step=7000-consumed_samples=14557920.0.ckpt` was trained with `megatron_legacy: False` in the config, whereas the other checkpoint was trained with `megatron_legacy: True`. Nvidia have created a conversion script that converts T5, T5v1.1 and UL2 models on Huggingface Hub to Nemo format. The script can be found [here](https://github.com/NVIDIA/NeMo/blob/main/scripts/nlp_language_modeling/hf_t5-v1_1_to_nemo.py). It is also included in this repo. We thought that adapting a T5/UL2 model trained with Nemo to a Huggingface format would simply be a matter of reversing the conversion performed by the script above. Our conversion script does work assuming we operate directly on the `pt` state dict weight files produced by running the above Nvidia script. I.e. it works when going directly `Huggingface -> Nemo -> Huggingface`. However, it does not work when attempting to go `Nemo -> Huggingface`. A UL2 model that was initialized with Nemo Megatron, and pretrained with Nemo, does not produce the same output when converted to Huggingface format. ### Dependencies We use Nemo docker containers (tag `23.02`) via Singularity when running the code in this repo. We have included a definition file to build the container. To build the container: ```bash sudo singularity build nemo2302.sif nemo_singularity.def ``` We provide bash scripts to execute with singularity. However, for easier debugging you can also run singularity in interactive mode via: ```bash singularity shell --nv nemo2302.sif ``` ### Converting Nemo checkpoints to Huggingface We have included our conversion script in this repo. It can be found in `convert_nemo_ul2_checkpoint.py`. We manually created a Huggingface config file for UL2 that to the best of our knowledge matches the settings used when we trained with Nemo (see `config_ul2_base_nl36.json`). To replicate our weights conversion, simply run: ```bash singularity exec --nv nemo2302.sif bash convert_nemo_to_hf.sh ``` The resulting Huggingface model will be saved to `ul2-base-nl36-swedish/`. We are aware that [Megatron-LM uses different ordering of QKV](https://github.com/NVIDIA/Megatron-LM/blob/42c1cf4279acea5a554500dcb552211f44cbec45/megatron/checkpointing.py#L209-L237) in the attention layers depending on the version of Megatron-LM used. We are also aware of an existing conversion script that Huggingface have created for converting Megatron-BERT to Huggingface, where they adapt the ordering of QKV in Megatron to [match the ordering used in Huggingface](https://github.com/NVIDIA/Megatron-LM/blob/42c1cf4279acea5a554500dcb552211f44cbec45/megatron/checkpointing.py#L209-L237). As such we have an optional `--fix_qkv` parameter in our conversion script that applies the same reordering of QKV as Huggingface does (a sketch of this reordering is included at the end of this card). See the lines that are commented out in `convert_nemo_to_hf.sh` for an example of how to use this parameter and set the `checkpoint_version`. Unfortunately, none of the above solves the issue we have with the conversion script. We have a test script that predicts with both the original Nemo model and the converted Huggingface model. The output unfortunately isn't the same, even though we used the identical tokenizer for both models. 
To run: ```bash singularity exec --nv nemo2302.sif python test_ul2_hf.py ``` Or explore in interactive mode with `singularity shell --nv nemo2302.sif`. ### Confirming the conversion script can reverse Nvidia's conversion script To confirm that our conversion script can at least reverse Nvidia's conversion script, we include instructions to convert a UL2 model from Huggingface to Nemo, via Nvidia's conversion script, and then back to Huggingface via our conversion script. Instructions: 1. Run `singularity exec --nv nemo2302.sif bash convert_hf_to_nemo.sh` to convert the existing [Finnish-NLP/ul2-base-nl36-finnish](https://huggingface.co/Finnish-NLP/ul2-base-nl36-finnish) from Huggingface to Nemo format via Nvidia's conversion script. The resulting model weights will be saved to the folder `ul2-base-nl36-finnish/`. 2. To perform the reverse conversion, and to check whether the re-converted weights are identical, run `python convert_finnish_ul2_model.py`. Or via singularity: `singularity exec --nv nemo2302.sif python convert_finnish_ul2_model.py`. The resulting model, re-converted to Huggingface, will be found in `ul2-base-nl36-finnish/hf_t5_ul2`. This round trip produces a model that is identical to the original.
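As referenced above, the `--fix_qkv` option applies a QKV reordering of the same kind as the helper used in Huggingface's Megatron conversion scripts. The sketch below mirrors that helper; it is illustrative and not a verbatim copy of our `convert_nemo_ul2_checkpoint.py`:

```python
import torch

def fix_query_key_value_ordering(param: torch.Tensor, checkpoint_version: float,
                                 num_splits: int, num_heads: int, head_size: int) -> torch.Tensor:
    # Permute a fused QKV weight/bias into the [num_splits * num_heads * head_size, :]
    # layout expected by Huggingface; the saved layout depends on the
    # Megatron-LM checkpoint_version that produced the tensor.
    input_shape = param.size()
    if checkpoint_version == 1.0:
        # version 1.0 stores [num_heads * head_size * num_splits, :]
        saved_shape = (num_heads, head_size, num_splits) + input_shape[1:]
        param = param.view(*saved_shape)
        param = param.transpose(0, 2)
        param = param.transpose(1, 2).contiguous()
    elif checkpoint_version >= 2.0:
        # versions >= 2.0 store [num_heads * num_splits * head_size, :]
        saved_shape = (num_heads, num_splits, head_size) + input_shape[1:]
        param = param.view(*saved_shape)
        param = param.transpose(0, 1).contiguous()
    return param.view(*input_shape)
```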
Cynthiaiii4/Text_classification_model_bbu_v4
Cynthiaiii4
2023-05-10T16:54:50Z
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-10T15:30:02Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: Text_classification_model_bbu_v4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Text_classification_model_bbu_v4 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5753 - Accuracy: 0.7775 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.334 | 1.0 | 882 | 0.4661 | 0.775 | | 0.1585 | 2.0 | 1764 | 0.5753 | 0.7775 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
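A minimal inference sketch (the example input is hypothetical; the returned labels depend on the fine-tuned classification head):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Cynthiaiii4/Text_classification_model_bbu_v4")
# hypothetical input text
print(classifier("Replace this with the text you want to classify."))
```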
Actuary/ppo-SnowballTarget
Actuary
2023-05-10T16:46:39Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2023-05-10T16:46:34Z
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---

# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub.

### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play
You can watch your agent **playing directly in your browser**:

1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Find your model_id: Actuary/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
godmethium/ppo-LunarLander-v2
godmethium
2023-05-10T16:45:00Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-05-10T16:44:37Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 231.64 +/- 14.05
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)
A minimal loading sketch; the checkpoint filename below is an assumption, so adjust it to match the file stored in this repo:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# "ppo-LunarLander-v2.zip" is the conventional filename and may differ in this repo
checkpoint = load_from_hub("godmethium/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
Yang-hugging-face-2023/sd-class-butterflies-96
Yang-hugging-face-2023
2023-05-10T16:44:56Z
30
0
diffusers
[ "diffusers", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
2023-05-10T16:44:15Z
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('Yang-hugging-face-2023/sd-class-butterflies-96') image = pipeline().images[0] image ```
LarryAIDraw/Minami_Kotobuki-V1
LarryAIDraw
2023-05-10T16:39:15Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-10T13:05:23Z
--- license: creativeml-openrail-m --- https://civitai.com/models/60998/minami-kotobuki-or-my-favorite-idol
joelewis2000/RF-Cartpole
joelewis2000
2023-05-10T16:28:48Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-05-10T16:28:37Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: RF-Cartpole results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 489.60 +/- 31.20 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
ikerml/pajaros
ikerml
2023-05-10T16:28:32Z
0
0
fastai
[ "fastai", "region:us" ]
null
2023-05-10T16:28:23Z
--- tags: - fastai --- # Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
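As a sketch of how this model could be loaded back from the Hub (assuming the repo contains a standard fastai export):

```python
from huggingface_hub import from_pretrained_fastai

# load the learner straight from the Hub
learn = from_pretrained_fastai("ikerml/pajaros")
# preds = learn.predict("path/to/image.jpg")  # hypothetical input path
```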
sai1881/bloom-560m-finetuned-Instruct-DB-v
sai1881
2023-05-10T15:49:05Z
12
0
transformers
[ "transformers", "pytorch", "tensorboard", "bloom", "text-generation", "generated_from_trainer", "license:bigscience-bloom-rail-1.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-05-10T03:33:13Z
--- license: bigscience-bloom-rail-1.0 tags: - generated_from_trainer model-index: - name: bloom-560m-finetuned-Instruct-DB-v results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bloom-560m-finetuned-Instruct-DB-v This model is a fine-tuned version of [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.13.3
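A minimal generation sketch (the instruction-style prompt is a guess, since the fine-tuning format is not documented above):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="sai1881/bloom-560m-finetuned-Instruct-DB-v")
# hypothetical prompt; adapt it to the actual instruction template used for fine-tuning
print(generator("Instruction: Explain what a language model is.\nResponse:", max_new_tokens=64)[0]["generated_text"])
```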
mrm8488/mbart-large-finetuned-bible-es-en-translation
mrm8488
2023-05-10T15:44:58Z
10
1
transformers
[ "transformers", "pytorch", "safetensors", "mbart", "text2text-generation", "translation", "es", "en", "dataset:bible_para", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:05Z
---
tags:
- translation
language:
- es
- en
datasets:
- bible_para
---

### mbart-large-es-en

This is mbart-large-cc25, fine-tuned on bible_para for Spanish-to-English translation. It scores a BLEU of **29.34**.
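A minimal usage sketch (the example sentence and generation settings are our own; language codes for mbart may need adjusting):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("mrm8488/mbart-large-finetuned-bible-es-en-translation")
model = AutoModelForSeq2SeqLM.from_pretrained("mrm8488/mbart-large-finetuned-bible-es-en-translation")

# hypothetical Spanish input
inputs = tokenizer("En el principio creó Dios los cielos y la tierra.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```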
lentan/replit
lentan
2023-05-10T15:34:14Z
22
2
transformers
[ "transformers", "pytorch", "tensorboard", "replit_lm", "text-generation", "code", "custom_code", "dataset:bigcode/the-stack-dedup", "arxiv:2211.15533", "arxiv:2205.14135", "arxiv:2108.12409", "arxiv:2302.06675", "license:cc-by-sa-4.0", "model-index", "autotrain_compatible", "region:us" ]
text-generation
2023-05-06T09:54:07Z
--- license: cc-by-sa-4.0 datasets: - bigcode/the-stack-dedup tags: - code language: - code programming_language: - Markdown - Java - JavaScript - Python - TypeScript - PHP - SQL - JSX - reStructuredText - Rust - C - CSS - Go - C++ - HTML - Vue - Ruby - Jupyter Notebook - R - Shell model-index: - name: replit-code-v1-3b results: - task: name: Code Generation type: code-generation dataset: name: "HumanEval" type: openai_humaneval metrics: - name: pass@1 type: pass@1 value: 0.219 verified: false --- # replit-code-v1-3b Developed by: Replit, Inc. [**🧑‍💻 Test it on our Demo Space! 🧑‍💻**](https://huggingface.co/spaces/replit/replit-code-v1-3b-demo) ## Model Description `replit-code-v1-3b` is a 2.7B Causal Language Model focused on **Code Completion**. The model has been trained on a subset of the [Stack Dedup v1.2 dataset](https://arxiv.org/abs/2211.15533). The training mixture includes **20 different languages**, listed here in descending order of number of tokens: <br/> `Markdown`, `Java`, `JavaScript`, `Python`, `TypeScript`, `PHP`, `SQL`, `JSX`, `reStructuredText`, `Rust`, `C`, `CSS`, `Go`, `C++`, `HTML`, `Vue`, `Ruby`, `Jupyter Notebook`, `R`, `Shell` <br/> In total, the training dataset contains 175B tokens, which were repeated over 3 epochs -- in total, `replit-code-v1-3b` has been trained on **525B** tokens (~195 tokens per parameter). The model has been trained on the [MosaicML](https://www.mosaicml.com/) platform with 256 x A100-40GB GPUs, leveraging their latest [LLM examples repo](https://github.com/mosaicml/examples/tree/release/v0.0.4/examples/llm). <br/> `replit-code-v1-3b` is powered by state-of-the-art LLM techniques, such as: [Flash Attention](https://arxiv.org/abs/2205.14135) for fast training and inference, [AliBi positional embeddings](https://arxiv.org/abs/2108.12409) to support variable context length at inference time, [LionW optimizer](https://arxiv.org/abs/2302.06675), etc. ## Intended Use Replit intends this model be used by anyone as a foundational model for application-specific fine-tuning without strict limitations on commercial use. ## Limitations The pre-training dataset may have contained offensive or inappropriate content even after applying data cleansing filters, and such content may be reflected in model generated text. We recommend that users exercise reasonable caution when using in production systems. Do not use for any applications that may cause harm or distress to individuals or groups. ## License The model checkpoint and vocabulary file are licensed under the Creative Commons license (CC BY-SA-4.0). Under the license, you must give credit to Replit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests that Replit endorses you or your use. ## Contact For questions and comments about the model, please post in the community section. 
## How to Use
First of all, you need to install the latest versions of the following dependencies:

```
einops
sentencepiece
torch
transformers
```

You can then load the model as follows:

```python
from transformers import AutoModelForCausalLM

# load model
model = AutoModelForCausalLM.from_pretrained('replit/replit-code-v1-3b', trust_remote_code=True)
```

To use the optimized Triton implementation of FlashAttention on GPUs with BF16 precision, first install the following dependencies:

```
flash-attn==0.2.8
triton==2.0.0.dev20221202
```

Then, move the model to `bfloat16` and use it as follows:

```python
import torch
from transformers import AutoModelForCausalLM

# load model
model = AutoModelForCausalLM.from_pretrained('replit/replit-code-v1-3b', trust_remote_code=True, attn_impl='triton')
model.to(device='cuda:0', dtype=torch.bfloat16)

# forward pass
x = torch.tensor([[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]])
x = x.to(device='cuda:0')
y = model(x)
```

Note that `trust_remote_code=True` is passed to the `from_pretrained` method because ReplitLM is not a class in the [Transformers](https://huggingface.co/docs/transformers/index) library.

### Tokenizer

We have trained a custom SentencePiece Unigram tokenizer optimized for code, with a vocabulary of 32768 tokens.

Note that using this requires the `sentencepiece` library to be installed.

The tokenizer can be used as follows:

```python
from transformers import AutoTokenizer

# load tokenizer
tokenizer = AutoTokenizer.from_pretrained('replit/replit-code-v1-3b', trust_remote_code=True)

# single input encoding + generation (assumes `model` was loaded as shown above)
x = tokenizer.encode('def hello():\n print("hello world")\n', return_tensors='pt')
y = model.generate(x)

# decoding, clean_up_tokenization_spaces=False to ensure syntactical correctness
generated_code = tokenizer.decode(y[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(generated_code)
```

Note that:
- `trust_remote_code=True` is passed to the `from_pretrained` method because ReplitLM is not a class in the [Transformers](https://huggingface.co/docs/transformers/index) library.
- `clean_up_tokenization_spaces=False` is meant to avoid removing spaces in the output, because that would affect the syntactical correctness of the generated code.

### Generation

You can generate code using the `transformers` library as follows:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('replit/replit-code-v1-3b', trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained('replit/replit-code-v1-3b', trust_remote_code=True)

x = tokenizer.encode('def fibonacci(n): ', return_tensors='pt')
y = model.generate(x, max_length=100, do_sample=True, top_p=0.95, top_k=4, temperature=0.2, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id)

# decoding, clean_up_tokenization_spaces=False to ensure syntactical correctness
generated_code = tokenizer.decode(y[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(generated_code)
```

Experiment with different decoding methods and parameters to get the best results for your use case.

### Post Processing

Note that as with all code generation models, post-processing of the generated code is important.
In particular, the following post-processing steps are recommended:
- stop generation when the EOS token is encountered
- remove trailing whitespaces
- set `max_tokens` to a reasonable value based on your completion use case
- truncate generation at stop words such as `return`, `def`, "```", "`\n\n\n`" to avoid generating incomplete code when `max_tokens` is larger than the length of the expected generated code (a minimal sketch of these steps is included at the end of this card)

## Model Hash
5bc28ce32c6f9aec935ead7b60ea1c46
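As an illustration of the post-processing recommendations above, here is a minimal sketch; the stop-word list and helper function are our own and not part of the model:

```python
STOP_WORDS = ["\nreturn", "\ndef", "```", "\n\n\n"]

def postprocess_completion(generated_code: str) -> str:
    """Truncate a raw completion at the first stop word and strip trailing whitespace."""
    cut = len(generated_code)
    for stop_word in STOP_WORDS:
        idx = generated_code.find(stop_word)
        if idx != -1:
            cut = min(cut, idx)
    return generated_code[:cut].rstrip()
```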
fathan/indojave-codemixed-indobert-base
fathan
2023-05-10T15:24:54Z
78
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "id", "jv", "en", "arxiv:2011.00677", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-05-02T03:47:22Z
---
tags:
- generated_from_trainer
model-index:
- name: code-mixed-ijebertweet
  results: []
language:
- id
- jv
- en
pipeline_tag: fill-mask
widget:
- text: biasane nek arep [MASK] file bs pake software ini
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Indojave: IndoBERT-base

## About
This is a pre-trained masked language model for code-mixed Indonesian-Javanese-English tweets data. This model is trained based on the [IndoBERT](https://arxiv.org/pdf/2011.00677.pdf) model, utilizing Hugging Face's [Transformers](https://huggingface.co/transformers) library.

## Pre-training Data
The Twitter data is collected from January 2022 until January 2023. The tweets are collected using 8698 random keyword phrases. To make sure the retrieved data are code-mixed, we use keyword phrases that contain code-mixed Indonesian, Javanese, or English words. The following are a few examples of the keyword phrases:
- travelling terus
- proud koncoku
- great kalian semua
- chattingane ilang
- baru aja launching

We acquire 40,788,384 raw tweets. We apply first-stage pre-processing tasks such as:
- remove duplicate tweets,
- remove tweets with token length less than 5,
- remove multiple spaces,
- convert emoticons,
- convert all tweets to lower case.

After the first-stage pre-processing, we obtain 17,385,773 tweets. In the second stage, we apply the following pre-processing tasks:
- split the tweets into sentences,
- remove sentences with token length less than 4,
- convert ‘@username’ to ‘@USER’,
- convert URLs to HTTPURL.

Finally, we have 28,121,693 sentences for the training process. This pre-training data will not be released to the public due to Twitter policy.

## Model
| Model name | Base model | Size of training data | Size of validation data |
|----------------------------------------|-----------------|----------------------------|-------------------------|
| `indojave-codemixed-indobert-base` | IndoBERT | 2.24 GB of text | 249 MB of text |

## Evaluation Results
We train for 3 epochs (296K steps in total), which took 4 days. The following are the results obtained from the training:

| train loss | eval loss | eval perplexity |
|------------|------------|-----------------|
| 2.2431 | 1.9968 | 7.3657 |

## How to use
### Load model and tokenizer
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("fathan/indojave-codemixed-indobert-base")
model = AutoModel.from_pretrained("fathan/indojave-codemixed-indobert-base")
```
### Masked language model
```python
from transformers import pipeline

pretrained_model = "fathan/indojave-codemixed-indobert-base"

fill_mask = pipeline(
    "fill-mask",
    model=pretrained_model,
    tokenizer=pretrained_model
)

# example input taken from the widget above
print(fill_mask("biasane nek arep [MASK] file bs pake software ini"))
```

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Framework versions

- Transformers 4.26.0
- Pytorch 1.12.0+cu102
- Datasets 2.9.0
- Tokenizers 0.12.1
Bennet1996/donut-base-sroie6
Bennet1996
2023-05-10T15:16:22Z
42
0
transformers
[ "transformers", "pytorch", "tensorboard", "vision-encoder-decoder", "image-text-to-text", "generated_from_trainer", "dataset:imagefolder", "endpoints_compatible", "region:us" ]
image-text-to-text
2023-05-10T14:23:20Z
--- tags: - generated_from_trainer datasets: - imagefolder model-index: - name: donut-base-sroie6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # donut-base-sroie6 This model was trained from scratch on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
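A minimal inference sketch (the image path and task prompt are assumptions; check the repo for the actual decoder start token used during fine-tuning):

```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("Bennet1996/donut-base-sroie6")
model = VisionEncoderDecoderModel.from_pretrained("Bennet1996/donut-base-sroie6")

image = Image.open("receipt.png").convert("RGB")  # hypothetical input image
pixel_values = processor(image, return_tensors="pt").pixel_values

task_prompt = "<s>"  # assumed start prompt
decoder_input_ids = processor.tokenizer(task_prompt, add_special_tokens=False, return_tensors="pt").input_ids

outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```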
muhammadravi251001/fine-tuned-DatasetQAS-Squad-ID-with-indobert-large-p2-without-ITTL-without-freeze-LR-1e-05
muhammadravi251001
2023-05-10T15:10:41Z
21
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2023-04-06T10:14:56Z
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: fine-tuned-DatasetQAS-Squad-ID-with-indobert-large-p2-without-ITTL-without-freeze-LR-1e-05 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fine-tuned-DatasetQAS-Squad-ID-with-indobert-large-p2-without-ITTL-without-freeze-LR-1e-05 This model is a fine-tuned version of [indobenchmark/indobert-large-p2](https://huggingface.co/indobenchmark/indobert-large-p2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.5397 - Exact Match: 47.8725 - F1: 64.1189 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Exact Match | F1 | |:-------------:|:-----:|:----:|:---------------:|:-----------:|:-------:| | 1.8628 | 0.5 | 463 | 1.7727 | 40.7249 | 56.9043 | | 1.6706 | 1.0 | 926 | 1.6163 | 44.3912 | 61.0635 | | 1.5058 | 1.5 | 1389 | 1.5655 | 45.4339 | 61.5089 | | 1.4661 | 2.0 | 1852 | 1.5130 | 46.9055 | 63.5850 | | 1.3171 | 2.5 | 2315 | 1.5077 | 47.1914 | 63.4762 | | 1.3258 | 3.0 | 2778 | 1.4981 | 47.6034 | 64.3797 | | 1.1835 | 3.5 | 3241 | 1.5171 | 47.7043 | 64.1444 | | 1.1946 | 4.0 | 3704 | 1.5333 | 47.6539 | 64.3327 | | 1.0904 | 4.5 | 4167 | 1.5397 | 47.8725 | 64.1189 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu117 - Datasets 2.2.0 - Tokenizers 0.13.2
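A minimal usage sketch (the question/context pair is our own Indonesian example):

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="muhammadravi251001/fine-tuned-DatasetQAS-Squad-ID-with-indobert-large-p2-without-ITTL-without-freeze-LR-1e-05",
)
# hypothetical example
print(qa(question="Siapa presiden pertama Indonesia?", context="Soekarno adalah presiden pertama Indonesia."))
```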
Libra2023/ppo-LunarLander-v2
Libra2023
2023-05-10T15:02:47Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-05-10T15:02:26Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 223.52 +/- 31.51
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)
A minimal loading sketch; the checkpoint filename below is an assumption, so adjust it to match the file stored in this repo:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# "ppo-LunarLander-v2.zip" is the conventional filename and may differ in this repo
checkpoint = load_from_hub("Libra2023/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
BADeid/March_7th_LoCon_300
BADeid
2023-05-10T14:50:28Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-06T11:46:03Z
--- license: creativeml-openrail-m ---
fabianmosele/roaroar
fabianmosele
2023-05-10T14:45:32Z
0
1
null
[ "license:cc-by-nc-sa-4.0", "region:us" ]
null
2023-04-26T09:46:54Z
--- license: cc-by-nc-sa-4.0 Model M: Maamaa Model Y: Yooigi Model W: Waawaa Model ⅄: Waooigi --- The official models, frames and parameters from the synthographic motion picture "reflection of a reflection of a reflection" (2023).
rethem-expeditecommerce/MiniLM-L6-4k
rethem-expeditecommerce
2023-05-10T14:44:15Z
3
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "en", "dataset:s2orc", "dataset:flax-sentence-embeddings/stackexchange_xml", "dataset:ms_marco", "dataset:gooaq", "dataset:yahoo_answers_topics", "dataset:code_search_net", "dataset:search_qa", "dataset:eli5", "dataset:snli", "dataset:multi_nli", "dataset:wikihow", "dataset:natural_questions", "dataset:trivia_qa", "dataset:embedding-data/sentence-compression", "dataset:embedding-data/flickr30k-captions", "dataset:embedding-data/altlex", "dataset:embedding-data/simple-wiki", "dataset:embedding-data/QQP", "dataset:embedding-data/SPECTER", "dataset:embedding-data/PAQ_pairs", "dataset:embedding-data/WikiAnswers", "arxiv:1904.06472", "arxiv:2102.07033", "arxiv:2104.08727", "arxiv:1704.05179", "arxiv:1810.09305", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-05-08T21:38:12Z
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
language: en
license: apache-2.0
datasets:
- s2orc
- flax-sentence-embeddings/stackexchange_xml
- ms_marco
- gooaq
- yahoo_answers_topics
- code_search_net
- search_qa
- eli5
- snli
- multi_nli
- wikihow
- natural_questions
- trivia_qa
- embedding-data/sentence-compression
- embedding-data/flickr30k-captions
- embedding-data/altlex
- embedding-data/simple-wiki
- embedding-data/QQP
- embedding-data/SPECTER
- embedding-data/PAQ_pairs
- embedding-data/WikiAnswers
---

# all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.

## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F

#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0] #First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-MiniLM-L6-v2)

------

## Background
The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised contrastive learning objective. We used the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model and fine-tuned it on a 1B sentence-pairs dataset.
We use a contrastive learning objective: given a sentence from the pair, the model should predict which one, out of a set of randomly sampled other sentences, was actually paired with it in our dataset.

We developed this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face, as part of the project [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well as support from Google's Flax, JAX, and Cloud team members about efficient deep learning frameworks.

## Intended uses
Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.

By default, input text longer than 256 word pieces is truncated.

## Training procedure

### Pre-training
We use the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure.

### Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity of each possible sentence pair in the batch. We then apply the cross entropy loss by comparing with the true pairs.

#### Hyper parameters
We trained our model on a TPU v3-8. We trained the model for 100k steps using a batch size of 1024 (128 per TPU core). We used a learning rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is available in this repository: `train_script.py`.

#### Training data
We use the concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion. We sampled each dataset given a weighted probability whose configuration is detailed in the `data_config.json` file.
| Dataset | Paper | Number of training tuples |
|--------------------------------------------------------|:----------------------------------------:|:--------------------------:|
| [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
| [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 |
| [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
| [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
| [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395 |
| [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 |
| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
| [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 |
| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/)) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 |
| [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
| [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 |
| [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
| [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
| **Total** | | **1,170,060,424** |
mahimtalukder/whisper-small-bn
mahimtalukder
2023-05-10T14:42:47Z
77
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-05-08T18:18:16Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - wer model-index: - name: whisper-small-bn results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-small-bn This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1267 - Wer: 39.9506 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 2000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.1485 | 0.64 | 1000 | 0.1606 | 47.1872 | | 0.0888 | 1.27 | 2000 | 0.1267 | 39.9506 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
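A minimal transcription sketch ("audio.wav" is a placeholder path to a Bengali speech recording):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="mahimtalukder/whisper-small-bn")
print(asr("audio.wav")["text"])  # hypothetical audio file
```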
Cynthiaiii4/Text_classification_model_blu_v1
Cynthiaiii4
2023-05-10T13:56:04Z
107
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-10T11:27:56Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: Text_classification_model_blu_v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Text_classification_model_blu_v1 This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 40 - eval_batch_size: 40 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
ageng-anugrah/indobert-large-p2-finetuned-chunking
ageng-anugrah
2023-05-10T13:45:04Z
9
0
transformers
[ "transformers", "pytorch", "bert", "token-classification", "indobert", "indobenchmark", "id", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-05-10T09:23:16Z
---
language: id
tags:
- indobert
- indobenchmark
---

## How to use
### Load model and tokenizer
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("ageng-anugrah/indobert-large-p2-finetuned-chunking")
model = AutoModelForTokenClassification.from_pretrained("ageng-anugrah/indobert-large-p2-finetuned-chunking")
```

### Extract chunking tags
```python
import torch

def predict(model, tokenizer, sentence):
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    inputs = tokenizer(sentence.split(),
                       is_split_into_words=True,
                       return_offsets_mapping=True,
                       return_tensors="pt",
                       padding='max_length',
                       truncation=True,
                       max_length=512)

    model.to(device)  # move model to GPU if available

    ids = inputs["input_ids"].to(device)
    mask = inputs["attention_mask"].to(device)

    # forward pass
    outputs = model(ids, attention_mask=mask)
    logits = outputs[0]

    active_logits = logits.view(-1, model.num_labels)  # shape (batch_size * seq_len, num_labels)
    flattened_predictions = torch.argmax(active_logits, axis=1)  # shape (batch_size * seq_len,) - predictions at the token level

    tokens = tokenizer.convert_ids_to_tokens(ids.squeeze().tolist())
    token_predictions = [model.config.id2label[i] for i in flattened_predictions.cpu().numpy()]
    wp_preds = list(zip(tokens, token_predictions))  # list of tuples. Each tuple = (wordpiece, prediction)

    prediction = []
    for token_pred, mapping in zip(wp_preds, inputs["offset_mapping"].squeeze().tolist()):
        # only predictions on first word pieces are important
        if mapping[0] == 0 and mapping[1] != 0:
            prediction.append(token_pred[1])
        else:
            continue

    return sentence.split(), prediction

sentence = "BJ Habibie adalah Presiden Indonesia ke-3 yang lahir pada tanggal 25 Juni 1936"
words, labels = predict(model, tokenizer, sentence)
```
hoang14/sentiment-analysis-generic-fintuned
hoang14
2023-05-10T13:37:25Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-10T13:21:24Z
--- tags: - generated_from_trainer model-index: - name: sentiment-analysis-generic-fintuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sentiment-analysis-generic-fintuned This model is a fine-tuned version of [Seethal/sentiment_analysis_generic_dataset](https://huggingface.co/Seethal/sentiment_analysis_generic_dataset) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
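A minimal inference sketch (the inputs are hypothetical; label names come from the fine-tuned sentiment head):

```python
from transformers import pipeline

sentiment = pipeline("text-classification", model="hoang14/sentiment-analysis-generic-fintuned")
print(sentiment(["I really enjoyed this.", "This was a waste of time."]))
```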