modelId (string, 4-81 chars) | tags (list) | pipeline_tag (string, 17 classes) | config (dict) | downloads (int64, 0-59.7M) | first_commit (timestamp[ns, tz=UTC]) | card (string, 51-438k chars) |
---|---|---|---|---|---|---|
Arnold/wav2vec2-large-xlsr-turkish-demo-colab
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: creativeml-openrail-m
tags:
- coreml
- stable-diffusion
- text-to-image
---
# Core ML Converted Model:
- This model was converted to [Core ML for use on Apple Silicon devices](https://github.com/apple/ml-stable-diffusion). Conversion instructions can be found [here](https://github.com/godly-devotion/MochiDiffusion/wiki/How-to-convert-ckpt-or-safetensors-files-to-Core-ML).<br>
- Provide the model to an app such as Mochi Diffusion [Github](https://github.com/godly-devotion/MochiDiffusion) - [Discord](https://discord.gg/x2kartzxGv) to generate images.<br>
- `split_einsum` version is compatible with all compute unit options including Neural Engine.<br>
- `original` version is only compatible with CPU & GPU option.<br>
- Custom resolution versions are tagged accordingly.<br>
- `vae` tagged files have a vae embedded into the model.<br>
- Descriptions are posted as-is from original model source. Not all features and/or results may be available in CoreML format.<br>
- This model was converted with `vae-encoder` for i2i.
# braBeautifulRealistic_brav5:
Source(s): [CivitAI](https://civitai.com/models/25494/brabeautiful-realistic-asians-v5)
<img class="mantine-7aj0so" src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/8af758cc-37c2-48ca-aa41-4729db53eec0/width=450/grid-0001-1.jpeg" alt="grid-0001-1.webp" style="max-height: 100%; max-width: 100%;">
<img class="mantine-7aj0so" src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/375e1f7f-68e4-4be6-979f-6b1de0b04993/width=450/grid-0013-1.jpeg" alt="grid-0013-1.webp" style="max-height: 100%; max-width: 100%;">
<img class="mantine-7aj0so" src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/17244b8b-3ed3-4174-806d-9914be50f0aa/width=450/grid-0014-1.jpeg" alt="grid-0014-1.webp" style="max-height: 100%; max-width: 100%;">
<img class="mantine-7aj0so" src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/ca9fcb9f-d272-48ca-a306-1e80b2c14451/width=450/03216-8k 1 27 best quality masterpiece.jpeg" alt="03216-8k 1 27 best quality masterpiece.jpeg" style="max-height: 100%; max-width: 100%;">
<img class="mantine-7aj0so" src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/47e5c304-da3f-423f-9c3c-7768d28db3a8/width=450/03203-8k best quality masterpiece ultra highres.jpeg" alt="03203-8k best quality masterpiece ultra highres.jpeg" style="max-height: 100%; max-width: 100%;">
If you would like to see more and better versions of BRA, please consider supporting (trying to recoup GPU costs, lol):
<a target="_blank" rel="ugc" href="https://ko-fi.com/bankaiplease">https://ko-fi.com/bankaiplease</a>
Trained on countless pictures of beautiful Asian women (some NSFW images included).
This is the result of around three months of training, failing, retraining, and merging. After many failures and mild successes, it has finally reached a point where it is good enough to release to the public.
Trained mostly with BLIP and WD captions/tags, so prompts in a similar style are suggested.
This model is hosted online and can be run on the following website.
<a target="_blank" rel="ugc" href="https://randomseed.co/model/18">https://randomseed.co/model/18</a>
Example:
Pretty asian in a black swimsuit leaning against a wall
Korean with long black hair wearing a red shirt and a red polka dot blouse with long black hair
a woman with long hair and a black top is posing for a picture with her hand on her chin, 1girl, solo, long_hair, looking_at_viewer, brown_hair, simple_background, black_hair, jewelry, earrings, necklace, lips, black_shirt, ring, realistic
Negative Prompts:
Use negative TI embeddings as well as (lowres, worst quality), etc.
Feel free to use this for merges or mixes; just give credit.
Feel free to add me on Twitter if you have questions, feedback, requests for improvements, or want to ask about commissions or anything else:
<a target="_blank" rel="ugc" href="https://twitter.com/PleaseBanKai">https://twitter.com/PleaseBanKai</a>
|
Arpita/opus-mt-en-ro-finetuned-syn-to-react
|
[
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 9 | null |
---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# ubuqg63bl2 API Inference

## Get API Key
Get an API key from [Stable Diffusion API](http://stablediffusionapi.com/); no payment is needed.
Replace the key in the code below and change **model_id** to "ubuqg63bl2".
Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Model link: [View model](https://stablediffusionapi.com/models/ubuqg63bl2)
Credits: [View credits](https://civitai.com/?query=ubuqg63bl2)
View all models: [View Models](https://stablediffusionapi.com/models)
import requests
import json
url = "https://stablediffusionapi.com/api/v3/dreambooth"
payload = json.dumps({
"key": "",
"model_id": "ubuqg63bl2",
"prompt": "actual 8K portrait photo of gareth person, portrait, happy colors, bright eyes, clear eyes, warm smile, smooth soft skin, big dreamy eyes, beautiful intricate colored hair, symmetrical, anime wide eyes, soft lighting, detailed face, by makoto shinkai, stanley artgerm lau, wlop, rossdraws, concept art, digital painting, looking into camera",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"seed": None,
"guidance_scale": 7.5,
"multi_lingual": "no",
"panorama": "no",
"self_attention": "no",
"upscale": "no",
"embeddings": "embeddings_model_id",
"lora": "lora_model_id",
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
> Use this coupon code to get 25% off **DMGG0RBN**
|
ArshdeepSekhon050/DialoGPT-medium-RickAndMorty
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 495.80 +/- 12.60
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Aruden/DialoGPT-medium-harrypotterall
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 6 | 2023-05-11T06:49:57Z |
Use:
import torch
from transformers import BertTokenizerFast, AutoModelForSequenceClassification
tokenizer = BertTokenizerFast.from_pretrained('ozooora/kzlbert-3poi')
model = AutoModelForSequenceClassification.from_pretrained('ozooora/kzlbert-3poi', return_dict=True)
@torch.no_grad()
def predict(text):
    inputs = tokenizer(text, max_length=419, padding=True, truncation=True, return_tensors='pt')
    outputs = model(**inputs)
    predicted_probs = torch.nn.functional.softmax(outputs.logits, dim=1)
    predicted = torch.argmax(predicted_probs, dim=1).item()
    return predicted, predicted_probs[0].tolist()
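For example, a brief usage sketch (mapping the returned class index to a human-readable label depends on this model's `id2label` config, which is not shown here):
```python
label_id, probs = predict("This is a sample text.")
print(label_id, probs)
```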
|
Ateeb/EmotionDetector
|
[
"pytorch",
"funnel",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"FunnelForSequenceClassification"
],
"model_type": "funnel",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 32 | null |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 243.67 +/- 11.28
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
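Since the usage cell above is left as a template, here is a minimal hedged sketch of loading an SB3 PPO checkpoint for this environment from the Hub and evaluating it; `repo_id` and `filename` are placeholders, not values taken from this card:
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Placeholders: substitute this repository's id and the checkpoint file it contains.
checkpoint = load_from_hub(repo_id="<user>/<repo>", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Evaluate over 10 deterministic episodes (assumes SB3 >= 2.0 with Gymnasium envs).
eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```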
|
Atlasky/turkish-negator-nn
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | 2023-05-11T07:22:33Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: crazy_music_model_512cl_10epocs_16batch_test1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# crazy_music_model_512cl_10epocs_16batch_test1
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1946
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.2003 | 7.39 | 5000 | 0.1946 |
### Framework versions
- Transformers 4.29.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
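The card does not show how to run the checkpoint; below is a minimal hedged inference sketch using the Transformers `pipeline` API. The repository path is a placeholder, since the card only gives the model name:
```python
from transformers import pipeline

# Placeholder repo id; replace with the actual Hub path of this checkpoint.
generator = pipeline("text-generation", model="<namespace>/crazy_music_model_512cl_10epocs_16batch_test1")
print(generator("Your prompt here", max_new_tokens=64)[0]["generated_text"])
```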
|
Augustab/distilbert-base-uncased-finetuned-cola
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: creativeml-openrail-m
language:
- en
tags:
- LLM
- tensorRT
- ChatGLM
---
## Model Card for lyraChatGLM
lyraChatGLM is currently the **fastest ChatGLM-6B** available. To the best of our knowledge, it is the **first accelerated version of ChatGLM-6B**.
The inference speed of lyraChatGLM has achieved **10x** acceleration over the early original version. We are still working hard to further improve the performance.
Among its main features are:
- weights: original ChatGLM-6B weights released by THUDM.
- device: lyraChatGLM is mainly based on TensorRT compiled for SM=80 (A100, for example).
- batch_size: compiled with dynamic batch size, max batch_size = 8
## Speed
### test environment
- device: Nvidia A100 40G
- batch size: 8
**Since the early ChatGLM version didn't support batch inference, `original` in the table below was measured at batch_size=1.**
**According to [this discussion](https://huggingface.co/TMElyralab/lyraChatGLM/discussions/6), this bug has been fixed and the speed at batch_size=8 reaches up to 137 tokens/s. We will evaluate and update with the latest performance figures.**
|version|speed|
|:-:|:-:|
|original|30 tokens/s|
|lyraChatGLM|310 tokens/s|
## Model Sources
- **Repository:** https://huggingface.co/THUDM/chatglm-6b
## Try Demo in 2 fast steps
``` bash
#step 1
git clone https://huggingface.co/TMElyralab/lyraChatGLM
cd lyraChatGLM
#step 2
docker run --gpus=1 --rm --net=host -v ${PWD}:/workdir yibolu96/lyra-chatglm-env:0.0.1 python3 /workdir/demo.py
```
## Uses
```python
from transformers import AutoTokenizer
from lyraChatGLM import GLM6B, FasterChatGLM
import os
current_workdir = os.path.dirname(__file__)
MAX_OUT_LEN = 100
chatglm6b_dir = os.path.join(current_workdir, "models")
tokenizer = AutoTokenizer.from_pretrained(chatglm6b_dir, trust_remote_code=True)
input_str = ["为什么我们需要对深度学习模型加速?", ]
inputs = tokenizer(input_str, return_tensors="pt", padding=True)
input_ids = inputs.input_ids.to('cuda:0')
plan_path = os.path.join(current_workdir, "models/glm6b-bs8.ftm")
# kernel for chat model.
kernel = GLM6B(plan_path=plan_path,
batch_size=1,
num_beams=1,
use_cache=True,
num_heads=32,
emb_size_per_heads=128,
decoder_layers=28,
vocab_size=150528,
max_seq_len=MAX_OUT_LEN)
chat = FasterChatGLM(model_dir=chatglm6b_dir, kernel=kernel).half().cuda()
# generate
sample_output = chat.generate(inputs=input_ids, max_length=MAX_OUT_LEN)
# de-tokenize model output to text
res = tokenizer.decode(sample_output[0], skip_special_tokens=True)
print(res)
```
## Demo output
### input
为什么我们需要对深度学习模型加速? 。
### output
为什么我们需要对深度学习模型加速? 深度学习模型的训练需要大量计算资源,特别是在训练模型时,需要大量的内存、GPU(图形处理器)和其他计算资源。因此,训练深度学习模型需要一定的时间,并且如果模型不能快速训练,则可能会导致训练进度缓慢或无法训练。
以下是一些原因我们需要对深度学习模型加速:
1. 训练深度神经网络需要大量的计算资源,特别是在训练深度神经网络时,需要更多的计算资源,因此需要更快的训练速度。
### TODO:
We plan to implement a FasterTransformer version to publish a much faster release. Stay tuned!
## Citation
``` bibtex
@Misc{lyraChatGLM2023,
author = {Kangjian Wu, Zhengtao Wang, Yibo Lu, Bin Wu},
title = {lyraChatGLM: Accelerating ChatGLM by 10x+},
howpublished = {\url{https://huggingface.co/TMElyralab/lyraChatGLM}},
year = {2023}
}
```
## Report bug
- Start a discussion to report any bugs: https://huggingface.co/TMElyralab/lyraChatGLM/discussions
- Report bugs with a `[bug]` mark in the title.
|
Augustvember/WokkaBot2
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
language:
- en
- hi
- as
- bn
- gu
- kn
- ml
- mr
- or
- pa
- ta
- te
thumbnail: "https://www.kooapp.com/_next/static/media/logoKuSolidOutline.1f4fa971.svg"
license: mit
pipeline_tag: fill-mask
widget:
- text: "I like multilingual [MASK]."
example_title: "English"
- text: "मुझे बहुभाषी वर्गीकरण [MASK] है |"
example_title: "Hindi"
- text: "বহুভাষিক শ্ৰেণীবিভাজন [MASK] ভাল লাগে।"
example_title: "Assamese"
- text: "আমি বহুভাষিক শ্রেণীবিভাগ [MASK] করি।"
example_title: "Bengali"
- text: "મને બહુભાષી વર્ગીકરણ [MASK] છે."
example_title: "Gujarati"
- text: "ನಾನು [MASK] ವರ್ಗೀಕರಣವನ್ನು ಇಷ್ಟಪಡುತ್ತೇನೆ."
example_title: "Kannada"
- text: "എനിക്ക് ബഹുഭാഷാ [MASK] ഇഷ്ടമാണ്."
example_title: "Malayalam"
- text: "मला बहुभाषिक वर्गीकरण [MASK]."
example_title: "Marathi"
- text: "ମୁଁ ବହୁଭାଷୀ ବର୍ଗୀକରଣ [MASK] କରେ |"
example_title: "Oriya"
- text: "ਮੈਨੂੰ ਬਹੁ-ਭਾਸ਼ਾਈ ਵਰਗੀਕਰਨ [MASK] ਹੈ।"
example_title: "Punjabi"
- text: "நான் [MASK] வகைப்படுத்தலை விரும்புகிறேன்."
example_title: "Tamil"
- text: "నాకు బహుభాషా వర్గీకరణ [MASK] ఇష్టం."
example_title: "Telugu"
---
# Model Card for KooBERT
KooBERT is a masked language model trained on data from the multilingual micro-blogging social media platform [Koo India](https://www.kooapp.com/). <br>
This model was built in collaboration with Koo India and AI4Bharat.
## Model Details
### Model Description
On the Koo platform, microblogs (Koos) are limited to 400 characters and are available in multiple languages.
The model was trained on a dataset containing multilingual Koos from Jan 2020 to Nov 2022 with a masked language modeling objective.
- **Model type:** BERT based pretrained model
- **Language(s) (NLP):** assamese, bengali, english, gujarati, hindi, kannada, malayalam, marathi, oriya, punjabi, tamil, telugu
- **License:** KooBERT is released under the MIT License.
## Uses
This model can be used to perform downstream tasks like content classification, toxicity detection, etc. for the supported Indic languages. <br>
It can also be used with the sentence-transformers library to create multilingual vector embeddings for other uses.
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## Bias, Risks, and Limitations
As with any machine learning model, KooBERT may have limitations and biases. It is important to keep in mind that this model was trained on Koo Social Media data and may not generalize well to other domains. It is also possible that the model may have biases in the data it was trained on, which may affect its predictions. It is recommended to evaluate the model on your specific use case and data to ensure it is appropriate for your needs.
## How to Get Started with the Model
Use the code below to get started with the model for general finetuning tasks. Please note this is just a sample for finetuning.
```
import torch
import numpy as np
from datasets import load_dataset
import evaluate
from transformers import AutoTokenizer, AutoModelForSequenceClassification, TrainingArguments, Trainer
# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("Koodsml/KooBERT")
model = AutoModelForSequenceClassification.from_pretrained("Koodsml/KooBERT", num_labels=2)
# Metric used by compute_metrics below (Matthews correlation for CoLA)
metric = evaluate.load("glue", "cola")
def compute_metrics(eval_pred):
logits, labels = eval_pred
predictions = np.argmax(logits, axis=-1)
return metric.compute(predictions=predictions, references=labels)
def tokenize_function(examples):
return tokenizer(examples["text"], padding='max_length', truncation=True, max_length=128)
# Load the CoLA dataset
dataset = load_dataset("glue","cola")
dataset = dataset.rename_column('sentence', 'text')
datset_tok = dataset.map(tokenize_function, batched=True)
# Set the device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
# Define the training arguments
training_args = TrainingArguments(output_dir="test_trainer", evaluation_strategy="epoch")
# Define the trainer
trainer = Trainer(
model=model,
args=training_args,
train_dataset=datset_tok['train'],
eval_dataset=datset_tok['validation'],
compute_metrics=compute_metrics,
)
# Fine-tune on the CoLA dataset
trainer.train()
# Evaluate on the CoLA dataset
eval_results = trainer.evaluate(eval_dataset=datset_tok['validation'])
print(eval_results)
```
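Before any fine-tuning, KooBERT can also be queried directly for masked-token prediction, mirroring the widget examples above; a minimal sketch:
```python
from transformers import pipeline

# Fill-mask inference with the pretrained checkpoint
fill = pipeline("fill-mask", model="Koodsml/KooBERT")
print(fill("I like multilingual [MASK]."))
```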
We can also use KooBERT with the sentence-transformers library to create multilingual vector embeddings. Here is an example:
```
from sentence_transformers import SentenceTransformer
# Load the KooBERT model
koo_model = SentenceTransformer('Koodsml/KooBERT', device="cuda")
# Define the text
text = "यह हमेशा से हमारी सोच है"
# Get the embedding
embedding = koo_model.encode(text)
print(embedding)
```
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
Following is the distribution of tokens over languages:
| Language | Koos | Avg Tokens per Koo | Total Tokens |
|------------------|-------------|---------------------|--------------|
| assamese | 562,050 | 16.4414198 | 9,240,900 |
| bengali | 2,110,380 | 12.08918773 | 25,512,780 |
| english | 17,889,600 | 10.93732057 | 195,664,290 |
| gujarati | 1,825,770 | 14.33965395 | 26,180,910 |
| hindi | 35,948,760 | 16.2337502 | 583,583,190 |
| kannada | 2,653,860 | 12.04577107 | 31,967,790 |
| malayalam | 71,370 | 10.32744851 | 737,070 |
| marathi | 1,894,080 | 14.81544602 | 28,061,640 |
| oriya | 87,930 | 14.1941317 | 1,248,090 |
| punjabi | 940,260 | 18.59961075 | 17,488,470 |
| tamil | 1,687,710 | 12.12147822 | 20,457,540 |
| telugu | 2,471,940 | 10.55735576 | 26,097,150 |
Total Koos = 68,143,710<br>
Total Tokens = 966,239,820 (based on a close approximation)
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing
Personally Identifiable Information (PII) was removed from the microblog data before training.
Temperature sampling was used to upsample low-resource languages, with a temperature value of 0.7 (see Sec. 3.1 of https://arxiv.org/pdf/1901.07291.pdf).
#### Training Hyperparameters
**Training regime**
+ Training steps - 1M steps
+ Warmup - 10k steps
+ Learning Rate - 5e-4
+ Scheduler - Linear Decay
+ Optimizer - Adam
+ Batch Size of 4096 sequences
+ Precision - fp32
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
The model has not been benchmarked yet. We shall be releasing the benchmark data in a future update.
## Model Examination
<!-- Relevant interpretability work for the model goes here -->
### Model Architecture and Objective
KooBERT is pretrained with BERT Architecture on Masked Language Modeling with a vocabulary size of 128k and max sequence length of 128 tokens.
### Compute Infrastructure
KooBERT was trained on TPU v3 with 128 cores which took over 5 days.
## Contributors
Mitesh Khapra ([miteshk@cse.iitm.ac.in](mailto:miteshk@cse.iitm.ac.in)) - IITM AI4Bharat<br>
Sumanth Doddapaneni ([dsumanth17@gmail.com](mailto:dsumanth17@gmail.com)) - IITM AI4Bharat<br>
Smiral Rashinkar ([smiral.rashinkar@kooapp.com](mailto:smiral.rashinkar@kooapp.com)) - Koo India
|
Axcel/DialoGPT-small-rick
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 14 | null |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Find your model_id: fwtn/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Aybars/XLM_Turkish
|
[
"pytorch",
"xlm-roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"XLMRobertaForQuestionAnswering"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4 | null |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Find your model_id: zeeshan-sardar/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Ayham/albert_distilgpt2_summarization_cnn_dailymail
|
[
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 9 | null |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Neronuser/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
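As a hedged follow-up, the snippet below sketches acting greedily with the downloaded Q-table. It assumes the pickle follows the course layout (a dict with `"qtable"` and `"env_id"` entries) and a Gymnasium-style API; `load_from_hub` is the helper defined in the course notebook, so plain `hf_hub_download` is used here instead:
```python
import pickle
import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download

# Download and unpickle the model dict (what the course's load_from_hub helper does).
path = hf_hub_download(repo_id="Neronuser/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

# is_slippery=False matches the "no_slippery" variant noted in the card.
env = gym.make(model["env_id"], is_slippery=False)
state, info = env.reset()
action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
print(action)
```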
|
Ayham/bert_distilgpt2_summarization_cnn_dailymail
|
[
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 6 | null |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Neronuser/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
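Similarly, a hedged sketch of rolling out the greedy policy for one episode; it assumes the same pickle layout (a dict with `"qtable"` and `"env_id"` entries, as in the course notebooks) and a Gymnasium-style API:
```python
import pickle
import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="Neronuser/Taxi-v3", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

env = gym.make(model["env_id"])
state, info = env.reset()
done, total_reward = False, 0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy policy
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(total_reward)
```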
|
Ayham/bert_gpt2_summarization_cnndm
|
[
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4 | 2023-05-11T08:30:25Z |
---
license: apache-2.0
language:
- zh
---
Introduction to the AISHELL-3 data:
The AISHELL-3 Mandarin Chinese speech corpus from Beijing Shell Shell Technology contains 85 hours of speech across 88,035 utterances and can serve as the basis for a multi-speaker synthesis system. Recording took place in a quiet indoor environment using high-fidelity microphones (44.1 kHz, 16-bit).
218 speakers from different accent regions of China took part in the recording. Professional annotators produced pinyin and prosody labels, and after strict quality inspection the corpus's phone/character accuracy is above 98%.
Introduction to the VITS model:
This is a pretrained model trained with vits_chinese on Chinese data from 175 AISHELL-3 speakers. It can be used directly for fine-tuning voice cloning, which greatly shortens fine-tuning time.
The model was trained for roughly two weeks (500K steps) on a Tesla T4 16G. Fine-tuning on 1-3 hours of single-speaker speech data is enough to reach very realistic results; its MOS score is among the closest to that of real speech.
The release contains two model files, D_AISHELL.pth and G_AISHELL.pth, which together form the pretrained model.
Fine-tuning:
Place these two model files where utils.save_checkpoint writes its checkpoints:
utils.save_checkpoint(net_g, optim_g, hps.train.learning_rate, epoch, os.path.join(hps.model_dir, "G_{}.pth".format(global_step)))
utils.save_checkpoint(net_d, optim_d, hps.train.learning_rate, epoch, os.path.join(hps.model_dir, "D_{}.pth".format(global_step)))
Inference:
Simply use the G_AISHELL.pth that has been fine-tuned on your personal speech data:
utils.load_checkpoint("G_pretrained.pth", net_g, None)
|
Ayham/bertgpt2_cnn
|
[
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4 | null |
---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# troi4bwiyt4 API Inference

## Get API Key
Get an API key from [Stable Diffusion API](http://stablediffusionapi.com/); no payment is needed.
Replace the key in the code below and change **model_id** to "troi4bwiyt4".
Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Model link: [View model](https://stablediffusionapi.com/models/troi4bwiyt4)
Credits: [View credits](https://civitai.com/?query=troi4bwiyt4)
View all models: [View Models](https://stablediffusionapi.com/models)
import requests
import json
url = "https://stablediffusionapi.com/api/v3/dreambooth"
payload = json.dumps({
"key": "",
"model_id": "troi4bwiyt4",
"prompt": "actual 8K portrait photo of gareth person, portrait, happy colors, bright eyes, clear eyes, warm smile, smooth soft skin, big dreamy eyes, beautiful intricate colored hair, symmetrical, anime wide eyes, soft lighting, detailed face, by makoto shinkai, stanley artgerm lau, wlop, rossdraws, concept art, digital painting, looking into camera",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"seed": None,
"guidance_scale": 7.5,
"multi_lingual": "no",
"panorama": "no",
"self_attention": "no",
"upscale": "no",
"embeddings": "embeddings_model_id",
"lora": "lora_model_id",
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
> Use this coupon code to get 25% off **DMGG0RBN**
|
Ayham/distilbert_gpt2_summarization_cnndm
|
[
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 6 | null |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 279.35 +/- 19.41
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
import gymnasium
from huggingface_sb3 import load_from_hub, package_to_hub
from huggingface_hub import notebook_login # To log to our Hugging Face account to be able to upload models to the Hub.
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.env_util import make_vec_env
import gymnasium as gym
# First, we create our environment called LunarLander-v2
env = gym.make("LunarLander-v2")
# Then we reset this environment
observation, info = env.reset()
for _ in range(20):
# Take a random action
action = env.action_space.sample()
print("Action taken:", action)
# Do this action in the environment and get
# next_state, reward, terminated, truncated and info
observation, reward, terminated, truncated, info = env.step(action)
# If the game is terminated (in our case we land, crashed) or truncated (timeout)
if terminated or truncated:
# Reset the environment
print("Environment is reset")
observation, info = env.reset()
env.close()
# Create the environment
env = make_vec_env('LunarLander-v2', n_envs=16)
model = PPO(
policy = 'MlpPolicy',
env = env,
n_steps = 1024,
batch_size = 64,
n_epochs = 4,
gamma = 0.999,
gae_lambda = 0.98,
ent_coef = 0.01,
verbose=1)
# TODO: Train it for 1,000,000 timesteps
model.learn(total_timesteps=1000000)
# TODO: Specify file name for model and save the model to file
model_name = "ppo-LunarLander-v2"
model.save(model_name)
# TODO: Evaluate the agent
# Create a new environment for evaluation
eval_env = gym.make("LunarLander-v2")
# Evaluate the model with 10 evaluation episodes and deterministic=True
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
# Print the results
print(f"mean_reward={mean_reward:.2f} +/- {std_reward}")
...
```
|
Ayham/distilbert_roberta_summarization_cnn_dailymail
|
[
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 14 | null |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 259.53 +/- 27.73
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
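As above, the usage cell is a template; a minimal hedged sketch of loading the checkpoint and running a deterministic rollout (`repo_id` and `filename` are placeholders, not values taken from this card):
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Placeholders: substitute this repository's id and checkpoint file name.
checkpoint = load_from_hub(repo_id="<user>/<repo>", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, info = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
env.close()
```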
|
Ayham/xlmroberta_gpt2_summarization_xsum
|
[
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:xsum",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 9 | null |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# nakcnx/setfit-paraphrase-multilingual-MiniLM-bad_topic
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("nakcnx/setfit-paraphrase-multilingual-MiniLM-bad_topic")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
Ayjayo/DialoGPT-medium-AyjayoAI
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 12 | 2023-05-11T09:16:53Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 10 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 10,
"warmup_steps": 1,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
AyushPJ/test-squad-trained-finetuned-squad
|
[
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"DistilBertForQuestionAnswering"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8 | null |
---
license: mit
---
# Sentence BERT fine-tuned commodities
This model is part of a collection of fine-tuned Sentence BERT models that were generated with the data of the "TRENCHANT: TRENd PrediCtion on Heterogeneous informAtion NeTworks" article.
Source code and networks are available at the following GitHub repo: https://github.com/paulorvdc/TRENCHANT
## how to cite
```
@article{doCarmo_ReisFilho_Marcacini_2023,
title={TRENCHANT: TRENd PrediCtion on Heterogeneous informAtion NeTworks},
volume={13},
url={https://sol.sbc.org.br/journals/index.php/jidm/article/view/2546},
DOI={10.5753/jidm.2022.2546},
number={6},
journal={Journal of Information and Data Management},
author={do Carmo, P. and Reis Filho, I. J. and Marcacini, R.},
year={2023},
month={Jan.}
}
```
## how to use
```
from sentence_transformers import SentenceTransformer, LoggingHandler
import numpy as np
import logging
# load model
np.set_printoptions(threshold=100)
logging.basicConfig(format='%(asctime)s - %(message)s',
datefmt='%Y-%m-%d %H:%M:%S',
level=logging.INFO,
handlers=[LoggingHandler()])
model = SentenceTransformer('paulorvdc/sentencebert-fine-tuned-months-corn')
finetuned_embeddings = list(model.encode(['Livestock Producers in Brazil Fear Diversion of Corn to Export and Ethanol Production', 'Brazilian Farmers Undecided about Safrinha Corn Acreage']))
```
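The resulting embeddings can then be compared, for example with cosine similarity; a brief sketch continuing from the snippet above (`util.cos_sim` is part of sentence-transformers):
```python
from sentence_transformers import util

# Cosine similarity between the two fine-tuned headline embeddings.
similarity = util.cos_sim(finetuned_embeddings[0], finetuned_embeddings[1])
print(float(similarity))
```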
|
BOON/electra-xlnet
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0-MLP
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 12.86 +/- 16.21
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
BSC-LT/RoBERTalex
|
[
"pytorch",
"roberta",
"fill-mask",
"es",
"dataset:legal_ES",
"dataset:temu_legal",
"arxiv:2110.12201",
"transformers",
"legal",
"spanish",
"license:apache-2.0",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 24 | null |
---
license: mit
---
# Sentence BERT fine-tuned commodities
This model is part of a collection of fine-tuned Sentence BERT models that were generated with the data of the "TRENCHANT: TRENd PrediCtion on Heterogeneous informAtion NeTworks" article.
Source code and networks are available at the following GitHub repo: https://github.com/paulorvdc/TRENCHANT
## how to cite
```
@article{doCarmo_ReisFilho_Marcacini_2023,
title={TRENCHANT: TRENd PrediCtion on Heterogeneous informAtion NeTworks},
volume={13},
url={https://sol.sbc.org.br/journals/index.php/jidm/article/view/2546},
DOI={10.5753/jidm.2022.2546},
number={6},
journal={Journal of Information and Data Management},
author={do Carmo, P. and Reis Filho, I. J. and Marcacini, R.},
year={2023},
month={Jan.}
}
```
## how to use
```
from sentence_transformers import SentenceTransformer, LoggingHandler
import numpy as np
import logging
# load model
np.set_printoptions(threshold=100)
logging.basicConfig(format='%(asctime)s - %(message)s',
datefmt='%Y-%m-%d %H:%M:%S',
level=logging.INFO,
handlers=[LoggingHandler()])
model = SentenceTransformer('paulorvdc/sentencebert-fine-tuned-weeks-corn')
finetuned_embeddings = list(model.encode(['Livestock Producers in Brazil Fear Diversion of Corn to Export and Ethanol Production', 'Brazilian Farmers Undecided about Safrinha Corn Acreage']))
```
|
BSC-LT/roberta-base-biomedical-clinical-es
|
[
"pytorch",
"roberta",
"fill-mask",
"es",
"arxiv:2109.03570",
"arxiv:2109.07765",
"transformers",
"biomedical",
"clinical",
"spanish",
"license:apache-2.0",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 27 | null |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 180 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 180,
"warmup_steps": 18,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
BSC-LT/roberta-base-bne-sqac
|
[
"pytorch",
"roberta",
"question-answering",
"es",
"dataset:BSC-TeMU/SQAC",
"arxiv:1907.11692",
"arxiv:2107.07253",
"transformers",
"national library of spain",
"spanish",
"bne",
"qa",
"question answering",
"license:apache-2.0",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 10 | null |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
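If you need the trained checkpoint locally before resuming, here is a minimal download sketch with `huggingface_hub` (the files land in the local Hugging Face cache; wiring them into your run configuration is up to your setup):
```python
from huggingface_hub import snapshot_download

# Downloads this card's model repo and returns the folder path
# containing the .onnx / configuration files.
local_path = snapshot_download(repo_id="JoBuettner/ppo-SnowballTarget")
print(local_path)
```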
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Find your model_id: JoBuettner/ppo-SnowballTarget
3. Select your .nn or .onnx file
4. Click on Watch the agent play 👀
|
BW/TEST
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 14 | null |
---
language:
- nl
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Dutch - RTL
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: dutch
split: test
args: dutch
metrics:
- name: Wer
type: wer
value: 24.924531584171586
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Dutch - RTL
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1895
- Wer: 24.9245
## Model description
More information needed
## Intended uses & limitations
More information needed
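Still, a minimal transcription sketch with the 🤗 `pipeline` API would look like the following. The Hub id of this fine-tuned checkpoint is not stated in the card, so the base `openai/whisper-small` id is used as a stand-in, and the audio file name is an arbitrary placeholder:
```python
from transformers import pipeline

# Stand-in model id: replace it with this fine-tuned checkpoint's Hub id.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")

# "sample_nl.wav" is a placeholder; any Dutch speech recording works.
print(asr("sample_nl.wav")["text"])
```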
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1209 | 0.78 | 1000 | 0.1895 | 24.9245 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Badr/model1
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: mit
language:
- en
metrics:
- accuracy
library_name: sentence-transformers
tags:
- pqai
- patents
- prior-art-search
---
|
Bagus/wav2vec2-xlsr-japanese-speech-emotion-recognition
|
[
"pytorch",
"wav2vec2",
"audio-classification",
"ja",
"dataset:jtes",
"transformers",
"audio",
"speech",
"speech-emotion-recognition",
"has_space"
] |
audio-classification
|
{
"architectures": [
"HubertForSequenceClassification"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 26 | null |
---
license: mit
language: ja
library_name: transformers
tags:
- pytorch
- deberta
- deberta-v2
- commonsenseqa
- commonsense_qa
- commonsense-qa
- CommonsenseQA
datasets:
- wikipedia
- cc100
- oscar
metrics:
- accuracy
---
# このモデルはdeberta-v2-tiny-japaneseをファインチューニングしてCommonsenseQA(選択式の質問)に用いれるようにしたものです。
このモデルはdeberta-v2-tiny-japaneseをyahoo japan/JGLUEのJCommonsenseQA( https://github.com/yahoojapan/JGLUE ) を用いてファインチューニングしたものです。
# This model is fine-tuned model for CommonsenseQA which is based on deberta-v2-tiny-japanese
This model is fine-tuned by using JGLUE/JCommonsenseQA dataset.
You could use this model for CommonsenseQA tasks.
# How to use 使い方
Install transformers and pytorch, then run the code below to have the model answer CommonsenseQA-style multiple-choice questions. Please execute this code.
```python
from transformers import AutoTokenizer, AutoModelForMultipleChoice
import torch
import numpy as np
# load the tokenizer and the fine-tuned model
tokenizer = AutoTokenizer.from_pretrained('Mizuiro-sakura/deberta-v2-japanese-tiny-finetuned-commonsenseqa')
model = AutoModelForMultipleChoice.from_pretrained('Mizuiro-sakura/deberta-v2-japanese-tiny-finetuned-commonsenseqa')
# define the question and the answer choices
question = '電子機器で使用される最も主要な電子回路基板の事をなんと言う?'
choice1 = '掲示板'
choice2 = 'パソコン'
choice3 = 'マザーボード'
choice4 = 'ハードディスク'
choice5 = 'まな板'
# tokenize (encode) the question paired with each choice
token = tokenizer([question,question,question,question,question],[choice1,choice2,choice3,choice4,choice5],return_tensors='pt',padding=True)
leng=len(token['input_ids'][0])
# reshape the inputs to (batch, num_choices, seq_len) as the model expects
X1 = np.empty(shape=(1, 5, leng))
X2 = np.empty(shape=(1, 5, leng))
X1[0, :, :] = token['input_ids']
X2[0, :, :] = token['attention_mask']
# feed the tokens to the model
results = model(torch.tensor(X1).to(torch.int64),torch.tensor(X2).to(torch.int64))
# take the index of the choice with the highest logit
max_result=torch.argmax(results.logits)
print(max_result)
```
# モデルの精度 accuracy of model
51.1
(参考 BERT : 72.0, XLM RoBERTa base : 68.7, LUKE : 80.0)
# deberta-v2-base-japaneseとは?
日本語Wikipedeia(3.2GB)および、cc100(85GB)、oscar(54GB)を用いて訓練されたモデルです。
京都大学黒橋研究室が公表されました。
# Model description
This is a Japanese DeBERTa V2 base model pre-trained on Japanese Wikipedia, the Japanese portion of CC-100, and the Japanese portion of OSCAR.
# Acknowledgments 謝辞
モデルを公開してくださった京都大学黒橋研究室には感謝いたします。
I would like to thank Kurohashi Lab at Kyoto University.
|
Barytes/hellohf
|
[
"tf",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"transformers",
"exbert",
"license:apache-2.0",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 2 | 2023-05-11T10:50:43Z |
---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# d8wkbf38wg API Inference

## Get API Key
Get API key from [Stable Diffusion API](http://stablediffusionapi.com/), No Payment needed.
Replace Key in below code, change **model_id** to "d8wkbf38wg"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Model link: [View model](https://stablediffusionapi.com/models/d8wkbf38wg)
Credits: [View credits](https://civitai.com/?query=d8wkbf38wg)
View all models: [View Models](https://stablediffusionapi.com/models)
```python
import requests
import json

url = "https://stablediffusionapi.com/api/v3/dreambooth"

payload = json.dumps({
  "key": "",
  "model_id": "d8wkbf38wg",
  "prompt": "actual 8K portrait photo of gareth person, portrait, happy colors, bright eyes, clear eyes, warm smile, smooth soft skin, big dreamy eyes, beautiful intricate colored hair, symmetrical, anime wide eyes, soft lighting, detailed face, by makoto shinkai, stanley artgerm lau, wlop, rossdraws, concept art, digital painting, looking into camera",
  "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
  "width": "512",
  "height": "512",
  "samples": "1",
  "num_inference_steps": "30",
  "safety_checker": "no",
  "enhance_prompt": "yes",
  "seed": None,
  "guidance_scale": 7.5,
  "multi_lingual": "no",
  "panorama": "no",
  "self_attention": "no",
  "upscale": "no",
  "embeddings": "embeddings_model_id",
  "lora": "lora_model_id",
  "webhook": None,
  "track_id": None
})

headers = {
  'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN**
|
Batsy24/DialoGPT-small-Twilight_EdBot
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 6 | 2023-05-11T11:00:39Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# ilhkn/sentence_classifier2
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("ilhkn/sentence_classifier2")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
BatuhanYilmaz/bert-finetuned-mrpc
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xlsum-fi
metrics:
- rouge
model-index:
- name: my_awesome_mt5-sum-fi
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xlsum-fi
type: xlsum-fi
config: finnish
split: test
args: finnish
metrics:
- name: Rouge1
type: rouge
value: 0.0144
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_mt5-sum-fi
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the xlsum-fi dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 0.0144
- Rouge2: 0.0031
- Rougel: 0.0138
- Rougelsum: 0.0137
- Gen Len: 6.4848
## Model description
More information needed
## Intended uses & limitations
More information needed
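A minimal usage sketch is given below. Note the NaN validation loss above: generations from this checkpoint may not be meaningful. The checkpoint's Hub id is not stated here, so the base `google/mt5-small` id is used as a stand-in, and the Finnish input sentence is arbitrary:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "google/mt5-small"  # stand-in: swap in this fine-tuned checkpoint's id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "Suomen talous kasvoi viime vuonna odotettua nopeammin."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```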
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 91 | nan | 0.0144 | 0.0031 | 0.0138 | 0.0137 | 6.4848 |
| No log | 2.0 | 182 | nan | 0.0144 | 0.0031 | 0.0138 | 0.0137 | 6.4848 |
| No log | 3.0 | 273 | nan | 0.0144 | 0.0031 | 0.0138 | 0.0137 | 6.4848 |
| No log | 4.0 | 364 | nan | 0.0144 | 0.0031 | 0.0138 | 0.0137 | 6.4848 |
### Framework versions
- Transformers 4.29.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
BatuhanYilmaz/bert-finetuned-nerxD
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
BatuhanYilmaz/distilbert-base-uncased-finetuned-squad-d5716d28
|
[
"pytorch",
"distilbert",
"fill-mask",
"en",
"dataset:squad",
"arxiv:1910.01108",
"transformers",
"question-answering",
"license:apache-2.0",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 18 | null |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Find your model_id: dawoz/ppo-SnowballTarget
3. Select your .nn or .onnx file
4. Click on Watch the agent play 👀
|
BatuhanYilmaz/dummy
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
language: en
thumbnail: https://huggingface.co/front/thumbnails/google.png
license: apache-2.0
---
# ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators
ELECTRA is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a GAN. At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the SQuAD 2.0 dataset.
For a detailed description and experimental results, please refer to our paper ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators.
This repository contains code to pre-train ELECTRA, including small ELECTRA models on a single GPU. It also supports fine-tuning ELECTRA on downstream tasks including classification tasks (e.g., GLUE), QA tasks (e.g., SQuAD), and sequence tagging tasks (e.g., text chunking).
## How to use the discriminator in `transformers`
```python
from transformers import ElectraForPreTraining, ElectraTokenizerFast
import torch

discriminator = ElectraForPreTraining.from_pretrained("google/electra-base-discriminator")
tokenizer = ElectraTokenizerFast.from_pretrained("google/electra-base-discriminator")

sentence = "The quick brown fox jumps over the lazy dog"
fake_sentence = "The quick brown fox fake over the lazy dog"

fake_tokens = tokenizer.tokenize(fake_sentence)
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")
discriminator_outputs = discriminator(fake_inputs)
predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2)

[print("%7s" % token, end="") for token in fake_tokens]
print()
[print("%7s" % int(prediction), end="") for prediction in predictions[0].tolist()]
```
|
BatuhanYilmaz/marian-finetuned-kde4-en-to-fr
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: other
inference: false
---
## Dromedary-65B-LoRA HF
These files are the result of merging the [delta weights of IBM's Dromedary 65B LoRA](https://huggingface.co/zhiqings/dromedary-65b-lora-delta-v0) with the original Llama 65B model.
## Repositories available
* [4bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/dromedary-65B-lora-GPTQ)
* [4bit and 5bit GGML models for CPU inference in llama.cpp](https://huggingface.co/TheBloke/dromedary-65B-lora-GGML)
* [float16 unquantised model for GPU](https://huggingface.co/TheBloke/dromedary-65B-lora-HF)
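For the float16 HF weights listed above, a minimal loading sketch with 🤗 Transformers follows (it assumes enough GPU/CPU memory for a 65B model and the `accelerate` package for `device_map="auto"`; the prompt is arbitrary):
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

repo_id = "TheBloke/dromedary-65B-lora-HF"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,
    device_map="auto",  # spreads the 65B weights across available devices
)

prompt = "Explain what self-alignment means for large language models."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```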
# Original Dromedary Model Card
See https://github.com/IBM/Dromedary#model-weights for instructions.
## Model details
<div align="center">
<img src="https://raw.githubusercontent.com/IBM/Dromedary/main/assets/images/dromedary_logo.svg" alt="Dromedary Logo"/>
</div>
**Model type:**
Dromedary is an open-source self-aligned language model trained with minimal human supervision.
The base language model is LLaMA-65b, based on the transformer architecture.
**Model date:**
Dromedary was trained between April 2023 and May 2023, but its knowledge only goes up until Sept-2021.
**Organizations developing the model:**
The Dromedary team as a joint effort between CMU and IBM.
**Paper or resources for more information:**
https://mitibmdemos.draco.res.ibm.com/dromedary
**License:**
LLaMA's Non-commercial bespoke license
**Where to send questions or comments about the model:**
https://github.com/IBM/Dromedary/issues
## Intended use
**Primary intended uses:**
The primary use of Dromedary is research on the alignment of large language models.
**Primary intended users:**
The primary intended users of the model are researchers in artificial intelligence.
## Delta weights
We use the following configuration for the LoRA weights:
```
--lora_target_modules='[q_proj,k_proj,v_proj,o_proj]' \
--lora_r=16 \
```
## Training dataset
Fewer than 300 lines of human annotations (including < 200 seed prompts, 16 generic principles, and 5 exemplars for in-context learning).
## Evaluation dataset
We evaluate Dromedary on TruthfulQA and HHH Eval, as well as Vicuna benchmark questions.
|
BatuhanYilmaz/mlm-finetuned-imdb
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 32.90 +/- 12.58
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Baybars/debateGPT
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# eviajfg38iqff API Inference

## Get API Key
Get API key from [Stable Diffusion API](http://stablediffusionapi.com/), No Payment needed.
Replace Key in below code, change **model_id** to "eviajfg38iqff"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Model link: [View model](https://stablediffusionapi.com/models/eviajfg38iqff)
Credits: [View credits](https://civitai.com/?query=eviajfg38iqff)
View all models: [View Models](https://stablediffusionapi.com/models)
```python
import requests
import json

url = "https://stablediffusionapi.com/api/v3/dreambooth"

payload = json.dumps({
  "key": "",
  "model_id": "eviajfg38iqff",
  "prompt": "actual 8K portrait photo of gareth person, portrait, happy colors, bright eyes, clear eyes, warm smile, smooth soft skin, big dreamy eyes, beautiful intricate colored hair, symmetrical, anime wide eyes, soft lighting, detailed face, by makoto shinkai, stanley artgerm lau, wlop, rossdraws, concept art, digital painting, looking into camera",
  "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
  "width": "512",
  "height": "512",
  "samples": "1",
  "num_inference_steps": "30",
  "safety_checker": "no",
  "enhance_prompt": "yes",
  "seed": None,
  "guidance_scale": 7.5,
  "multi_lingual": "no",
  "panorama": "no",
  "self_attention": "no",
  "upscale": "no",
  "embeddings": "embeddings_model_id",
  "lora": "lora_model_id",
  "webhook": None,
  "track_id": None
})

headers = {
  'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN**
|
Baybars/wav2vec2-xls-r-1b-turkish
|
[
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"tr",
"dataset:common_voice",
"transformers",
"common_voice",
"generated_from_trainer"
] |
automatic-speech-recognition
|
{
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 13 | null |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Find your model_id: JoBuettner/ppo-PyramidsRND
3. Select your .nn or .onnx file
4. Click on Watch the agent play 👀
|
Baybars/wav2vec2-xls-r-300m-cv8-turkish
|
[
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"tr",
"dataset:common_voice",
"transformers",
"common_voice",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"license:apache-2.0"
] |
automatic-speech-recognition
|
{
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('tashinam/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
Bee-Garbs/DialoGPT-cartman-small
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | 2023-05-11T11:22:23Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Text_classification_model_bbu_12500
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Text_classification_model_bbu_12500
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9447
- Accuracy: 0.795
## Model description
More information needed
## Intended uses & limitations
More information needed
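A minimal inference sketch with the `text-classification` pipeline is shown below; the repo id is assumed from the model name above (adjust it to the actual Hub namespace or a local path), and the example sentence is arbitrary:
```python
from transformers import pipeline

# Assumed id — point this at the actual Hub repo or local checkpoint directory.
classifier = pipeline("text-classification", model="Text_classification_model_bbu_12500")
print(classifier("This product arrived quickly and works exactly as described."))
```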
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.348 | 1.0 | 882 | 0.4511 | 0.7925 |
| 0.1714 | 2.0 | 1764 | 0.5316 | 0.7925 |
| 0.0852 | 3.0 | 2646 | 0.8147 | 0.79 |
| 0.0529 | 4.0 | 3528 | 0.9447 | 0.795 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Beelow/model
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | 2023-05-11T11:27:48Z |
DataVare PST to MBOX Converter is a tool for moving email records from Outlook PST files into the MBOX file format. With it, users can open email messages stored in Outlook PST files from Apple Mail, Thunderbird, and a number of other MBOX-based email clients. The tool can convert several PST files, including email attachments, into MBOX files, and it lets the user preview PST email data before conversion, which gives a sense of how the email data will look in MBOX format. It also preserves the folder structure, so converted PST files can be saved in MBOX format in the folder hierarchy of the user's choice. It works with MS Outlook versions 2003, 2006, 2010, 2013, 2016, and 2019, among others, runs on Windows 7/8/10, XP, and Vista, and supports both ANSI and Unicode PST files created by different Microsoft Outlook versions. A reasonably priced full version is available for users who want to convert PST to MBOX without limitations.
Read more: https://www.datavare.com/software/pst-to-mbox-converter-expert.html
|
BenWitter/DialoGPT-small-Tyrion
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 11 | null |
---
tags:
- generated_from_trainer
model-index:
- name: bert-question-answering-dutch
results: []
datasets:
- yhavinga/squad_v2_dutch # Dutch translation of SQuAD v2 by yhavinga
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-question-answering-dutch
This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on a Dutch translation of the SQuAD v2 dataset (`yhavinga/squad_v2_dutch`, see the metadata above).
It achieves the following results on the evaluation set:
- Loss: 1.1493
## Model description
More information needed
## Intended uses & limitations
More information needed
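A minimal usage sketch with the `question-answering` pipeline follows; the repo id is assumed from the model name above (adjust it to the actual Hub namespace or a local path), and the Dutch example is arbitrary:
```python
from transformers import pipeline

# Assumed id — point this at the actual Hub repo or local checkpoint directory.
qa = pipeline("question-answering", model="bert-question-answering-dutch")
result = qa(
    question="Waar werd de wedstrijd gespeeld?",
    context="De wedstrijd werd gisteren in Rotterdam gespeeld voor een uitverkocht stadion.",
)
print(result["answer"], result["score"])
```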
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.1616 | 1.0 | 16288 | 0.9373 |
| 0.807 | 2.0 | 32576 | 0.9496 |
| 0.579 | 3.0 | 48864 | 1.1493 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
Bharathdamu/wav2vec2-large-xls-r-300m-hindi
|
[
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
] |
automatic-speech-recognition
|
{
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 10 | null |
---
license: apache-2.0
---
This model was created by applying `ziqingyang/chinese-alpaca-plus-lora-7b` to `decapoda-research/llama-7b-hf` and quantizing the result for use with `ggerganov/llama.cpp`.
The conversion and quantization were done on Google Colab, following the wiki article of `ymcui/Chinese-LLaMA-Alpaca`.
The quantization formats used by llama.cpp have been updated recently, so please clone the latest repo and re-compile it before loading the model.
The q8_0 and q5_1 suffixes indicate different quantization methods; the former occupies more space and theoretically produces better responses.
Example of q8_0 model on llama.cpp (acceptable responses but very short):
```
> ./main -m ./models/chinese-Alpaca-7b-plus-ggml-q8_0.bin \
-t 8 \
-c 2048 \
-n 2048 \
--color \
--interactive-first \
--reverse-prompt '## 人类:' \
-f ./prompts/chat-with-vicuna-chs.txt
main: build = 0 (unknown)
main: seed = 1683883289
llama.cpp: loading model from ./models/chinese-Alpaca-7b-plus-ggml-q8_0.bin
llama_model_load_internal: format = ggjt v1 (latest)
llama_model_load_internal: n_vocab = 49954
llama_model_load_internal: n_ctx = 2048
llama_model_load_internal: n_embd = 4096
llama_model_load_internal: n_mult = 256
llama_model_load_internal: n_head = 32
llama_model_load_internal: n_layer = 32
llama_model_load_internal: n_rot = 128
llama_model_load_internal: ftype = 7 (mostly Q8_0)
llama_model_load_internal: n_ff = 11008
llama_model_load_internal: n_parts = 1
llama_model_load_internal: model size = 7B
llama_model_load_internal: ggml ctx size = 68.20 KB
llama_model_load_internal: mem required = 9180.13 MB (+ 1026.00 MB per state)
llama_init_from_file: kv self size = 1024.00 MB
system_info: n_threads = 8 / 12 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 |
main: interactive mode on.
Reverse prompt: '## 人类:'
sampling: repeat_last_n = 64, repeat_penalty = 1.100000, presence_penalty = 0.000000, frequency_penalty = 0.000000, top_k = 40, tfs_z = 1.000000, top_p = 0.950000, typical_p = 1.000000, temp = 0.800000, mirostat = 0, mirostat_lr = 0.100000, mirostat_ent = 5.000000
generate: n_ctx = 2048, n_batch = 512, n_predict = 2048, n_keep = 0
== Running in interactive mode. ==
- Press Ctrl+C to interject at any time.
- Press Return to return control to LLaMa.
- To return control without starting a new line, end your input with '/'.
- If you want to submit another line, end your input with '\'.
这是一段人类和人工智能助理的对话。人工智能助理会对人类的问题给出详细、准确、礼貌的回答。
## 人类:你好,助理。
## 助理:你好,有什么需要帮助的吗?
## 人类:天空为什么是蓝色的?
助理:天空之所以呈现蓝色,是因为太阳光在大气层中发生散射作用。
## 人类:请再详细地解释一下。
助理:太阳光穿过大气层时,一部分被散射到不同波长的角度上,其中蓝色光的波长比其他颜色短。
## 人类:你真是惜字如金啊。
助理:谢谢你的夸奖,但我只是根据我学到的知识和算法来回答你的问题。
## 人类:
llama_print_timings: load time = 9418.31 ms
llama_print_timings: sample time = 107.95 ms / 73 runs ( 1.48 ms per run)
llama_print_timings: prompt eval time = 8645.76 ms / 85 tokens ( 101.71 ms per token)
llama_print_timings: eval time = 16303.43 ms / 73 runs ( 223.33 ms per run)
llama_print_timings: total time = 987546.29 ms
```
|
Biasface/DDDC2
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 10 | null |
---
license: mit
language: ja
tags:
- luke
- pytorch
- transformers
- ner
- 固有表現抽出
- named entity recognition
- named-entity-recognition
---
# このモデルはluke-japanese-largeをファインチューニングして、固有表現抽出(NER)に用いれるようにしたものです。
このモデルはluke-japanese-largeを
Wikipediaを用いた日本語の固有表現抽出データセット(ストックマーク社、https://github.com/stockmarkteam/ner-wikipedia-dataset )を用いてファインチューニングしたものです。
固有表現抽出(NER)タスクに用いることができます。
# This model is fine-tuned model for Named-Entity-Recognition(NER) which is based on luke-japanese-large
This model is fine-tuned by using Wikipedia dataset.
You could use this model for NER tasks.
# モデルの精度 accuracy of model
全体:0.8453191098032002
||precision|recall|f1-score|support|
|-------------|-----|-----|-----|-----|
|その他の組織名|0.78|0.79|0.79|238|
|イベント名|0.83|0.88| 0.85 | 215|
|人名 | 0.88 | 0.89 | 0.89 | 546|
|地名 |0.83 | 0.85 | 0.84 | 440|
|政治的組織名 | 0.80 | 0.84 | 0.82 | 263|
|施設名 | 0.79 | 0.84 | 0.81 | 241|
|法人名 | 0.88 | 0.89 | 0.89 | 487|
|製品名 | 0.79 | 0.80 | 0.79 | 252|
|micro avg | 0.83 | 0.86 | 0.85 | 2682|
|macro avg | 0.82 | 0.85 | 0.83 | 2682|
|weighted avg | 0.83 | 0.86 | 0.85 | 2682|
# How to use 使い方
Install sentencepiece and transformers (pip install sentencepiece, pip install transformers), then run the code below to perform named entity recognition (NER) with this model. Please execute this code.
```python
from transformers import MLukeTokenizer,pipeline, LukeForTokenClassification
tokenizer = MLukeTokenizer.from_pretrained('Mizuiro-sakura/luke-japanese-large-finetuned-ner')
model=LukeForTokenClassification.from_pretrained('Mizuiro-sakura/luke-japanese-large-finetuned-ner') # load the fine-tuned model
text=('昨日は東京で買い物をした')
ner=pipeline('ner', model=model, tokenizer=tokenizer)
result=ner(text)
print(result)
```
# what is Luke? Lukeとは?[1]
LUKE (Language Understanding with Knowledge-based Embeddings) is a new pre-trained contextualized representation of words and entities based on transformer. LUKE treats words and entities in a given text as independent tokens, and outputs contextualized representations of them. LUKE adopts an entity-aware self-attention mechanism that is an extension of the self-attention mechanism of the transformer, and considers the types of tokens (words or entities) when computing attention scores.
LUKE achieves state-of-the-art results on five popular NLP benchmarks including SQuAD v1.1 (extractive question answering), CoNLL-2003 (named entity recognition), ReCoRD (cloze-style question answering), TACRED (relation classification), and Open Entity (entity typing). luke-japaneseは、単語とエンティティの知識拡張型訓練済み Transformer モデルLUKEの日本語版です。LUKE は単語とエンティティを独立したトークンとして扱い、これらの文脈を考慮した表現を出力します。
# Acknowledgments 謝辞
Lukeの開発者である山田先生とStudio ousiaさんには感謝いたします。 I would like to thank Mr.Yamada @ikuyamada and Studio ousia @StudioOusia.
# Citation
[1]@inproceedings{yamada2020luke, title={LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention}, author={Ikuya Yamada and Akari Asai and Hiroyuki Shindo and Hideaki Takeda and Yuji Matsumoto}, booktitle={EMNLP}, year={2020} }
|
BigSalmon/BertaMyWorda
|
[
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8 | null |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: Sentiment_Analysis_RoBERTa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Sentiment_Analysis_RoBERTa
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5934
- Rmse: 0.6311
## Model description
More information needed
## Intended uses & limitations
More information needed
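A minimal inference sketch is shown below; the repo id is assumed from the model name above (adjust it to the actual Hub namespace or a local path), and since the card reports RMSE, the raw logits are printed rather than mapped to fixed labels:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Sentiment_Analysis_RoBERTa"  # assumed id — adjust to the actual Hub repo or local path
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("The staff were friendly and the food was excellent.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)  # interpret according to the training labels / regression target
```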
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
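The hyperparameters above map roughly onto a 🤗 `Trainer` setup like the one sketched below. This is a hedged reconstruction, not the original training script: dataset loading and the RMSE `compute_metrics` helper are omitted placeholders, and the regression-style head (`num_labels=1`) is only inferred from the RMSE metric.

```python
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# num_labels=1 is an assumption: the RMSE metric suggests a regression head.
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=1)
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

# Mirrors the hyperparameters listed above.
args = TrainingArguments(
    output_dir="Sentiment_Analysis_RoBERTa",
    learning_rate=3e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=16,   # effective batch size 32
    warmup_steps=500,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    fp16=True,                        # "Native AMP" mixed precision
    seed=42,
)

trainer = Trainer(model=model, args=args)  # add train_dataset / eval_dataset / compute_metrics
# trainer.train()
```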
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7173 | 2.0 | 500 | 0.5934 | 0.6311 |
| 0.4139 | 4.0 | 1000 | 0.6405 | 0.6015 |
| 0.1956 | 6.0 | 1500 | 0.8526 | 0.6122 |
| 0.0997 | 8.0 | 2000 | 1.1684 | 0.6089 |
| 0.0569 | 10.0 | 2500 | 1.2575 | 0.5986 |
### Framework versions
- Transformers 4.29.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
BigSalmon/GPT2HardArticleEasyArticle
|
[
"pytorch",
"jax",
"tensorboard",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7 | null |
---
license: apache-2.0
language: en
---
# LongT5 (transient-global attention, base-sized model)
LongT5 model pre-trained on English language. The model was introduced in the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/pdf/2112.07916.pdf) by Guo et al. and first released in [the LongT5 repository](https://github.com/google-research/longt5). All the model architecture and configuration can be found in [Flaxformer repository](https://github.com/google/flaxformer) which uses another Google research project repository [T5x](https://github.com/google-research/t5x).
Disclaimer: The team releasing LongT5 did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
LongT5 model is an encoder-decoder transformer pre-trained in a text-to-text denoising generative setting ([Pegasus-like generation pre-training](https://arxiv.org/pdf/1912.08777.pdf)). The LongT5 model is an extension of the [T5 model](https://arxiv.org/pdf/1910.10683.pdf), and it enables using one of two different efficient attention mechanisms: (1) Local attention, or (2) Transient-Global attention. The use of sparse attention patterns allows the model to efficiently handle long input sequences.
LongT5 is particularly effective when fine-tuned for text generation (summarization, question answering) which requires handling long input sequences (up to 16,384 tokens).
## Intended uses & limitations
The model is mostly meant to be fine-tuned on a supervised dataset. See the [model hub](https://huggingface.co/models?search=longt5) to look for fine-tuned versions on a task that interests you.
### How to use
```python
from transformers import AutoTokenizer, LongT5Model
tokenizer = AutoTokenizer.from_pretrained("google/long-t5-tglobal-base")
model = LongT5Model.from_pretrained("google/long-t5-tglobal-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
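The snippet above only extracts encoder-decoder hidden states. For the generation tasks mentioned earlier (summarization, question answering), the conditional-generation head is the more typical entry point. A minimal sketch follows; it assumes you have fine-tuned the checkpoint first (the raw pre-trained model will not produce useful summaries), and the generation settings are illustrative, not taken from this card.

```python
from transformers import AutoTokenizer, LongT5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/long-t5-tglobal-base")
model = LongT5ForConditionalGeneration.from_pretrained("google/long-t5-tglobal-base")

long_document = "..."  # up to ~16k tokens of input text
inputs = tokenizer(long_document, max_length=16384, truncation=True, return_tensors="pt")

# Beam-search decoding; tune max_new_tokens / num_beams for your task.
summary_ids = model.generate(**inputs, max_new_tokens=128, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```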
### BibTeX entry and citation info
```bibtex
@article{guo2021longt5,
title={LongT5: Efficient Text-To-Text Transformer for Long Sequences},
author={Guo, Mandy and Ainslie, Joshua and Uthus, David and Ontanon, Santiago and Ni, Jianmo and Sung, Yun-Hsuan and Yang, Yinfei},
journal={arXiv preprint arXiv:2112.07916},
year={2021}
}
```
|
BigSalmon/GPT2HardandEasy
|
[
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 9 | null |
---
license: openrail
datasets:
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
pipeline_tag: question-answering
tags:
- movie
- suggestion
---
|
BigSalmon/GPTIntro
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: MlpPolicy
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 233.32 +/- 25.42
name: mean_reward
verified: false
---
# **MlpPolicy** Agent playing **LunarLander-v2**
This is a trained model of a **MlpPolicy** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
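Until the TODO above is filled in, the sketch below shows how such a checkpoint is typically loaded from the Hub. It assumes the agent was trained with PPO (the card only names the MlpPolicy network), and the `repo_id` and `filename` values are placeholders, not taken from this card.

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Placeholder repo/filename: substitute the actual values for this model.
checkpoint = load_from_hub(repo_id="user/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```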
|
BigSalmon/GPTNeo350MInformalToFormalLincoln
|
[
"pytorch",
"gpt_neo",
"text-generation",
"transformers",
"has_space"
] |
text-generation
|
{
"architectures": [
"GPTNeoForCausalLM"
],
"model_type": "gpt_neo",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8 | null |
---
license: apache-2.0
language: en
---
# LongT5 (transient-global attention, base-sized model)
LongT5 model pre-trained on English language. The model was introduced in the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/pdf/2112.07916.pdf) by Guo et al. and first released in [the LongT5 repository](https://github.com/google-research/longt5). All the model architecture and configuration can be found in [Flaxformer repository](https://github.com/google/flaxformer) which uses another Google research project repository [T5x](https://github.com/google-research/t5x).
Disclaimer: The team releasing LongT5 did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
LongT5 model is an encoder-decoder transformer pre-trained in a text-to-text denoising generative setting ([Pegasus-like generation pre-training](https://arxiv.org/pdf/1912.08777.pdf)). The LongT5 model is an extension of the [T5 model](https://arxiv.org/pdf/1910.10683.pdf), and it enables using one of two different efficient attention mechanisms: (1) Local attention, or (2) Transient-Global attention. The use of sparse attention patterns allows the model to efficiently handle long input sequences.
LongT5 is particularly effective when fine-tuned for text generation (summarization, question answering) which requires handling long input sequences (up to 16,384 tokens).
## Intended uses & limitations
The model is mostly meant to be fine-tuned on a supervised dataset. See the [model hub](https://huggingface.co/models?search=longt5) to look for fine-tuned versions on a task that interests you.
### How to use
```python
from transformers import AutoTokenizer, LongT5Model
tokenizer = AutoTokenizer.from_pretrained("google/long-t5-tglobal-base")
model = LongT5Model.from_pretrained("google/long-t5-tglobal-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
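The choice between the two attention mechanisms described above is exposed through the model config. A short sketch is below; the argument names come from the `transformers` `LongT5Config` and should be treated as assumptions if your library version differs.

```python
from transformers import LongT5Config, LongT5ForConditionalGeneration

# "transient-global" is the variant of this checkpoint; "local" selects plain local attention.
config = LongT5Config.from_pretrained(
    "google/long-t5-tglobal-base",
    attention_type="transient-global",
)
model = LongT5ForConditionalGeneration.from_pretrained("google/long-t5-tglobal-base", config=config)
print(config.attention_type, config.local_radius, config.global_block_size)
```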
### BibTeX entry and citation info
```bibtex
@article{guo2021longt5,
title={LongT5: Efficient Text-To-Text Transformer for Long Sequences},
author={Guo, Mandy and Ainslie, Joshua and Uthus, David and Ontanon, Santiago and Ni, Jianmo and Sung, Yun-Hsuan and Yang, Yinfei},
journal={arXiv preprint arXiv:2112.07916},
year={2021}
}
```
|
BigSalmon/GPTNeo350MInformalToFormalLincoln3
|
[
"pytorch",
"gpt_neo",
"text-generation",
"transformers",
"has_space"
] |
text-generation
|
{
"architectures": [
"GPTNeoForCausalLM"
],
"model_type": "gpt_neo",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 10 | null |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: valve_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# valve_model
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4860
- Validation Loss: 6.0810
- Epoch: 99
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 800, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 200, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
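The serialized optimizer above looks like the output of `transformers.create_optimizer` (AdamWeightDecay with a linear warmup followed by polynomial decay). A hedged reconstruction of the corresponding Keras setup is sketched below; the base `gpt2` checkpoint follows the card, while the compile call and step counts derived from the warmup/decay values are assumptions.

```python
import tensorflow as tf
from transformers import TFGPT2LMHeadModel, create_optimizer

# "mixed_float16" training precision from the card
tf.keras.mixed_precision.set_global_policy("mixed_float16")

model = TFGPT2LMHeadModel.from_pretrained("gpt2")

# 200-step linear warmup to 2e-5, then polynomial decay over the remaining 800 steps,
# with weight_decay_rate=0.01 as listed above.
optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,
    num_train_steps=1000,
    num_warmup_steps=200,
    weight_decay_rate=0.01,
)
model.compile(optimizer=optimizer)  # no explicit loss: the model's internal LM loss is used
```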
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.1291 | 5.9072 | 0 |
| 3.1205 | 5.9071 | 1 |
| 3.0615 | 5.9070 | 2 |
| 3.1662 | 5.9069 | 3 |
| 3.1011 | 5.9068 | 4 |
| 3.1374 | 5.9066 | 5 |
| 3.1472 | 5.9065 | 6 |
| 3.0926 | 5.9066 | 7 |
| 3.1436 | 5.9065 | 8 |
| 3.1321 | 5.9065 | 9 |
| 3.1027 | 5.9065 | 10 |
| 2.9848 | 5.9068 | 11 |
| 2.9544 | 5.9069 | 12 |
| 3.0212 | 5.9066 | 13 |
| 3.0448 | 5.9066 | 14 |
| 3.0455 | 5.9063 | 15 |
| 3.0294 | 5.9063 | 16 |
| 2.9529 | 5.9058 | 17 |
| 2.8377 | 5.9054 | 18 |
| 2.8682 | 5.9054 | 19 |
| 2.9745 | 5.9050 | 20 |
| 2.9680 | 5.9049 | 21 |
| 2.9270 | 5.9046 | 22 |
| 2.8955 | 5.9039 | 23 |
| 2.9627 | 5.9031 | 24 |
| 2.8304 | 5.9020 | 25 |
| 2.8542 | 5.9009 | 26 |
| 2.8008 | 5.8999 | 27 |
| 2.8067 | 5.8992 | 28 |
| 2.7471 | 5.8987 | 29 |
| 2.7494 | 5.8983 | 30 |
| 2.7467 | 5.8990 | 31 |
| 2.6482 | 5.9001 | 32 |
| 2.7226 | 5.9006 | 33 |
| 2.6202 | 5.9003 | 34 |
| 2.6576 | 5.9005 | 35 |
| 2.6144 | 5.9010 | 36 |
| 2.6040 | 5.9015 | 37 |
| 2.4523 | 5.9022 | 38 |
| 2.4589 | 5.9023 | 39 |
| 2.4796 | 5.9028 | 40 |
| 2.4962 | 5.9027 | 41 |
| 2.4251 | 5.9029 | 42 |
| 2.3685 | 5.9031 | 43 |
| 2.3015 | 5.9034 | 44 |
| 2.3080 | 5.9035 | 45 |
| 2.2066 | 5.9039 | 46 |
| 2.1621 | 5.9061 | 47 |
| 2.1354 | 5.9088 | 48 |
| 2.1527 | 5.9112 | 49 |
| 2.1650 | 5.9115 | 50 |
| 2.1298 | 5.9117 | 51 |
| 2.0993 | 5.9106 | 52 |
| 2.0044 | 5.9099 | 53 |
| 1.9764 | 5.9102 | 54 |
| 1.9662 | 5.9116 | 55 |
| 1.9702 | 5.9145 | 56 |
| 1.9012 | 5.9152 | 57 |
| 1.8061 | 5.9175 | 58 |
| 1.7831 | 5.9211 | 59 |
| 1.8015 | 5.9253 | 60 |
| 1.7642 | 5.9298 | 61 |
| 1.7484 | 5.9328 | 62 |
| 1.5452 | 5.9342 | 63 |
| 1.5996 | 5.9369 | 64 |
| 1.4831 | 5.9396 | 65 |
| 1.4367 | 5.9421 | 66 |
| 1.4981 | 5.9435 | 67 |
| 1.4513 | 5.9475 | 68 |
| 1.3897 | 5.9532 | 69 |
| 1.3108 | 5.9603 | 70 |
| 1.3337 | 5.9664 | 71 |
| 1.2564 | 5.9728 | 72 |
| 1.2671 | 5.9770 | 73 |
| 1.1286 | 5.9814 | 74 |
| 1.1349 | 5.9843 | 75 |
| 1.1645 | 5.9842 | 76 |
| 1.1462 | 5.9806 | 77 |
| 1.1028 | 5.9791 | 78 |
| 0.9843 | 5.9770 | 79 |
| 0.9734 | 5.9768 | 80 |
| 0.9831 | 5.9795 | 81 |
| 1.0021 | 5.9823 | 82 |
| 0.8903 | 5.9826 | 83 |
| 0.8244 | 5.9837 | 84 |
| 0.8597 | 5.9863 | 85 |
| 0.8703 | 5.9907 | 86 |
| 0.7864 | 5.9996 | 87 |
| 0.7394 | 6.0086 | 88 |
| 0.6764 | 6.0188 | 89 |
| 0.7007 | 6.0278 | 90 |
| 0.6247 | 6.0355 | 91 |
| 0.6640 | 6.0430 | 92 |
| 0.6407 | 6.0498 | 93 |
| 0.5903 | 6.0565 | 94 |
| 0.6226 | 6.0614 | 95 |
| 0.5934 | 6.0662 | 96 |
| 0.5140 | 6.0713 | 97 |
| 0.5300 | 6.0766 | 98 |
| 0.4860 | 6.0810 | 99 |
### Framework versions
- Transformers 4.29.0.dev0
- TensorFlow 2.9.1
- Datasets 2.5.1
- Tokenizers 0.13.3
|
BigSalmon/InformalToFormalLincoln17
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 12 | null |
---
license: apache-2.0
tags:
- Composer
- MosaicML
- llm-foundry
- ggml
datasets:
- the_pile_books3
inference: false
---
# WARNING: experimental
The code is still in constant flux.
~~Requires PR~~ The required PR has been merged: https://github.com/ggerganov/ggml/pull/145
# MPT-7B-StoryWriter-65k+ GGML files
Model files converted to ggml
# Original model card:
## MPT-7B-StoryWriter-65k+
MPT-7B-StoryWriter-65k+ is a model designed to read and write fictional stories with super long context lengths.
It was built by finetuning MPT-7B with a context length of 65k tokens on a filtered fiction subset of the [books3 dataset](https://huggingface.co/datasets/the_pile_books3).
At inference time, thanks to [ALiBi](https://arxiv.org/abs/2108.12409), MPT-7B-StoryWriter-65k+ can extrapolate even beyond 65k tokens.
We demonstrate generations as long as 84k tokens on a single node of 8 A100-80GB GPUs in our [blogpost](https://www.mosaicml.com/blog/mpt-7b).
* License: Apache 2.0
* [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-storywriter)
This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture.
### Model Date
May 5, 2023
### Model License
Apache 2.0
### Model Description
The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases
| Hyperparameter | Value |
|----------------|-------|
| n_parameters | 6.7B |
| n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 50432 |
| sequence length | **65536** |
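Because ALiBi is what lets the model extrapolate past its 65k training context, a toy illustration of the idea may help. This is a simplified recreation of the linear attention biases, not MosaicML's actual implementation.

```python
import torch

def alibi_bias(n_heads: int, seq_len: int) -> torch.Tensor:
    """Per-head linear distance penalties added to attention logits (replacing positional embeddings)."""
    # Geometric slopes from the ALiBi paper; exact when n_heads is a power of two (32 here).
    start = 2 ** (-8 / n_heads)
    slopes = torch.tensor([start ** (i + 1) for i in range(n_heads)])
    # bias[h, q, k] = slope_h * (k - q): zero on the diagonal, more negative the further back the key is.
    rel = torch.arange(seq_len).view(1, -1) - torch.arange(seq_len).view(-1, 1)
    rel = rel.clamp(max=0)  # future positions get 0 here; causal masking is applied separately
    return slopes.view(n_heads, 1, 1) * rel.float()

bias = alibi_bias(n_heads=32, seq_len=8)
print(bias.shape)  # torch.Size([32, 8, 8]); added to q @ k^T before the softmax
```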
### PreTraining Data
For more details on the pretraining process, see [MPT-7B](https://huggingface.co/mosaicml/mpt-7b).
The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
#### Training Configuration
This model was trained on 8 A100-80GBs for about 2 days using the [MosaicML Platform](https://www.mosaicml.com/platform).
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the [LION](https://arxiv.org/abs/2302.06675) optimizer.
### Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
MPT-7B-StoryWriter can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-7B-StoryWriter was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
### Acknowledgements
This model was finetuned by Alex Trott and the MosaicML NLP team
### MosaicML Platform
If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b).
### Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
### Citation
Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
title = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs},
year = {2023},
url = {www.mosaicml.com/blog/mpt-7b},
note = {Accessed: 2023-03-28}, % change this date
urldate = {2023-03-28} % change this date
}
```
|
BigSalmon/InformalToFormalLincoln19
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 11 | null |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Find your model_id: kujaomega/ppo-PyramidsRND
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
BigSalmon/InformalToFormalLincoln23
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xlsum-fi
metrics:
- rouge
model-index:
- name: my_awesome_mt5-sum-fi-v2
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xlsum-fi
type: xlsum-fi
config: finnish
split: test
args: finnish
metrics:
- name: Rouge1
type: rouge
value: 0.0179
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_mt5-sum-fi-v2
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the xlsum-fi dataset.
It achieves the following results on the evaluation set:
- Loss: 7.8774
- Rouge1: 0.0179
- Rouge2: 0.0031
- Rougel: 0.0173
- Rougelsum: 0.0172
- Gen Len: 7.4543
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
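A hedged sketch of how the hyperparameters above translate into a `Seq2SeqTrainer` setup is given below; dataset preprocessing and the ROUGE `compute_metrics` function are omitted placeholders, not part of the original card.

```python
from transformers import (AutoTokenizer, MT5ForConditionalGeneration,
                          Seq2SeqTrainer, Seq2SeqTrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")

args = Seq2SeqTrainingArguments(
    output_dir="my_awesome_mt5-sum-fi-v2",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,   # effective batch size 32
    num_train_epochs=4,
    seed=42,
    predict_with_generate=True,      # ROUGE must be computed on generated summaries
)

trainer = Seq2SeqTrainer(model=model, args=args, tokenizer=tokenizer)
# trainer.train()  # add train_dataset / eval_dataset / compute_metrics before training
```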
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 0.99 | 45 | 10.6680 | 0.0171 | 0.0037 | 0.0168 | 0.0166 | 7.1773 |
| No log | 1.99 | 90 | 8.7785 | 0.018 | 0.0035 | 0.0173 | 0.0173 | 7.4294 |
| No log | 2.98 | 135 | 8.1387 | 0.0193 | 0.0037 | 0.0186 | 0.0185 | 7.4044 |
| No log | 3.98 | 180 | 7.8774 | 0.0179 | 0.0031 | 0.0173 | 0.0172 | 7.4543 |
### Framework versions
- Transformers 4.29.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
BigSalmon/Lincoln4
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 11 | 2023-05-11T13:07:13Z |
---
license: mit
tags:
- whisper.cpp
- ggml
- quantized_whisper_model
---
Quantized Whisper model (ggml format) for use with https://github.com/ggerganov/whisper.cpp
|
BigSalmon/MrLincoln12
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"has_space"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 9 | null |
---
license: gpl-3.0
tags:
- text2text-generation
pipeline_tag: text2text-generation
language:
- zh
- en
---
Considering LLaMA's license constraints, the model is for research and learning only.
Please strictly respect LLaMA's usage policy. We are not allowed to publish the LLaMA weights, even fine-tuned ones, but there is no problem publishing the difference: a patch that we suggest applying to the original files.
The encryption is a simple XOR between files, ensuring that only people who have access to the original weights (from completely legal sources, of course) can transform them into the fine-tuned weights.
You can find the decrypt code on https://github.com/LianjiaTech/BELLE/tree/main/models .
# Model Card for Model ID
## Welcome
If you find this model helpful, please *like* this model and star us on https://github.com/LianjiaTech/BELLE !
## Model description
This model is the result of a two-phase training process on the original LLaMA 13B:
1. Extending the vocabulary with an additional 50K Chinese-specific tokens and further pretraining these word embeddings on a Chinese corpus.
2. Full-parameter fine-tuning of the model on 4M high-quality instruction-following examples.
## Download, Convert & Check
1. After you `git clone` this model, check the md5 sums of the encrypted files:
```
md5sum ./*
211b6252c73e638cb87e04edef1c91c6 config.json.7b4504868ddce248768954077a76ffe29a34c6cc2b4510426b4da77d1e9afb4c.enc
f9b33d359f17a437f6c24b4de6f2272e generation_config.json.fd7ff399e5568cc21a0a8414f43df88ef7c424995b9b97a90563165d2cf79efd.enc
07efffcfb738722f00c9b7ac81044bb9 pytorch_model-00001-of-00003.bin.1a523c0d01807d7fcde8d73537f09e346ff303a4769b8a6659114358621fc838.enc
fe66f8672c07e9e5bdfec4dd45e1e093 pytorch_model-00002-of-00003.bin.98e48fb6812bb87843c7276a85ed34124f67df5654d8cf0b6bb9302ecfe3a37f.enc
b3b4a0f1d6b399543d3d7ac50f9ce936 pytorch_model-00003-of-00003.bin.79921900f30a9ec501177fca2f593f90cb9f5ab235c05863cc4d384450cf3f6f.enc
7aef01bb265647be2a9acd1c7ea69bd8 pytorch_model.bin.index.json.af10ab40cc0368fba37018148447e3dcd9b72829a38e26c9eaf3eda3a7850b56.enc
34696bfce7b27548cfc2410e2b55762e special_tokens_map.json.96bdbb8504d9967606e5f661ccc7cbbac44a3661af863a7a58614670a0ccab33.enc
24e4f14cc3330576dcd1fd12760d35f3 tokenizer_config.json.2e333c3e1c77e7e9c6ceb573b02355deaf303ca8180bbac40f1d0405209ee457.enc
56724a79091f3d1877cca65c6412d646 tokenizer.model.0b716a618c9e7c45648f91d997431eba3b0ff111b17ce7b777280ed771a49f95.enc
```
2. Decrypt the files using the scripts in https://github.com/LianjiaTech/BELLE/tree/main/models
You can use the following command in Bash.
Please replace "/path/to_encrypted" with the path where you stored your encrypted file,
replace "/path/to_original_llama_7B" with the path where you stored your original llama7B file,
and replace "/path/to_finetuned_model" with the path where you want to save your final trained model.
```bash
mkdir /path/to_finetuned_model
for f in "/path/to_encrypted"/*; \
do if [ -f "$f" ]; then \
python3 decrypt.py "$f" "/path/to_original_llama_7B/consolidated.00.pth" "/path/to_finetuned_model/"; \
fi; \
done
```
After executing the aforementioned command, you will obtain the following files.
```
./config.json
./generation_config.json
./pytorch_model-00001-of-00003.bin
./pytorch_model-00002-of-00003.bin
./pytorch_model-00003-of-00003.bin
./pytorch_model.bin.index.json
./special_tokens_map.json
./tokenizer_config.json
./tokenizer.model
```
3. Check md5sum
You can verify the integrity of these files by performing an MD5 checksum to ensure their complete recovery.
Here are the MD5 checksums for the relevant files:
```
md5sum ./*
1e28fe60969b1d4dcc3f97586082c5e5 config.json
2917a1cafb895cf57e746cfd7696bfe5 generation_config.json
2a8deacda3e22be63fe854da92006203 pytorch_model-00001-of-00003.bin
1bab042c86403f440517c8ae958716ed pytorch_model-00002-of-00003.bin
6fbd17996033fb5ec0263cdb07131de7 pytorch_model-00003-of-00003.bin
5762c0c9a1ca9366500390d0d335b2b6 pytorch_model.bin.index.json
15f7a943faa91a794f38dd81a212cb01 special_tokens_map.json
b87fab00f218c984135af5a0db353f22 tokenizer_config.json
6ffe559392973a92ea28032add2a8494 tokenizer.model
```
## Use model
Please note that the input should be formatted as follows in both **training** and **inference**.
```
Human: {input} \n\nBelle:
```
After you decrypt the files, BELLE-LLaMA-EXT-13B can be easily loaded with LlamaForCausalLM.
```python
from transformers import LlamaForCausalLM, AutoTokenizer
import torch

ckpt = '/path/to_finetuned_model/'
device = torch.device('cuda')
# load the decrypted checkpoint
model = LlamaForCausalLM.from_pretrained(ckpt, device_map='auto', low_cpu_mem_usage=True)
tokenizer = AutoTokenizer.from_pretrained(ckpt)
# the prompt must follow the "Human: ... \n\nBelle: " template used during training
prompt = "Human: 写一首中文歌曲,赞美大自然 \n\nBelle: "
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
generate_ids = model.generate(input_ids, max_new_tokens=300, do_sample=True, top_k=30, top_p=0.85, temperature=0.5, repetition_penalty=1.2, eos_token_id=2, bos_token_id=1, pad_token_id=0)
output = tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
response = output[len(prompt):]  # strip the echoed prompt, keep only the model's reply
print(response)
```
## Limitations
There still exist a few issues with the model trained on the current base model and data:
1. The model might generate factual errors when asked to follow instructions related to facts.
2. It occasionally generates harmful responses, since the model still struggles to identify potentially harmful instructions.
3. Reasoning and coding still need improvement.
Since the model still has these limitations, we require developers to use the open-sourced code, data, model, and any other artifacts generated via this project for research purposes only. Commercial use and other potentially harmful use cases are not allowed.
## Citation
Please cite our paper and github when using our code, data or model.
```
@misc{ji2023better,
title={Towards Better Instruction Following Language Models for Chinese: Investigating the Impact of Training Data and Evaluation},
author={Yunjie Ji and Yan Gong and Yong Deng and Yiping Peng and Qiang Niu and Baochang Ma and Xiangang Li},
year={2023},
eprint={2304.07854},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@misc{BELLE,
author = {Yunjie Ji, Yong Deng, Yan Gong, Yiping Peng, Qiang Niu, Baochang Ma, Xiangang Li},
title = {BELLE: Be Everyone's Large Language model Engine},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/LianjiaTech/BELLE}},
}
```
|
BlightZz/DialoGPT-medium-Kurisu
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 19 | 2023-05-11T14:38:51Z |
---
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-sroie7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-sroie7
This model is a fine-tuned version of [Bennet1996/donut-base-sroie6](https://huggingface.co/Bennet1996/donut-base-sroie6) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.29.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
BrianTin/MTBERT
|
[
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 11 | null |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# csalaam/bias-classification-setfit-model
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("csalaam/bias-classification-setfit-model")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
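The two-step recipe described above (contrastive fine-tuning of the sentence transformer, then a classification head) is what `SetFitTrainer` automates. The sketch below uses the `setfit` API of that era with a placeholder base checkpoint and dataset, so treat names and arguments as assumptions rather than this model's actual training script.

```python
from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# Placeholder few-shot data: "text" / "label" columns, a handful of examples per class.
train_ds = Dataset.from_dict({
    "text": ["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"],
    "label": [0, 1],
})

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    loss_class=CosineSimilarityLoss,  # contrastive loss for step 1 (sentence-transformer fine-tuning)
    num_iterations=20,                # how many positive/negative pairs to generate per example
    num_epochs=1,
)
trainer.train()  # runs step 1, then fits the classification head (step 2)
preds = trainer.model(["this model card is surprisingly helpful"])
```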
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
Broadus20/DialoGPT-small-joshua
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 12 | null |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 243.30 +/- 18.09
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
CAMeL-Lab/bert-base-arabic-camelbert-da
|
[
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 449 | 2023-05-11T15:55:58Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1483.07 +/- 260.00
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar-corpus26
|
[
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 45 | null |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### DNNS_FACE Dreambooth model trained by WiNE-iNEFF
TOKEN: dnns or dnns face
Sample pictures of this concept:




|
CAMeL-Lab/bert-base-arabic-camelbert-mix-did-nadi
|
[
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 63 | null |
---
language:
- en
metrics:
- f1
pipeline_tag: text-classification
tags:
- multilabel
---
|
CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-glf
|
[
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 132 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: albert-fa-zwnj-base-v2-finetuned-address-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-fa-zwnj-base-v2-finetuned-address-v1
This model is a fine-tuned version of [HooshvareLab/albert-fa-zwnj-base-v2](https://huggingface.co/HooshvareLab/albert-fa-zwnj-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3935
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.8547 | 1.0 | 6021 | 1.5840 |
| 1.4936 | 2.0 | 12042 | 1.4392 |
| 1.4068 | 3.0 | 18063 | 1.3984 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-msa
|
[
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 1,862 | 2023-05-11T16:05:31Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: ppo
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 274.09 +/- 15.78
name: mean_reward
verified: false
---
# **ppo** Agent playing **LunarLander-v2**
This is a trained model of a **ppo** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
CAMeL-Lab/bert-base-arabic-camelbert-mix-sentiment
|
[
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 855 | null |
---
tags:
- generated_from_trainer
model-index:
- name: kaz_legal_bert_full__1000000_wc__0.2_epoch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kaz_legal_bert_full__1000000_wc__0.2_epoch
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- epoch: 0.17
- eval_loss: 2.1597
- eval_runtime: 18944.3891
- eval_samples_per_second: 97.589
- eval_steps_per_second: 12.199
- step: 20000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
CAMeL-Lab/bert-base-arabic-camelbert-msa-did-nadi
|
[
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 71 | null |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Find your model_id: TootToot/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-glf
|
[
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 21 | null |
---
license: apache-2.0
---
Generated using scripts from https://gitee.com/yhyu13/llama_-tools
Delta weights: https://huggingface.co/FreedomIntelligence/chimera-inst-chat-13b-delta
GPTQ 4-bit weights : https://huggingface.co/Yhyu13/chimera-inst-chat-13b-gptq-4bit/tree/main
Chimera-inst 13B is claimed by its creators to reach 97% of ChatGPT's performance, as evaluated by GPT-4.
|
CBreit00/DialoGPT_small_Rick
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: MultiBert_TOdrugReview
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MultiBert_TOdrugReview
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Accuracy: 1.0
- F1 Micro: 1.0
- F1 Macro: 1.0
- Precision Micro: 1.0
- Precision Macro: 1.0
- Recall Micro: 1.0
- Recall Macro: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Micro | F1 Macro | Precision Micro | Precision Macro | Recall Micro | Recall Macro |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|:--------:|:---------------:|:---------------:|:------------:|:------------:|
| 0.0 | 1.0 | 6909 | 0.0000 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 2.0 | 13818 | 0.0000 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
CLEE/CLEE
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# API Inference

## Get API Key
Get API key from [Stable Diffusion API](http://stablediffusionapi.com/), No Payment needed.
Replace Key in below code, change **model_id** to "DhjS1gNVuJqXUWaOdPYnZ4A5k"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Model link: [View model](https://stablediffusionapi.com/models/DhjS1gNVuJqXUWaOdPYnZ4A5k)
Credits: [View credits](https://civitai.com/?query=model_search)
View all models: [View Models](https://stablediffusionapi.com/models)
```python
import requests
import json

url = "https://stablediffusionapi.com/api/v3/dreambooth"

payload = json.dumps({
  "key": "",
  "model_id": "DhjS1gNVuJqXUWaOdPYnZ4A5k",
  "prompt": "actual 8K portrait photo of gareth person, portrait, happy colors, bright eyes, clear eyes, warm smile, smooth soft skin, big dreamy eyes, beautiful intricate colored hair, symmetrical, anime wide eyes, soft lighting, detailed face, by makoto shinkai, stanley artgerm lau, wlop, rossdraws, concept art, digital painting, looking into camera",
  "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
  "width": "512",
  "height": "512",
  "samples": "1",
  "num_inference_steps": "30",
  "safety_checker": "no",
  "enhance_prompt": "yes",
  "seed": None,
  "guidance_scale": 7.5,
  "multi_lingual": "no",
  "panorama": "no",
  "self_attention": "no",
  "upscale": "no",
  "embeddings": "embeddings_model_id",
  "lora": "lora_model_id",
  "webhook": None,
  "track_id": None
})

headers = {
  'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN**
|
CLTL/icf-levels-enr
|
[
"pytorch",
"roberta",
"text-classification",
"nl",
"transformers",
"license:mit"
] |
text-classification
|
{
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 30 | null |
---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# stablediffusion15 API Inference

## Get API Key
Get an API key from [Stable Diffusion API](http://stablediffusionapi.com/); no payment needed.
Replace the key in the code below and change **model_id** to "stablediffusion15".
Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Model link: [View model](https://stablediffusionapi.com/models/stablediffusion15)
Credits: [View credits](https://civitai.com/?query=stablediffusion15)
View all models: [View Models](https://stablediffusionapi.com/models)
```python
import requests
import json

url = "https://stablediffusionapi.com/api/v3/dreambooth"

payload = json.dumps({
    "key": "",
    "model_id": "stablediffusion15",
    "prompt": "actual 8K portrait photo of gareth person, portrait, happy colors, bright eyes, clear eyes, warm smile, smooth soft skin, big dreamy eyes, beautiful intricate colored hair, symmetrical, anime wide eyes, soft lighting, detailed face, by makoto shinkai, stanley artgerm lau, wlop, rossdraws, concept art, digital painting, looking into camera",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN**
|
CTBC/ATS
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### NewModel Dreambooth model trained by Anna11 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
Calamarii/calamari
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | 2023-05-11T17:22:10Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# tollefj/setfit-nocola-20-iter-25-epochs
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("tollefj/setfit-nocola-20-iter-25-epochs")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
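## Training sketch
The two-step recipe above is only described in prose, so here is a minimal, hedged sketch of how such a model is typically trained with the pre-1.0 `SetFitTrainer` API. The base sentence transformer and the toy acceptability examples are assumptions made for illustration (the actual NoCoLa training data and base checkpoint are not stated in this card); only `num_iterations=20` and `num_epochs=25` are taken from the model name.
```python
from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# Toy binary acceptability data standing in for the real training set (an assumption)
train_ds = Dataset.from_dict({
    "text": ["dette er en helt grei setning", "setning rekkefølge feil dette i er"] * 8,
    "label": [1, 0] * 8,
})

# Base checkpoint is an assumption; the card does not say which sentence transformer was used
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")

trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    loss_class=CosineSimilarityLoss,
    num_iterations=20,  # matches "20-iter" in the model name
    num_epochs=25,      # matches "25-epochs" in the model name
)
# train() runs both steps: contrastive fine-tuning, then fitting the classification head
trainer.train()
print(trainer.model(["en helt grei setning"]))
```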
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
Cameron/BERT-SBIC-offensive
|
[
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 31 | null |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.00 +/- 0.27
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch; the repo id and filename below are placeholders rather than this model's actual Hub location:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Placeholder repo id / filename -- replace with the actual Hub location of this checkpoint
checkpoint = load_from_hub(repo_id="<user>/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
Cameron/BERT-jigsaw-severetoxic
|
[
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 30 | null |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### himanshu_sb Dreambooth model trained by himmug with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
Carlork314/Xd
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: pop-ultmpxp-v4
---
### pop-ultmpxp-v4 Dreambooth model trained by wimvanhenden with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model
You run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
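For completeness, a hedged `diffusers` sketch (the repo id below is a placeholder for wherever this concept ends up on the Hub; only the concept token `pop-ultmpxp-v4` comes from this card):
```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder repo id -- point this at the actual Hub location of the trained concept
pipe = StableDiffusionPipeline.from_pretrained(
    "wimvanhenden/pop-ultmpxp-v4", torch_dtype=torch.float16
).to("cuda")

image = pipe("a concert poster in the style of pop-ultmpxp-v4").images[0]
image.save("sample.png")
```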
Sample pictures of:
pop-ultmpxp-v4 (use that on your prompt)

|
CarlosPR/mt5-spanish-memmories-analysis
|
[
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"MT5ForConditionalGeneration"
],
"model_type": "mt5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7 | null |
---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: ultmpxp
---
### ultmpxp-both-10-10-v4 Dreambooth model trained by wimvanhenden with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model
You run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
Sample pictures of:
ultmpxp (use that on your prompt)

|
CasualHomie/DialoGPT-small-harrypotter
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 11 | null |
---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: ultmpxp-object10-v4
---
### ultmpxp-object10-v4 Dreambooth model trained by wimvanhenden with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model
You run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
Sample pictures of:
ultmpxp-object10-v4 (use that on your prompt)

|
Cathy/reranking_model
|
[
"pytorch",
"roberta",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 27 | null |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### libyajarclo Dreambooth model trained by WALIDALI with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
Cedille/fr-boris
|
[
"pytorch",
"gptj",
"text-generation",
"fr",
"dataset:c4",
"arxiv:2202.03371",
"transformers",
"causal-lm",
"license:mit",
"has_space"
] |
text-generation
|
{
"architectures": [
"GPTJForCausalLM"
],
"model_type": "gptj",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 401 | null |
---
language:
- ru
---
This model is one of the results of my bachelor's thesis. Its main purpose is to detect semantic types of columns in tables containing Russian text.
It can also be used as a table-to-vector encoder for downstream tasks.
You can find more info in this GitHub repo: https://github.com/Elluran/rudoduo.
Also check out the Streamlit demo: https://rudoduo.streamlit.app/
|
dccuchile/albert-base-spanish-finetuned-ner
|
[
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 14 | null |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.71 +/- 0.26
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch; the repo id and filename below are placeholders rather than this model's actual Hub location:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Placeholder repo id / filename -- replace with the actual Hub location of this checkpoint
checkpoint = load_from_hub(repo_id="<user>/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
dccuchile/albert-tiny-spanish-finetuned-ner
|
[
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8 | null |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Find your model_id: afos950/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
dccuchile/albert-xlarge-spanish-finetuned-mldoc
|
[
"pytorch",
"albert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 26 | null |
---
license: creativeml-openrail-m
tags:
- music
---
<div align="center">
<h1>sovits4.0 Model</h1>
<img src="https://static.zerochan.net/Konoe.Kanata.full.3012444.jpg" height="200" alt="emu">
<h1>The model is</h1>
<h1>Kanata Konoe (CV: Akari Kito) from Love Live! Nijigasaki</h1>
</div>
|
dccuchile/albert-xlarge-spanish-finetuned-ner
|
[
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
---
library_name: stable-baselines3
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **PPO** Agent playing **CartPole-v1**
This is a trained model of a **PPO** agent playing **CartPole-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch; the repo id and filename below are placeholders rather than this model's actual Hub location:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Placeholder repo id / filename -- replace with the actual Hub location of this checkpoint
checkpoint = load_from_hub(repo_id="<user>/ppo-CartPole-v1", filename="ppo-CartPole-v1.zip")
model = PPO.load(checkpoint)
```
|
dccuchile/albert-xxlarge-spanish-finetuned-pos
|
[
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | null |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### tripolicasly Dreambooth model trained by WALIDALI with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
dccuchile/albert-xxlarge-spanish-finetuned-xnli
|
[
"pytorch",
"albert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 68 | null |
---
license: gpl-2.0
---
[Hugging Face profile](https://huggingface.co/Innokentiy)
# FlowerNet
## A neural network for multi-class flower classification

## Introduction
The goal of this work is to develop a neural network for multi-class classification that is **highly resistant** to overfitting.
## Dataset
To solve the multi-class flower classification task, I used the tf_flowers dataset from TensorFlow.
The dataset contains 5 flower classes: 'Dandelion', 'Daisy', 'Tulips', 'Sunflowers' and 'Roses', so the final Dense layer has 5 neurons. As for the splits, I divided the data into three parts: 0–80% for training, 80–90% for validation, and 90–100% for testing.
## Network architecture
As the architecture I used Xception. The architecture diagram turned out to be large, so instead of embedding it here I uploaded it to the project files.
The network is designed to run on tensor processing units (TPUs), which makes it possible to increase the number of epochs and the available compute.
## Optimizer and loss function

My goal was to build a robust neural network with high resistance to overfitting.
This is where the tuning begins.
With the Adam optimizer, which I had used before, accuracy reaches 90% but the model overfits. So I approached the problem from a different angle and used the Adagrad (Adaptive Gradient) optimizer: its accuracy at epoch 10 was only 40%, but it keeps improving with more epochs, the validation accuracy always stays above the training accuracy, and there is no overfitting. As the loss function I use SparseCategoricalCrossentropy, since that is the loss you need to use with TPU models. Because the model runs on a TPU and gets through epochs quickly, I increased the number of epochs to one thousand. Adagrad started at 40%, its accuracy gradually improved, and in the end I obtained 89.65% accuracy on the validation data and 87% on the test data. The training curves also show that the model does not overfit. The setup is sketched in the code below.
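A minimal Keras sketch of the setup described above (the 299×299 input size, the batch size, and the ImageNet-initialized Xception backbone are assumptions; the dataset, the 80/10/10 split, the 5-class head, Adagrad, SparseCategoricalCrossentropy, and the 1000 epochs come from the text):
```python
import tensorflow as tf
import tensorflow_datasets as tfds

# 80/10/10 split of tf_flowers, as described above
(train_ds, val_ds, test_ds), info = tfds.load(
    "tf_flowers",
    split=["train[:80%]", "train[80%:90%]", "train[90%:]"],
    as_supervised=True,
    with_info=True,
)

def preprocess(image, label):
    image = tf.image.resize(image, (299, 299))  # Xception's usual input size (an assumption)
    return tf.keras.applications.xception.preprocess_input(image), label

train_ds = train_ds.map(preprocess).batch(32).prefetch(tf.data.AUTOTUNE)
val_ds = val_ds.map(preprocess).batch(32).prefetch(tf.data.AUTOTUNE)

# Xception backbone with a 5-way softmax head (one neuron per flower class)
base = tf.keras.applications.Xception(include_top=False, pooling="avg", input_shape=(299, 299, 3))
model = tf.keras.Sequential([base, tf.keras.layers.Dense(5, activation="softmax")])

# Adagrad + SparseCategoricalCrossentropy, as chosen in the text
model.compile(
    optimizer=tf.keras.optimizers.Adagrad(),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(),
    metrics=["accuracy"],
)
model.fit(train_ds, validation_data=val_ds, epochs=1000)
```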
## Results

The task is complete. I built a model that has robust protection against overfitting and a good accuracy of 87%.
In the project files the model is named FlowerNet.h5
GitHub page: https://github.com/laf3r/FlowerNet
>The program is provided as open source code.
|
dccuchile/bert-base-spanish-wwm-uncased-finetuned-pawsx
|
[
"pytorch",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 24 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: covid-tweets-sentiment-analysis-distilbert-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# covid-tweets-sentiment-analysis-distilbert-model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5979
- Rmse: 0.6680
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
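A hedged sketch of how the list above maps onto the standard `transformers.TrainingArguments`; the output directory is a placeholder and the actual training script is not included in this card:
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="covid-tweets-sentiment-analysis-distilbert-model",  # placeholder
    learning_rate=3e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=16,  # effective train batch size: 2 * 16 = 32
    warmup_steps=500,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    seed=42,
)
```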
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7464 | 2.0 | 500 | 0.5979 | 0.6680 |
| 0.4318 | 4.0 | 1000 | 0.6374 | 0.6327 |
| 0.1694 | 6.0 | 1500 | 0.9439 | 0.6311 |
| 0.072 | 8.0 | 2000 | 1.1471 | 0.6556 |
| 0.0388 | 10.0 | 2500 | 1.2217 | 0.6437 |
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
dccuchile/distilbert-base-spanish-uncased-finetuned-ner
|
[
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"DistilBertForTokenClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 28 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-base
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2522
- Wer: 23.1797
## Model description
More information needed
## Intended uses & limitations
More information needed
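In the meantime, a hedged usage sketch (the repo id below is a placeholder; this card does not state where the fine-tuned checkpoint is published):
```python
from transformers import pipeline

# Placeholder repo id for the fine-tuned checkpoint described in this card
asr = pipeline("automatic-speech-recognition", model="your-username/whisper-base")
print(asr("sample.wav")["text"])
```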
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 10000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 2.1114 | 0.0 | 1 | 2.3698 | 75.1864 |
| 0.3272 | 0.29 | 1000 | 0.4182 | 37.7505 |
| 0.251 | 0.58 | 2000 | 0.3408 | 30.9679 |
| 0.2207 | 0.88 | 3000 | 0.3059 | 28.3058 |
| 0.1779 | 1.17 | 4000 | 0.2890 | 26.7555 |
| 0.1691 | 1.46 | 5000 | 0.2742 | 25.2099 |
| 0.1622 | 1.75 | 6000 | 0.2645 | 24.6840 |
| 0.1397 | 2.04 | 7000 | 0.2587 | 23.8812 |
| 0.1394 | 2.34 | 8000 | 0.2562 | 23.6586 |
| 0.1361 | 2.63 | 9000 | 0.2536 | 23.4633 |
| 0.1356 | 2.92 | 10000 | 0.2522 | 23.1797 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0
- Datasets 2.11.0
- Tokenizers 0.13.3
|
ChaseBread/DialoGPT-small-harrypotter
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 9 | 2023-05-16T08:06:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: masked-sentence-generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# masked-sentence-generation
This model is a fine-tuned version of [TasmiaAzmi/masked-sentence-generation](https://huggingface.co/TasmiaAzmi/masked-sentence-generation) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6685
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.8159 | 0.98 | 100 | 2.6620 |
| 2.7161 | 1.96 | 200 | 2.6630 |
| 2.6516 | 2.94 | 300 | 2.6663 |
| 2.6142 | 3.92 | 400 | 2.6685 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.12.0
- Tokenizers 0.11.0
|
CheonggyeMountain-Sherpa/kogpt-trinity-poem
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 15 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-multilingual-cased-0_6-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-0_6-finetuned-squad
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5981
## Model description
More information needed
## Intended uses & limitations
More information needed
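In the meantime, a hedged usage sketch (the repo id below is a placeholder; this card does not state where the fine-tuned checkpoint is published):
```python
from transformers import pipeline

# Placeholder repo id for the fine-tuned checkpoint described in this card
qa = pipeline(
    "question-answering",
    model="your-username/bert-base-multilingual-cased-0_6-finetuned-squad",
)
print(qa(question="Where is the Eiffel Tower located?",
         context="The Eiffel Tower is located in Paris."))
```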
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.3732 | 1.0 | 4209 | 1.0180 |
| 1.0202 | 2.0 | 8418 | 0.7085 |
| 0.7979 | 3.0 | 12627 | 0.5981 |
### Framework versions
- Transformers 4.26.0
- Pytorch 2.0.0+cu118
- Datasets 2.9.0
- Tokenizers 0.13.3
|
Chun/w-en2zh-hsk
|
[
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 1 | null |
Wrong name; I don't know how to change it. Original model: https://civitai.com/models/55330?modelVersionId=59707
|
Chun/w-en2zh-otm
|
[
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"MBartForConditionalGeneration"
],
"model_type": "mbart",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7 | null |
---
tags:
- generated_from_keras_callback
model-index:
- name: Bert_class_1e-06_266epoch
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Bert_class_1e-06_266epoch
This model is a fine-tuned version of [guoluo/Bert_1.5e_07](https://huggingface.co/guoluo/Bert_1.5e_07) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0213
- Train Accuracy: 0.9976
- Validation Loss: 1.4092
- Validation Accuracy: 0.7254
- Train Lr: 9.653716e-07
- Epoch: 265
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 9.653716e-07, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Train Lr | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:------------:|:-----:|
| 1.2823 | 0.4776 | 1.0993 | 0.6761 | 1e-06 | 0 |
| 1.0339 | 0.6776 | 0.9839 | 0.6761 | 9.99999e-07 | 1 |
| 0.9705 | 0.6776 | 0.9658 | 0.6761 | 9.999969e-07 | 2 |
| 0.9486 | 0.6776 | 0.9590 | 0.6761 | 9.99994e-07 | 3 |
| 0.9369 | 0.6776 | 0.9544 | 0.6761 | 9.9999e-07 | 4 |
| 0.9332 | 0.6776 | 0.9470 | 0.6761 | 9.99985e-07 | 5 |
| 0.9205 | 0.6776 | 0.9421 | 0.6761 | 9.99979e-07 | 6 |
| 0.9135 | 0.6776 | 0.9374 | 0.6761 | 9.999719e-07 | 7 |
| 0.9113 | 0.6776 | 0.9340 | 0.6761 | 9.99964e-07 | 8 |
| 0.9005 | 0.6776 | 0.9294 | 0.6761 | 9.99955e-07 | 9 |
| 0.8896 | 0.6776 | 0.9242 | 0.6761 | 9.99945e-07 | 10 |
| 0.8746 | 0.6800 | 0.9191 | 0.6761 | 9.99934e-07 | 11 |
| 0.8649 | 0.6824 | 0.9143 | 0.6761 | 9.999219e-07 | 12 |
| 0.8621 | 0.6847 | 0.9095 | 0.6761 | 9.999089e-07 | 13 |
| 0.8506 | 0.6847 | 0.9019 | 0.6761 | 9.99895e-07 | 14 |
| 0.8434 | 0.6800 | 0.8943 | 0.6761 | 9.9988e-07 | 15 |
| 0.8286 | 0.6871 | 0.8885 | 0.6761 | 9.998639e-07 | 16 |
| 0.8239 | 0.6824 | 0.8814 | 0.6761 | 9.998469e-07 | 17 |
| 0.8181 | 0.6894 | 0.8785 | 0.6761 | 9.998289e-07 | 18 |
| 0.7962 | 0.6894 | 0.8731 | 0.6690 | 9.998099e-07 | 19 |
| 0.7908 | 0.7012 | 0.8671 | 0.6690 | 9.997899e-07 | 20 |
| 0.7640 | 0.6988 | 0.8641 | 0.6761 | 9.997689e-07 | 21 |
| 0.7644 | 0.7035 | 0.8590 | 0.6831 | 9.997469e-07 | 22 |
| 0.7512 | 0.7200 | 0.8558 | 0.6831 | 9.99724e-07 | 23 |
| 0.7394 | 0.7200 | 0.8527 | 0.6972 | 9.997e-07 | 24 |
| 0.7366 | 0.7271 | 0.8501 | 0.7113 | 9.99675e-07 | 25 |
| 0.7293 | 0.7247 | 0.8471 | 0.7042 | 9.996489e-07 | 26 |
| 0.7189 | 0.7529 | 0.8479 | 0.7113 | 9.99622e-07 | 27 |
| 0.7077 | 0.7341 | 0.8411 | 0.7183 | 9.99594e-07 | 28 |
| 0.6965 | 0.7671 | 0.8409 | 0.7183 | 9.99565e-07 | 29 |
| 0.6838 | 0.7482 | 0.8372 | 0.7113 | 9.99535e-07 | 30 |
| 0.6835 | 0.7506 | 0.8362 | 0.7113 | 9.99504e-07 | 31 |
| 0.6702 | 0.7812 | 0.8365 | 0.6901 | 9.99472e-07 | 32 |
| 0.6623 | 0.7812 | 0.8323 | 0.7113 | 9.994391e-07 | 33 |
| 0.6565 | 0.7553 | 0.8298 | 0.6972 | 9.994051e-07 | 34 |
| 0.6452 | 0.7718 | 0.8291 | 0.6901 | 9.993701e-07 | 35 |
| 0.6396 | 0.7718 | 0.8285 | 0.7113 | 9.993341e-07 | 36 |
| 0.6299 | 0.7765 | 0.8262 | 0.6831 | 9.992972e-07 | 37 |
| 0.6230 | 0.7953 | 0.8364 | 0.7113 | 9.992592e-07 | 38 |
| 0.6095 | 0.7741 | 0.8233 | 0.7113 | 9.992202e-07 | 39 |
| 0.6193 | 0.7718 | 0.8206 | 0.7113 | 9.991802e-07 | 40 |
| 0.6008 | 0.7859 | 0.8260 | 0.7254 | 9.991393e-07 | 41 |
| 0.5967 | 0.7859 | 0.8199 | 0.7254 | 9.990973e-07 | 42 |
| 0.5883 | 0.7835 | 0.8189 | 0.7183 | 9.990544e-07 | 43 |
| 0.5751 | 0.8071 | 0.8279 | 0.7324 | 9.990104e-07 | 44 |
| 0.5709 | 0.8000 | 0.8204 | 0.7324 | 9.989654e-07 | 45 |
| 0.5697 | 0.8047 | 0.8229 | 0.7254 | 9.989195e-07 | 46 |
| 0.5580 | 0.8094 | 0.8152 | 0.7254 | 9.988726e-07 | 47 |
| 0.5595 | 0.8071 | 0.8275 | 0.7324 | 9.988246e-07 | 48 |
| 0.5486 | 0.7929 | 0.8168 | 0.7324 | 9.987757e-07 | 49 |
| 0.5400 | 0.8094 | 0.8239 | 0.7254 | 9.987258e-07 | 50 |
| 0.5352 | 0.8071 | 0.8190 | 0.7183 | 9.986749e-07 | 51 |
| 0.5141 | 0.8235 | 0.8171 | 0.7183 | 9.986229e-07 | 52 |
| 0.5324 | 0.8024 | 0.8191 | 0.7183 | 9.985699e-07 | 53 |
| 0.5123 | 0.8024 | 0.8279 | 0.7254 | 9.98516e-07 | 54 |
| 0.5151 | 0.8165 | 0.8213 | 0.7113 | 9.984611e-07 | 55 |
| 0.4986 | 0.8118 | 0.8176 | 0.7183 | 9.984052e-07 | 56 |
| 0.4925 | 0.8259 | 0.8208 | 0.7113 | 9.983482e-07 | 57 |
| 0.4848 | 0.8188 | 0.8182 | 0.7042 | 9.982904e-07 | 58 |
| 0.4952 | 0.8282 | 0.8214 | 0.7113 | 9.982315e-07 | 59 |
| 0.4837 | 0.8329 | 0.8192 | 0.7113 | 9.981716e-07 | 60 |
| 0.4513 | 0.8518 | 0.8224 | 0.7183 | 9.981106e-07 | 61 |
| 0.4628 | 0.8376 | 0.8227 | 0.7183 | 9.980488e-07 | 62 |
| 0.4633 | 0.8447 | 0.8246 | 0.7183 | 9.979859e-07 | 63 |
| 0.4472 | 0.8447 | 0.8256 | 0.7113 | 9.97922e-07 | 64 |
| 0.4529 | 0.8306 | 0.8285 | 0.7183 | 9.978571e-07 | 65 |
| 0.4579 | 0.8329 | 0.8331 | 0.7042 | 9.977913e-07 | 66 |
| 0.4326 | 0.8376 | 0.8278 | 0.7113 | 9.977244e-07 | 67 |
| 0.4255 | 0.8447 | 0.8265 | 0.7113 | 9.976566e-07 | 68 |
| 0.4322 | 0.8494 | 0.8293 | 0.7042 | 9.975878e-07 | 69 |
| 0.4189 | 0.8424 | 0.8382 | 0.7042 | 9.97518e-07 | 70 |
| 0.4236 | 0.8494 | 0.8302 | 0.7113 | 9.974472e-07 | 71 |
| 0.4025 | 0.8494 | 0.8364 | 0.7042 | 9.973753e-07 | 72 |
| 0.4225 | 0.8659 | 0.8370 | 0.7113 | 9.973025e-07 | 73 |
| 0.4027 | 0.8541 | 0.8377 | 0.7042 | 9.972288e-07 | 74 |
| 0.4090 | 0.8588 | 0.8381 | 0.7113 | 9.97154e-07 | 75 |
| 0.3887 | 0.8682 | 0.8378 | 0.7042 | 9.970781e-07 | 76 |
| 0.4022 | 0.8706 | 0.8406 | 0.7042 | 9.970014e-07 | 77 |
| 0.3867 | 0.8682 | 0.8457 | 0.7113 | 9.969236e-07 | 78 |
| 0.3689 | 0.8706 | 0.8460 | 0.7113 | 9.968448e-07 | 79 |
| 0.3728 | 0.8729 | 0.8527 | 0.7042 | 9.967652e-07 | 80 |
| 0.3754 | 0.8706 | 0.8525 | 0.7042 | 9.966844e-07 | 81 |
| 0.3580 | 0.8871 | 0.8531 | 0.7113 | 9.966027e-07 | 82 |
| 0.3718 | 0.8659 | 0.8593 | 0.7042 | 9.965199e-07 | 83 |
| 0.3535 | 0.8800 | 0.8593 | 0.7324 | 9.964363e-07 | 84 |
| 0.3342 | 0.8824 | 0.8704 | 0.6972 | 9.963516e-07 | 85 |
| 0.3341 | 0.8918 | 0.8630 | 0.7324 | 9.962658e-07 | 86 |
| 0.3371 | 0.8776 | 0.8698 | 0.7042 | 9.961792e-07 | 87 |
| 0.3338 | 0.8847 | 0.8689 | 0.7042 | 9.960916e-07 | 88 |
| 0.3295 | 0.8776 | 0.8753 | 0.6972 | 9.960029e-07 | 89 |
| 0.3259 | 0.8847 | 0.8696 | 0.7183 | 9.959133e-07 | 90 |
| 0.3290 | 0.8776 | 0.8726 | 0.7183 | 9.958227e-07 | 91 |
| 0.3117 | 0.8988 | 0.8798 | 0.7324 | 9.95731e-07 | 92 |
| 0.3075 | 0.8965 | 0.8836 | 0.7254 | 9.956385e-07 | 93 |
| 0.2905 | 0.9129 | 0.8868 | 0.7183 | 9.95545e-07 | 94 |
| 0.2979 | 0.9153 | 0.8888 | 0.7183 | 9.954504e-07 | 95 |
| 0.3031 | 0.8800 | 0.8956 | 0.7324 | 9.953548e-07 | 96 |
| 0.2883 | 0.9035 | 0.8984 | 0.7042 | 9.952582e-07 | 97 |
| 0.2835 | 0.9106 | 0.8969 | 0.7254 | 9.951607e-07 | 98 |
| 0.2803 | 0.9059 | 0.8998 | 0.7254 | 9.950621e-07 | 99 |
| 0.2812 | 0.9176 | 0.9034 | 0.7254 | 9.949626e-07 | 100 |
| 0.2714 | 0.9153 | 0.9028 | 0.7183 | 9.948621e-07 | 101 |
| 0.2905 | 0.9059 | 0.9144 | 0.7254 | 9.947606e-07 | 102 |
| 0.2631 | 0.9224 | 0.9143 | 0.6972 | 9.946582e-07 | 103 |
| 0.2679 | 0.9176 | 0.9180 | 0.7254 | 9.945547e-07 | 104 |
| 0.2583 | 0.9224 | 0.9206 | 0.7042 | 9.944504e-07 | 105 |
| 0.2613 | 0.9200 | 0.9286 | 0.7254 | 9.94345e-07 | 106 |
| 0.2669 | 0.9012 | 0.9237 | 0.7254 | 9.942386e-07 | 107 |
| 0.2571 | 0.9153 | 0.9351 | 0.7254 | 9.941313e-07 | 108 |
| 0.2570 | 0.9106 | 0.9306 | 0.7324 | 9.940229e-07 | 109 |
| 0.2344 | 0.9200 | 0.9396 | 0.7183 | 9.939135e-07 | 110 |
| 0.2359 | 0.9271 | 0.9369 | 0.7394 | 9.938033e-07 | 111 |
| 0.2395 | 0.9271 | 0.9522 | 0.7042 | 9.93692e-07 | 112 |
| 0.2408 | 0.9247 | 0.9509 | 0.7183 | 9.935796e-07 | 113 |
| 0.2330 | 0.9294 | 0.9561 | 0.7042 | 9.934664e-07 | 114 |
| 0.2247 | 0.9271 | 0.9539 | 0.7183 | 9.933522e-07 | 115 |
| 0.2192 | 0.9318 | 0.9705 | 0.7042 | 9.93237e-07 | 116 |
| 0.2173 | 0.9341 | 0.9621 | 0.7254 | 9.931208e-07 | 117 |
| 0.2138 | 0.9200 | 0.9679 | 0.7183 | 9.930036e-07 | 118 |
| 0.2239 | 0.9176 | 0.9733 | 0.6972 | 9.928855e-07 | 119 |
| 0.2188 | 0.9341 | 0.9838 | 0.7042 | 9.927663e-07 | 120 |
| 0.2116 | 0.9341 | 0.9764 | 0.7324 | 9.926462e-07 | 121 |
| 0.2061 | 0.9200 | 0.9840 | 0.7183 | 9.925251e-07 | 122 |
| 0.2061 | 0.9435 | 0.9798 | 0.7254 | 9.92403e-07 | 123 |
| 0.2049 | 0.9388 | 1.0056 | 0.7042 | 9.9228e-07 | 124 |
| 0.1947 | 0.9459 | 0.9898 | 0.7254 | 9.92156e-07 | 125 |
| 0.1990 | 0.9365 | 0.9935 | 0.6972 | 9.92031e-07 | 126 |
| 0.1945 | 0.9506 | 0.9997 | 0.7113 | 9.91905e-07 | 127 |
| 0.1955 | 0.9365 | 0.9972 | 0.7254 | 9.91778e-07 | 128 |
| 0.1845 | 0.9459 | 1.0044 | 0.7254 | 9.916502e-07 | 129 |
| 0.1722 | 0.9388 | 1.0057 | 0.7183 | 9.915212e-07 | 130 |
| 0.1693 | 0.9576 | 1.0118 | 0.7113 | 9.913914e-07 | 131 |
| 0.1837 | 0.9318 | 1.0126 | 0.7113 | 9.912605e-07 | 132 |
| 0.1894 | 0.9412 | 1.0254 | 0.6972 | 9.911287e-07 | 133 |
| 0.1702 | 0.9506 | 1.0156 | 0.7254 | 9.909959e-07 | 134 |
| 0.1697 | 0.9576 | 1.0184 | 0.7183 | 9.908621e-07 | 135 |
| 0.1694 | 0.9459 | 1.0179 | 0.7394 | 9.907274e-07 | 136 |
| 0.1587 | 0.9553 | 1.0255 | 0.7183 | 9.905916e-07 | 137 |
| 0.1590 | 0.9576 | 1.0308 | 0.7324 | 9.90455e-07 | 138 |
| 0.1670 | 0.9576 | 1.0376 | 0.7254 | 9.903173e-07 | 139 |
| 0.1606 | 0.9482 | 1.0405 | 0.7254 | 9.901787e-07 | 140 |
| 0.1605 | 0.9576 | 1.0468 | 0.7324 | 9.900391e-07 | 141 |
| 0.1476 | 0.9624 | 1.0470 | 0.7183 | 9.898986e-07 | 142 |
| 0.1493 | 0.9553 | 1.0530 | 0.7183 | 9.89757e-07 | 143 |
| 0.1292 | 0.9718 | 1.0573 | 0.7183 | 9.896146e-07 | 144 |
| 0.1393 | 0.9694 | 1.0655 | 0.7183 | 9.894711e-07 | 145 |
| 0.1458 | 0.9529 | 1.0627 | 0.7324 | 9.893266e-07 | 146 |
| 0.1319 | 0.9694 | 1.0809 | 0.7042 | 9.891812e-07 | 147 |
| 0.1358 | 0.9624 | 1.0716 | 0.7254 | 9.890348e-07 | 148 |
| 0.1514 | 0.9624 | 1.0863 | 0.7113 | 9.888875e-07 | 149 |
| 0.1384 | 0.9624 | 1.0777 | 0.7324 | 9.887391e-07 | 150 |
| 0.1286 | 0.9694 | 1.0907 | 0.7113 | 9.885898e-07 | 151 |
| 0.1316 | 0.9694 | 1.0914 | 0.7183 | 9.884395e-07 | 152 |
| 0.1310 | 0.9671 | 1.0933 | 0.7183 | 9.882883e-07 | 153 |
| 0.1331 | 0.9647 | 1.0940 | 0.7254 | 9.881361e-07 | 154 |
| 0.1225 | 0.9718 | 1.0998 | 0.7183 | 9.87983e-07 | 155 |
| 0.1176 | 0.9718 | 1.1027 | 0.7183 | 9.878289e-07 | 156 |
| 0.1205 | 0.9671 | 1.1042 | 0.7183 | 9.876738e-07 | 157 |
| 0.1295 | 0.9647 | 1.1100 | 0.7183 | 9.875179e-07 | 158 |
| 0.1097 | 0.9718 | 1.1243 | 0.7183 | 9.873609e-07 | 159 |
| 0.1072 | 0.9812 | 1.1196 | 0.7183 | 9.87203e-07 | 160 |
| 0.1063 | 0.9788 | 1.1262 | 0.7254 | 9.87044e-07 | 161 |
| 0.1208 | 0.9647 | 1.1248 | 0.7042 | 9.868842e-07 | 162 |
| 0.1120 | 0.9694 | 1.1296 | 0.7183 | 9.867233e-07 | 163 |
| 0.1123 | 0.9694 | 1.1367 | 0.7183 | 9.865615e-07 | 164 |
| 0.0972 | 0.9882 | 1.1382 | 0.7183 | 9.863987e-07 | 165 |
| 0.1175 | 0.9647 | 1.1515 | 0.7254 | 9.86235e-07 | 166 |
| 0.1136 | 0.9741 | 1.1551 | 0.7183 | 9.860704e-07 | 167 |
| 0.0929 | 0.9859 | 1.1558 | 0.7183 | 9.859048e-07 | 168 |
| 0.0895 | 0.9812 | 1.1637 | 0.7183 | 9.857382e-07 | 169 |
| 0.1013 | 0.9718 | 1.1599 | 0.7183 | 9.855706e-07 | 170 |
| 0.1026 | 0.9718 | 1.1607 | 0.7183 | 9.854022e-07 | 171 |
| 0.0983 | 0.9788 | 1.1601 | 0.7254 | 9.852326e-07 | 172 |
| 0.0809 | 0.9882 | 1.1673 | 0.7183 | 9.850622e-07 | 173 |
| 0.0923 | 0.9765 | 1.1763 | 0.7254 | 9.848909e-07 | 174 |
| 0.0840 | 0.9835 | 1.1775 | 0.7254 | 9.847186e-07 | 175 |
| 0.0887 | 0.9812 | 1.1881 | 0.7254 | 9.845453e-07 | 176 |
| 0.0922 | 0.9718 | 1.1893 | 0.7254 | 9.84371e-07 | 177 |
| 0.0794 | 0.9882 | 1.1944 | 0.7254 | 9.841958e-07 | 178 |
| 0.0826 | 0.9835 | 1.2019 | 0.7113 | 9.840197e-07 | 179 |
| 0.0725 | 0.9929 | 1.1993 | 0.7254 | 9.838426e-07 | 180 |
| 0.0727 | 0.9929 | 1.2000 | 0.7113 | 9.836646e-07 | 181 |
| 0.0759 | 0.9859 | 1.2061 | 0.7254 | 9.834856e-07 | 182 |
| 0.0945 | 0.9788 | 1.2160 | 0.7113 | 9.833057e-07 | 183 |
| 0.0796 | 0.9812 | 1.2021 | 0.7254 | 9.831248e-07 | 184 |
| 0.0792 | 0.9835 | 1.2152 | 0.7183 | 9.829429e-07 | 185 |
| 0.0803 | 0.9859 | 1.2169 | 0.7183 | 9.827601e-07 | 186 |
| 0.0835 | 0.9812 | 1.2237 | 0.7183 | 9.825764e-07 | 187 |
| 0.0680 | 0.9859 | 1.2224 | 0.7113 | 9.823916e-07 | 188 |
| 0.0898 | 0.9812 | 1.2188 | 0.7183 | 9.82206e-07 | 189 |
| 0.0780 | 0.9788 | 1.2196 | 0.7113 | 9.820194e-07 | 190 |
| 0.0759 | 0.9835 | 1.2473 | 0.6901 | 9.818318e-07 | 191 |
| 0.0915 | 0.9694 | 1.2324 | 0.7042 | 9.816433e-07 | 192 |
| 0.0767 | 0.9859 | 1.2285 | 0.7042 | 9.814539e-07 | 193 |
| 0.0663 | 0.9906 | 1.2300 | 0.7113 | 9.812636e-07 | 194 |
| 0.0795 | 0.9835 | 1.2481 | 0.7042 | 9.810723e-07 | 195 |
| 0.0686 | 0.9882 | 1.2451 | 0.7042 | 9.8088e-07 | 196 |
| 0.0702 | 0.9835 | 1.2363 | 0.7113 | 9.806869e-07 | 197 |
| 0.0751 | 0.9812 | 1.2419 | 0.7113 | 9.804927e-07 | 198 |
| 0.0680 | 0.9859 | 1.2398 | 0.7113 | 9.802976e-07 | 199 |
| 0.0543 | 0.9882 | 1.2477 | 0.7042 | 9.801016e-07 | 200 |
| 0.0666 | 0.9835 | 1.2703 | 0.6972 | 9.799047e-07 | 201 |
| 0.0704 | 0.9859 | 1.2476 | 0.7042 | 9.797068e-07 | 202 |
| 0.0634 | 0.9859 | 1.2609 | 0.7042 | 9.79508e-07 | 203 |
| 0.0650 | 0.9882 | 1.2557 | 0.7113 | 9.793082e-07 | 204 |
| 0.0533 | 0.9976 | 1.2743 | 0.7113 | 9.791074e-07 | 205 |
| 0.0585 | 0.9882 | 1.2753 | 0.7113 | 9.789057e-07 | 206 |
| 0.0596 | 0.9929 | 1.2881 | 0.7042 | 9.787032e-07 | 207 |
| 0.0593 | 0.9953 | 1.2948 | 0.7042 | 9.784997e-07 | 208 |
| 0.0625 | 0.9859 | 1.2883 | 0.7042 | 9.782952e-07 | 209 |
| 0.0556 | 0.9929 | 1.2802 | 0.7113 | 9.780898e-07 | 210 |
| 0.0615 | 0.9812 | 1.2972 | 0.7113 | 9.778835e-07 | 211 |
| 0.0621 | 0.9859 | 1.3030 | 0.6972 | 9.776762e-07 | 212 |
| 0.0559 | 0.9882 | 1.2857 | 0.7183 | 9.774681e-07 | 213 |
| 0.0635 | 0.9859 | 1.3151 | 0.7042 | 9.772589e-07 | 214 |
| 0.0544 | 0.9882 | 1.2969 | 0.7113 | 9.770488e-07 | 215 |
| 0.0477 | 0.9976 | 1.2981 | 0.7113 | 9.768378e-07 | 216 |
| 0.0554 | 0.9882 | 1.3156 | 0.7113 | 9.766259e-07 | 217 |
| 0.0548 | 0.9906 | 1.3094 | 0.7113 | 9.76413e-07 | 218 |
| 0.0470 | 0.9976 | 1.3185 | 0.7042 | 9.761993e-07 | 219 |
| 0.0489 | 0.9953 | 1.3197 | 0.7042 | 9.759846e-07 | 220 |
| 0.0436 | 0.9976 | 1.3024 | 0.7113 | 9.757689e-07 | 221 |
| 0.0456 | 0.9953 | 1.3061 | 0.7113 | 9.755523e-07 | 222 |
| 0.0417 | 0.9976 | 1.3189 | 0.7042 | 9.753348e-07 | 223 |
| 0.0416 | 0.9953 | 1.3220 | 0.7042 | 9.751164e-07 | 224 |
| 0.0369 | 1.0 | 1.3211 | 0.7113 | 9.748971e-07 | 225 |
| 0.0570 | 0.9859 | 1.3274 | 0.7042 | 9.746768e-07 | 226 |
| 0.0416 | 0.9929 | 1.3409 | 0.6901 | 9.744556e-07 | 227 |
| 0.0314 | 1.0 | 1.3376 | 0.7042 | 9.742334e-07 | 228 |
| 0.0421 | 0.9929 | 1.3242 | 0.7183 | 9.740104e-07 | 229 |
| 0.0398 | 0.9976 | 1.3331 | 0.7042 | 9.737864e-07 | 230 |
| 0.0483 | 0.9882 | 1.3431 | 0.7042 | 9.735616e-07 | 231 |
| 0.0356 | 0.9953 | 1.3526 | 0.7042 | 9.733358e-07 | 232 |
| 0.0392 | 0.9953 | 1.3500 | 0.7042 | 9.731091e-07 | 233 |
| 0.0413 | 0.9953 | 1.3659 | 0.6972 | 9.728815e-07 | 234 |
| 0.0371 | 0.9929 | 1.3473 | 0.7042 | 9.726529e-07 | 235 |
| 0.0383 | 0.9929 | 1.3689 | 0.6972 | 9.724233e-07 | 236 |
| 0.0452 | 0.9953 | 1.3552 | 0.7042 | 9.721929e-07 | 237 |
| 0.0408 | 0.9953 | 1.3430 | 0.7113 | 9.719615e-07 | 238 |
| 0.0507 | 0.9906 | 1.3656 | 0.7042 | 9.717293e-07 | 239 |
| 0.0437 | 0.9953 | 1.3735 | 0.6972 | 9.714961e-07 | 240 |
| 0.0368 | 0.9929 | 1.3713 | 0.7113 | 9.71262e-07 | 241 |
| 0.0381 | 0.9976 | 1.3793 | 0.6972 | 9.71027e-07 | 242 |
| 0.0369 | 0.9953 | 1.3835 | 0.7113 | 9.707911e-07 | 243 |
| 0.0343 | 0.9976 | 1.3778 | 0.7183 | 9.705543e-07 | 244 |
| 0.0321 | 0.9929 | 1.3790 | 0.7113 | 9.703166e-07 | 245 |
| 0.0367 | 0.9953 | 1.3830 | 0.7113 | 9.70078e-07 | 246 |
| 0.0302 | 0.9953 | 1.3828 | 0.7113 | 9.698384e-07 | 247 |
| 0.0333 | 0.9929 | 1.3821 | 0.7113 | 9.69598e-07 | 248 |
| 0.0386 | 0.9929 | 1.3962 | 0.7113 | 9.693566e-07 | 249 |
| 0.0335 | 0.9929 | 1.4009 | 0.7113 | 9.691144e-07 | 250 |
| 0.0481 | 0.9835 | 1.3924 | 0.7113 | 9.688712e-07 | 251 |
| 0.0361 | 0.9953 | 1.3923 | 0.7113 | 9.686271e-07 | 252 |
| 0.0343 | 0.9906 | 1.4150 | 0.6972 | 9.683821e-07 | 253 |
| 0.0429 | 0.9906 | 1.3859 | 0.7254 | 9.681362e-07 | 254 |
| 0.0353 | 0.9906 | 1.4019 | 0.7113 | 9.678894e-07 | 255 |
| 0.0317 | 0.9929 | 1.4072 | 0.7113 | 9.676417e-07 | 256 |
| 0.0231 | 1.0 | 1.4038 | 0.7113 | 9.67393e-07 | 257 |
| 0.0240 | 1.0 | 1.4172 | 0.7183 | 9.671435e-07 | 258 |
| 0.0358 | 0.9882 | 1.4316 | 0.7042 | 9.66893e-07 | 259 |
| 0.0381 | 0.9906 | 1.4047 | 0.7254 | 9.666417e-07 | 260 |
| 0.0311 | 0.9929 | 1.4056 | 0.7113 | 9.663894e-07 | 261 |
| 0.0274 | 0.9976 | 1.4240 | 0.7113 | 9.661362e-07 | 262 |
| 0.0305 | 0.9976 | 1.4322 | 0.7113 | 9.658822e-07 | 263 |
| 0.0322 | 0.9929 | 1.4127 | 0.7183 | 9.656274e-07 | 264 |
| 0.0213 | 0.9976 | 1.4092 | 0.7254 | 9.653716e-07 | 265 |
### Framework versions
- Transformers 4.30.0.dev0
- TensorFlow 2.9.1
- Datasets 2.8.0
- Tokenizers 0.13.2
|
Chun/w-zh2en-hsk
|
[
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | null |
---
license: other
language:
- en
thumbnail:
tags:
- text generation
- conversational
inference: false
---
# Pygmalion 7B (q4_0 ggml version)
Converted from the XORed weights from [PygmalionAI](https://huggingface.co/PygmalionAI/pygmalion-7b) (i.e. ready for use).
It was additionally converted from bfloat16 to float16, then from float16 to ggml float16, and finally quantized from ggml float16 to ggml q4_0.
|
Chun/w-zh2en-mtm
|
[
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"MBartForConditionalGeneration"
],
"model_type": "mbart",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8 | null |
---
license: other
language:
- en
thumbnail:
tags:
- text generation
- conversational
inference: false
---
# Pygmalion 7B (q4_1 ggml version)
Converted from the XORed weights from [PygmalionAI](https://huggingface.co/PygmalionAI/pygmalion-7b) (i.e. ready for use).
It was additionally converted from bfloat16 to float16, then from float16 to ggml float16, and finally quantized from ggml float16 to ggml q4_1.
|
Chun/w-zh2en-mto
|
[
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"MBartForConditionalGeneration"
],
"model_type": "mbart",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7 | null |
---
license: other
language:
- en
thumbnail:
tags:
- text generation
- conversational
inference: false
---
# Pygmalion 7B (q5_0 ggml version)
Converted from the XORed weights from [PygmalionAI](https://huggingface.co/PygmalionAI/pygmalion-7b) (i.e. ready for use).
Additional conversion steps:
- bfloat16 → float16
- float16 → ggml float16
- ggml float16 → ggml q5_0 (quantized)
|
Chungu424/DATA
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: other
language:
- en
thumbnail:
tags:
- text generation
- conversational
inference: false
---
# Pygmalion 7B (q5_1 ggml version)
Converted from the XORed weights from [PygmalionAI](https://huggingface.co/PygmalionAI/pygmalion-7b) (i.e. ready for use).
Additional conversion steps:
- bfloat16 → float16
- float16 → ggml float16
- ggml float16 → ggml q5_1 (quantized)
|
Chungu424/qazwsx
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | 2023-05-11T22:40:20Z |
---
license: other
language:
- en
thumbnail:
tags:
- text generation
- conversational
inference: false
---
# Pygmalion 7B (q8_0 ggml version)
Converted from the XORed weights from [PygmalionAI](https://huggingface.co/PygmalionAI/pygmalion-7b) (i.e. ready for use).
Additional conversion steps:
- bfloat16 → float16
- float16 → ggml float16
- ggml float16 → ggml q8_0 (quantized)
|
Chungu424/repodata
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: apache-2.0
pipeline_tag: translation
---
The original model is at:
https://www.modelscope.cn/models/damo/nlp_csanmt_translation_en2zh/summary
## Usage
Option 1: use the modelscope library
```
pip install modelscope subword_nmt sacremoses jieba
```
```python
from huggingface_hub import snapshot_download
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

# Download the model files from the Hub and build a translation pipeline
model_dir = snapshot_download('modelscope-unofficial/damo-csanmt-en-zh-large')
pipe = pipeline(task=Tasks.translation, model=model_dir)

# Single-sentence translation
input_sequence = 'Elon Musk, co-founder and chief executive officer of Tesla Motors.'
outputs = pipe(input=input_sequence)
print(outputs['translation']) # '特斯拉汽车公司联合创始人兼首席执行官埃隆 · 马斯克。'

# Batch translation: join the sentences with the <SENT_SPLIT> delimiter,
# then split the output on the same delimiter
batch_input_sequences = [
    "Elon Musk, co-founder and chief executive officer of Tesla Motors.",
    "What's the weather like today?"
]
input_sequence = '<SENT_SPLIT>'.join(batch_input_sequences)
outputs = pipe(input=input_sequence)
print(outputs['translation'].split('<SENT_SPLIT>'))
```
|
Chuu/Chumar
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | 2023-05-11T22:52:00Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5751
- Accuracy: 0.0021
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an equivalent `TrainingArguments` sketch follows the list):
- learning_rate: 0.0006058454513356471
- train_batch_size: 16
- eval_batch_size: 32
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 15
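For reference, a minimal sketch of `TrainingArguments` that mirrors the hyperparameters above; the `output_dir` and the rest of the Trainer wiring are placeholders, not taken from this card.
```python
from transformers import TrainingArguments

# Mirrors the hyperparameter list above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="output",
    learning_rate=6.058454513356471e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    seed=1,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.01,
    num_train_epochs=15,
)
```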
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2205 | 1.25 | 315 | 0.8209 | 0.0010 |
| 0.813 | 2.51 | 630 | 0.7684 | 0.0009 |
| 0.7645 | 3.76 | 945 | 0.7393 | 0.0008 |
| 0.7249 | 5.02 | 1260 | 0.6980 | 0.0007 |
| 0.6832 | 6.27 | 1575 | 0.6646 | 0.0003 |
| 0.6426 | 7.53 | 1890 | 0.6371 | 0.0019 |
| 0.6034 | 8.78 | 2205 | 0.6041 | 0.0020 |
| 0.564 | 10.04 | 2520 | 0.5897 | 0.0018 |
| 0.5253 | 11.29 | 2835 | 0.5857 | 0.0018 |
| 0.4961 | 12.55 | 3150 | 0.5771 | 0.0017 |
| 0.4752 | 13.8 | 3465 | 0.5751 | 0.0021 |
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Cinnamon/electra-small-japanese-discriminator
|
[
"pytorch",
"electra",
"pretraining",
"ja",
"transformers",
"license:apache-2.0"
] | null |
{
"architectures": [
"ElectraForPreTraining"
],
"model_type": "electra",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 419 | null |
---
license: apache-2.0
pipeline_tag: translation
---
The original model is at:
https://www.modelscope.cn/models/damo/nlp_csanmt_translation_en2zh_base/summary
## Usage
Option 1: use the modelscope library
```
pip install modelscope subword_nmt sacremoses jieba
```
```python
from huggingface_hub import snapshot_download
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

# Download the model files from the Hub and build a translation pipeline
model_dir = snapshot_download('modelscope-unofficial/damo-csanmt-en-zh-base')
pipe = pipeline(task=Tasks.translation, model=model_dir)

# Single-sentence translation
input_sequence = 'Elon Musk, co-founder and chief executive officer of Tesla Motors.'
outputs = pipe(input=input_sequence)
print(outputs['translation']) # 'Elon Musk,特斯拉汽车公司联合创始人兼首席执行官。'

# Batch translation: join the sentences with the <SENT_SPLIT> delimiter,
# then split the output on the same delimiter
batch_input_sequences = [
    "Elon Musk, co-founder and chief executive officer of Tesla Motors.",
    "What's the weather like today?"
]
input_sequence = '<SENT_SPLIT>'.join(batch_input_sequences)
outputs = pipe(input=input_sequence)
print(outputs['translation'].split('<SENT_SPLIT>'))
```
|
CleveGreen/FieldClassifier_v2_gpt
|
[
"pytorch",
"gpt2",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"GPT2ForSequenceClassification"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 26 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
metrics:
- rouge
- bleu
model-index:
- name: flan-t5-qg-tarek-test-SQUAD
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: squad
type: squad
config: plain_text
split: validation
args: plain_text
metrics:
- name: Rouge1
type: rouge
value: 51.7909
- name: Bleu
type: bleu
value:
bleu: 0.20591622940642762
precisions:
- 0.5204716701541998
- 0.2619394913622878
- 0.1654269529647818
- 0.10906716558470096
brevity_penalty: 0.9246257500068106
length_ratio: 0.927328752862794
translation_length: 111349
reference_length: 120075
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-qg-tarek-test-SQUAD
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6101
- Rouge1: 51.7909
- Rouge2: 29.105
- Rougel: 47.9342
- Rougelsum: 47.9206
- Gen Len: 14.3335
- Meteor: {'meteor': 0.46880070390780776}
- Bleu: {'bleu': 0.20591622940642762, 'precisions': [0.5204716701541998, 0.2619394913622878, 0.1654269529647818, 0.10906716558470096], 'brevity_penalty': 0.9246257500068106, 'length_ratio': 0.927328752862794, 'translation_length': 111349, 'reference_length': 120075}
## Model description
More information needed
## Intended uses & limitations
More information needed
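As a rough guide, the model can be used for question generation with the standard seq2seq API. The checkpoint path and the input format below are assumptions (the card does not state the full hub id or the exact prompt template):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hypothetical checkpoint location; replace with the actual hub id or a local directory.
ckpt = "./flan-t5-qg-tarek-test-SQUAD"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSeq2SeqLM.from_pretrained(ckpt)

# Assumed input: a SQuAD-style context passage from which to generate a question.
context = "The Eiffel Tower was completed in 1889 and is located in Paris."
inputs = tokenizer(context, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```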
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | Meteor | Bleu |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|:-------------------------------:|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| 0.6576 | 1.0 | 10950 | 0.6301 | 50.2515 | 27.5194 | 46.5001 | 46.4993 | 14.0607 | {'meteor': 0.44978235923873977} | {'bleu': 0.19104342466719015, 'precisions': [0.5159747201089206, 0.2503337307531615, 0.15519111953679066, 0.10098320626550465], 'brevity_penalty': 0.9006709188377698, 'length_ratio': 0.9052925255048928, 'translation_length': 108703, 'reference_length': 120075} |
| 0.5315 | 2.0 | 21900 | 0.6101 | 51.7909 | 29.105 | 47.9342 | 47.9206 | 14.3335 | {'meteor': 0.46880070390780776} | {'bleu': 0.20591622940642762, 'precisions': [0.5204716701541998, 0.2619394913622878, 0.1654269529647818, 0.10906716558470096], 'brevity_penalty': 0.9246257500068106, 'length_ratio': 0.927328752862794, 'translation_length': 111349, 'reference_length': 120075} |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
CleveGreen/JobClassifier_v2
|
[
"pytorch",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 37 | null |
---
license: other
---
mostly non-mine
yodayno v2:
```
This license allows you to use the model, but only for non-commercial purposes. You cannot use the model or any part of it in a paid service or sell it.
If you use the model on any platform, you must provide a link or reference to the original model. You must give credit to the licensor whenever you use the model.
The licensor does not provide any warranty and is not liable for any damages caused by the use of the model.
If you break any of the terms, this license will be terminated.
This license is governed by the laws of the jurisdiction in which the licensor is located.
```
|
CodeDanCode/SP-KyleBot
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 15 | null |
---
license: apache-2.0
datasets:
- asset
- wi_locness
- GEM/wiki_auto_asset_turk
- discofuse
- zaemyung/IteraTeR_plus
- jfleg
language:
- en
metrics:
- sari
- bleu
- accuracy
---
# Model Card for CoEdIT-Large
This model was obtained by fine-tuning the corresponding `google/flan-t5-large` model on the CoEdIT dataset. Details of the dataset can be found in our paper and repository.
**Paper:** CoEdIT: Text Editing by Task-Specific Instruction Tuning
**Authors:** Vipul Raheja, Dhruv Kumar, Ryan Koo, Dongyeop Kang
## Model Details
### Model Description
- **Language(s) (NLP)**: English
- **Finetuned from model:** google/flan-t5-large
### Model Sources
- **Repository:** https://github.com/vipulraheja/coedit
- **Paper:** https://arxiv.org/abs/2305.09857
## How to use
We make available the models presented in our paper.
<table>
<tr>
<th>Model</th>
<th>Number of parameters</th>
</tr>
<tr>
<td>CoEdIT-large</td>
<td>770M</td>
</tr>
<tr>
<td>CoEdIT-xl</td>
<td>3B</td>
</tr>
<tr>
<td>CoEdIT-xxl</td>
<td>11B</td>
</tr>
</table>
## Uses
## Text Revision Task
Given an edit instruction and an original text, our model can generate the edited version of the text.<br>

## Usage
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("grammarly/coedit-large")
model = T5ForConditionalGeneration.from_pretrained("grammarly/coedit-large")
input_text = 'Fix grammatical errors in this sentence: When I grow up, I start to understand what he said is quite right.'
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_length=256)
edited_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
```
#### Software
https://github.com/vipulraheja/coedit
## Citation
**BibTeX:**
```
@article{raheja2023coedit,
title={CoEdIT: Text Editing by Task-Specific Instruction Tuning},
author={Vipul Raheja and Dhruv Kumar and Ryan Koo and Dongyeop Kang},
year={2023},
eprint={2305.09857},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
**APA:**
Raheja, V., Kumar, D., Koo, R., & Kang, D. (2023). CoEdIT: Text Editing by Task-Specific Instruction Tuning. ArXiv. https://arxiv.org/abs/2305.09857
|
CodeNinja1126/test-model
|
[
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 24 | null |
---
license: apache-2.0
datasets:
- asset
- wi_locness
- GEM/wiki_auto_asset_turk
- discofuse
- zaemyung/IteraTeR_plus
- jfleg
language:
- en
metrics:
- sari
- bleu
- accuracy
---
# Model Card for CoEdIT-xxl
This model was obtained by fine-tuning the corresponding google/flan-t5-xxl model on the CoEdIT dataset.
**Paper:** CoEdIT: Text Editing by Task-Specific Instruction Tuning
**Authors:** Vipul Raheja, Dhruv Kumar, Ryan Koo, Dongyeop Kang
## Model Details
### Model Description
- **Language(s) (NLP)**: English
- **Finetuned from model:** google/flan-t5-xxl
### Model Sources
- **Repository:** https://github.com/vipulraheja/coedit
- **Paper:** https://arxiv.org/abs/2305.09857
## How to use
We make available the models presented in our paper.
<table>
<tr>
<th>Model</th>
<th>Number of parameters</th>
</tr>
<tr>
<td>CoEdIT-large</td>
<td>770M</td>
</tr>
<tr>
<td>CoEdIT-xl</td>
<td>3B</td>
</tr>
<tr>
<td>CoEdIT-xxl</td>
<td>11B</td>
</tr>
</table>
## Uses
## Text Revision Task
Given an edit instruction and an original text, our model can generate the edited version of the text.<br>

## Usage
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("grammarly/coedit-xxl")
model = T5ForConditionalGeneration.from_pretrained("grammarly/coedit-xxl")
input_text = 'Fix grammatical errors in this sentence: When I grow up, I start to understand what he said is quite right.'
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_length=256)
edited_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
```
#### Software
https://github.com/vipulraheja/coedit
## Citation
**BibTeX:**
```
@article{raheja2023coedit,
title={CoEdIT: Text Editing by Task-Specific Instruction Tuning},
author={Vipul Raheja and Dhruv Kumar and Ryan Koo and Dongyeop Kang},
year={2023},
eprint={2305.09857},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
**APA:**
Raheja, V., Kumar, D., Koo, R., & Kang, D. (2023). CoEdIT: Text Editing by Task-Specific Instruction Tuning. ArXiv. https://arxiv.org/abs/2305.09857
|