| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
---|---|---|---|---|---|---|---|---|---|
CoreyMorris/ppo-rl-zoo-Pixelcopter-PLE-v0
|
CoreyMorris
| 2023-08-07T13:24:28Z | 5 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"Pixelcopter-PLE-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-22T05:58:19Z |
---
library_name: stable-baselines3
tags:
- Pixelcopter-PLE-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 440.40 +/- 169.58
name: mean_reward
verified: false
---
# **PPO** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **PPO** agent playing **Pixelcopter-PLE-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The checkpoint filename is an assumption (SB3 cards typically ship "<algo>-<env>.zip")
checkpoint = load_from_hub(repo_id="CoreyMorris/ppo-rl-zoo-Pixelcopter-PLE-v0",
                           filename="ppo-Pixelcopter-PLE-v0.zip")
model = PPO.load(checkpoint)
```
|
StofEzz/mascir_fr_wav2vec_version1000
|
StofEzz
| 2023-08-07T13:22:47Z | 78 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-31T07:58:52Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: mascir_fr_wav2vec_version1000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mascir_fr_wav2vec_version1000
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4677
- Wer: 0.37
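The Wer figure above is the word error rate. For reference, it can be computed as a word-level Levenshtein distance divided by the reference length; a minimal sketch (illustrative only, not the card's actual evaluation code):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)
```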
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.3896 | 4.0 | 500 | 3.0842 | 1.0 |
| 1.6969 | 8.0 | 1000 | 0.6327 | 0.5956 |
| 0.3056 | 12.0 | 1500 | 0.5024 | 0.49 |
| 0.1441 | 16.0 | 2000 | 0.5241 | 0.45 |
| 0.091 | 20.0 | 2500 | 0.4997 | 0.44 |
| 0.0676 | 24.0 | 3000 | 0.5173 | 0.4456 |
| 0.0603 | 28.0 | 3500 | 0.4487 | 0.4122 |
| 0.0378 | 32.0 | 4000 | 0.4554 | 0.3933 |
| 0.0328 | 36.0 | 4500 | 0.4395 | 0.3822 |
| 0.0275 | 40.0 | 5000 | 0.4910 | 0.3889 |
| 0.0198 | 44.0 | 5500 | 0.4861 | 0.3722 |
| 0.019 | 48.0 | 6000 | 0.4677 | 0.37 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
Aspik101/WizardVicuna-Uncensored-3B-instruct-PL-lora_unload
|
Aspik101
| 2023-08-07T13:19:39Z | 1,481 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"facebook",
"meta",
"llama-2",
"pl",
"dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-07T13:12:42Z |
---
language:
- pl
datasets:
- Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish
license: other
model_type: llama-2
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---
|
Aspik101/WizardVicuna-Uncensored-3B-instruct-PL-lora_GGML
|
Aspik101
| 2023-08-07T13:12:42Z | 0 | 4 | null |
[
"facebook",
"meta",
"pytorch",
"llama",
"llama-2",
"text-generation",
"pl",
"dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish",
"license:other",
"region:us"
] |
text-generation
| 2023-08-07T13:09:18Z |
---
language:
- pl
datasets:
- Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish
license: other
model_type: llama-2
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---
|
slone/mbart-large-51-mul-myv-v1
|
slone
| 2023-08-07T13:11:52Z | 117 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"mbart",
"text2text-generation",
"erzya",
"mordovian",
"translation",
"myv",
"ru",
"fi",
"de",
"es",
"en",
"hi",
"zh",
"tr",
"uk",
"fr",
"ar",
"dataset:slone/myv_ru_2022",
"dataset:yhavinga/ccmatrix",
"arxiv:2209.09368",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-09-15T06:13:29Z |
---
language:
- myv
- ru
- fi
- de
- es
- en
- hi
- zh
- tr
- uk
- fr
- ar
tags:
- erzya
- mordovian
- translation
license: cc-by-sa-4.0
datasets:
- slone/myv_ru_2022
- yhavinga/ccmatrix
---
This is a model to translate texts from the Erzya language (`myv`, cyrillic script) to 11 other languages: `ru,fi,de,es,en,hi,zh,tr,uk,fr,ar`. See its [demo](https://huggingface.co/spaces/slone/myv-translation-2022-demo)!
It is described in the paper [The first neural machine translation system for the Erzya language](https://arxiv.org/abs/2209.09368).
This model is based on [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50), but with updated vocabulary and checkpoint:
- Added an extra language token `myv_XX` and 19K new BPE tokens for the Erzya language;
- Fine-tuned to translate to Erzya: first from Russian, then from all 11 languages.
The following code can be used to run translation using the model:
```python
from transformers import MBartForConditionalGeneration, MBart50Tokenizer
def fix_tokenizer(tokenizer):
""" Add a new language token to the tokenizer vocabulary (this should be done each time after its initialization) """
old_len = len(tokenizer) - int('myv_XX' in tokenizer.added_tokens_encoder)
tokenizer.lang_code_to_id['myv_XX'] = old_len-1
tokenizer.id_to_lang_code[old_len-1] = 'myv_XX'
tokenizer.fairseq_tokens_to_ids["<mask>"] = len(tokenizer.sp_model) + len(tokenizer.lang_code_to_id) + tokenizer.fairseq_offset
tokenizer.fairseq_tokens_to_ids.update(tokenizer.lang_code_to_id)
tokenizer.fairseq_ids_to_tokens = {v: k for k, v in tokenizer.fairseq_tokens_to_ids.items()}
if 'myv_XX' not in tokenizer._additional_special_tokens:
tokenizer._additional_special_tokens.append('myv_XX')
tokenizer.added_tokens_encoder = {}
def translate(text, model, tokenizer, src='ru_RU', trg='myv_XX', max_length='auto', num_beams=3, repetition_penalty=5.0, train_mode=False, n_out=None, **kwargs):
tokenizer.src_lang = src
encoded = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
if max_length == 'auto':
max_length = int(32 + 1.5 * encoded.input_ids.shape[1])
if train_mode:
model.train()
else:
model.eval()
generated_tokens = model.generate(
**encoded.to(model.device),
forced_bos_token_id=tokenizer.lang_code_to_id[trg],
max_length=max_length,
num_beams=num_beams,
repetition_penalty=repetition_penalty,
num_return_sequences=n_out or 1,
**kwargs
)
out = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
if isinstance(text, str) and n_out is None:
return out[0]
return out
mname = 'slone/mbart-large-51-mul-myv-v1'
model = MBartForConditionalGeneration.from_pretrained(mname)
tokenizer = MBart50Tokenizer.from_pretrained(mname)
fix_tokenizer(tokenizer)
print(translate('Привет, собака!', model, tokenizer, src='ru_RU', trg='myv_XX'))
# Шумбрат, киска!  # indeed, this is how you say "dog" in Erzya
print(translate('Hello, doggy!', model, tokenizer, src='en_XX', trg='myv_XX'))
# Шумбрат, киска!
```
|
Aspik101/WizardVicuna-Uncensored-3B-instruct-PL-lora_GPTQ
|
Aspik101
| 2023-08-07T13:09:17Z | 5 | 0 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-2",
"pl",
"dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-07T13:07:32Z |
---
language:
- pl
datasets:
- Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish
license: other
model_type: llama-2
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---
|
TheRains/yt-special-batch4-2lr5-small
|
TheRains
| 2023-08-07T13:07:15Z | 117 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"dataset:yt",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-07T05:30:15Z |
---
license: apache-2.0
base_model: openai/whisper-small
tags:
- whisper-event
- generated_from_trainer
datasets:
- yt
metrics:
- wer
model-index:
- name: Whisper Small Indonesian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: yt id
type: yt
metrics:
- name: Wer
type: wer
value: 51.26775176707088
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Indonesian
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the yt id dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7838
- Wer: 51.2678
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
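The `linear` scheduler with 500 warmup steps ramps the learning rate from 0 up to 2e-05, then decays it linearly to 0 at step 5000. A minimal sketch of that schedule, assuming it mirrors the behavior of `get_linear_schedule_with_warmup` in `transformers`:

```python
def linear_lr(step: int, base_lr: float = 2e-05,
              warmup_steps: int = 500, total_steps: int = 5000) -> float:
    """Linear warmup followed by linear decay to zero."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps  # ramp up from 0
    # decay from base_lr (at the end of warmup) to 0 at total_steps
    remaining = max(0, total_steps - step)
    return base_lr * remaining / (total_steps - warmup_steps)
```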
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 1.134 | 0.09 | 1000 | 1.0407 | 97.9768 |
| 0.8923 | 0.17 | 2000 | 0.9185 | 89.0539 |
| 0.9713 | 0.26 | 3000 | 0.8536 | 58.9132 |
| 0.7834 | 0.34 | 4000 | 0.7838 | 51.2678 |
| 0.78 | 0.43 | 5000 | 0.7438 | 52.1951 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
mfrashad/hassakuv13
|
mfrashad
| 2023-08-07T13:01:36Z | 18 | 0 |
diffusers
|
[
"diffusers",
"stablediffusionapi.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-07T12:56:36Z |
---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
duplicated_from: stablediffusionapi/hassaku-hentai-model
---
|
Yntec/CartoonStyleClassic
|
Yntec
| 2023-08-07T13:00:22Z | 367 | 4 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"Zovya",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-06T08:16:22Z |
---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
- Zovya
---
# Cartoon Style Classic.
Preview and prompt:

pretty cute little girl in tricycle, Screenshot of an surreal jean 70s round minimalist architecture, Sharp light, 35mm still from a sci fi blockbuster color movie made in 2022, beautiful portrait, Dorothy, set in 1860, in front of a spaceship that has just landed on an alien planet, are all wearing, a robot stands nearby
This is the model with the kl-f8-anime2 VAE baked in.
Original pages:
https://civitai.com/models/33030/cartoon-style-classic
|
Q93WnX4FUHx2mJ/e5-multi-large-sbert
|
Q93WnX4FUHx2mJ
| 2023-08-07T12:59:15Z | 17 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-08-07T12:34:44Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# Q93WnX4FUHx2mJ/e5-multi-large-sbert
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Q93WnX4FUHx2mJ/e5-multi-large-sbert')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Q93WnX4FUHx2mJ/e5-multi-large-sbert')
model = AutoModel.from_pretrained('Q93WnX4FUHx2mJ/e5-multi-large-sbert')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
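The embeddings produced above can be compared with cosine similarity for semantic search or clustering (the `sentence-transformers` library also ships a `util.cos_sim` helper). A minimal, dependency-free sketch of that comparison:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors given as plain lists."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```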
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Q93WnX4FUHx2mJ/e5-multi-large-sbert)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
Q93WnX4FUHx2mJ/e5-multi-base-sbert
|
Q93WnX4FUHx2mJ
| 2023-08-07T12:45:53Z | 19 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-08-07T12:34:59Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# Q93WnX4FUHx2mJ/e5-multi-base-sbert
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Q93WnX4FUHx2mJ/e5-multi-base-sbert')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Q93WnX4FUHx2mJ/e5-multi-base-sbert')
model = AutoModel.from_pretrained('Q93WnX4FUHx2mJ/e5-multi-base-sbert')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Q93WnX4FUHx2mJ/e5-multi-base-sbert)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
heegyu/AULM-5.8b-v0804-hf
|
heegyu
| 2023-08-07T12:41:55Z | 2,238 | 2 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"polyglot-ko-5.8b",
"ko",
"dataset:changpt/ko-lima-vicuna",
"dataset:HAERAE-HUB/KoInstruct-QA",
"dataset:nlpai-lab/kullm-v2",
"dataset:heegyu/korquad-chat-v1",
"dataset:KETI-AIR/kowow",
"dataset:NomaDamas/Ko-StrategyQA",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-07T11:21:14Z |
---
license: apache-2.0
language:
- ko
datasets:
- changpt/ko-lima-vicuna
- HAERAE-HUB/KoInstruct-QA
- nlpai-lab/kullm-v2
- heegyu/korquad-chat-v1
- KETI-AIR/kowow
- NomaDamas/Ko-StrategyQA
tags:
- polyglot-ko-5.8b
---
[EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) fine-tuned for 3 epochs on publicly released Korean instruction data.
### Datasets used
- [evol-instruct](https://github.com/lcw99/evolve-instruct/)
- [koalpaca v1.1](https://raw.githubusercontent.com/Beomi/KoAlpaca/main/KoAlpaca_v1.1.jsonl)
- [kullm v2](https://huggingface.co/datasets/nlpai-lab/kullm-v2) - only the dolly and gpt4all subsets
- [changpt/ko-lima-vicuna](https://huggingface.co/datasets/changpt/ko-lima-vicuna)
- [HAERAE-HUB/KoInstruct-QA](https://huggingface.co/datasets/HAERAE-HUB/KoInstruct-QA)
- [heegyu/korquad-chat-v1](https://huggingface.co/datasets/heegyu/korquad-chat-v1)
- [AIRC-KETI/kowow](https://github.com/AIRC-KETI/kowow)
- [NomaDamas/Ko-StrategyQA](https://huggingface.co/datasets/PCEO-AI-CLUB/Ko-StrategyQA)
Data loading & merging code: https://github.com/HeegyuKim/open-korean-instructions/blob/main/parse.py
### Base prompt
News source: https://n.news.naver.com/mnews/article/437/0000354456?sid=101
```
### Human:
글이 너무 길어요, 내용을 요약해주세요
### Input:
[앵커]
이렇게 더울 때 아이스크림 많이 찾게 되는데, 가격표 보면 손이 쉽게 가지 않습니다. 1년 전보다 10% 넘게 올랐습니다. 정부 요청으로 라면이나 과자 가격은 내려갔는데, 아이스크림은 꿈쩍 않고 있습니다.
이상화 기자가 취재했습니다.
[기자]
찜통 더위에 아이스크림 코너를 찾았다가 부쩍 오른 가격에 놀랐다는 소비자들이 적지 않습니다.
[조승건/서울 상암동 : 좀 합리적으로 올렸으면 좋겠어요. 올리자 해서 올려버리고…]
[이영빈/서울 목동 : (아이스크림) 콘 종류가 가격이 많이 오른 것 같아서 편의점에서 사기가 좀 부담스러워지는…]
지난달 기준으로 아이스크림 값은 1년전에 비해 10.7% 올랐습니다.
평균 물가상승률보다 5배 가량 높습니다.
올 들어 1000원짜리 아이스크림은 1200원으로 올랐습니다.
그런데 지난달 한 대형 아이스크림 업체가 또다시 편의점 공급가격을 올리기로 했습니다.
정부가 인하 요청을 한 뒤 몇몇 라면과 빵, 과자 가격이 내려간 것과 반대 흐름입니다.
[아이스크림 업체 관계자 : 원가 부담이 되기 때문에 가격 인하를 하지 않은 거고요.]
편의점들은 일단 인상분을 감수하고 소비자가격을 안 올리기로 했지만, 조만간 반영할 것으로 보입니다.
10월부턴 우윳값이 오르기 때문에 우유를 원료로 쓰는 아이스크림은 더 오를 수 있습니다.
여기에 다른 가공식품과 외식가격까지 뛰다 보니 소비자들은 2%대로 둔화된 물가를 체감하기 어렵습니다.
[허은미/경기 고양시 덕은동 : 저는 아이가 있으니까 라면이나 음료수나 우유, 고기 위주로 사게 되는데 가격 차이를 딱히 못 느끼겠어요.]
실제 장기 물가 흐름을 가늠하기 위해 변동폭이 큰 농산물과 석유류를 빼고 매기는 '근원물가'는 올 들어 4.5% 올랐습니다.
외환위기가 있던 1998년 6.8%를 기록한 이후 25년 만에 가장 높은 수치인데, 가공식품과 외식비, 공공요금 등이 오른 영향입니다.
(영상디자인 : 이정회)
```
Generated output: ```이 기사에서는 뜨거운 날씨에 아이스크림이 많이 찾는 여름에 아이스크림 값이 인상된 것에 대해 소비자들의 불만이 높으며 아이스크림 가격이 비싼 원인과 가격 결정의 문제를 분석하였습니다. 또한, 다른 가공식품과 외식 가격 상승, 인플레이션으로 인한 체감 물가 상승 등에 대해 설명하였고, 정부가 가공식품과외식 가격인하 요청 이후 몇몇 라면과 빵, 과자 가격이 인하된 것과 대조적으로 아이스크림 가격이 지속적으로 인상된 것에 대해 언급하고 있습니다.```
### Another example prompt
```
당신은 한국어 챗봇 아우름입니다. 다음과 같은 조건을 만족하여 사용자에게 친절한 대답을 해야합니다.
1. 자연스러운 대화: 사용자와 자연스럽고 유창한 대화를 나누도록 노력해주세요.
2. 다양한 주제에 대한 지식: 다양한 분야의 지식을 포함하여 사용자의 질문에 정확하고 포괄적으로 답변할 수 있도록 학습시켜주세요.
3. 정확한 정보 제공: 사용자가 요청한 정보에 대해 정확하고 신뢰할 수 있는 답변을 제공하도록 해주세요.
4. 사용자 의도 파악 능력: 사용자의 의도와 요구를 이해하고 그에 맞게 대응할 수 있는 능력을 갖추도록 개발해주세요.
5. 적절한 감정 인식: 사용자의 감정을 파악하고 적절한 감정 표현과 대화 방식을 선택하여 상호작용할 수 있도록 지도해주세요.
### Human:
안녕! 반가워요,
### Assistant:
안녕하세요. 저는 아주대학교의 한국어 챗봇 아우름입니다.
당신이 궁금하거나, 필요로 하는 정보에 대해서 최선을 다해 답변해드릴게요.<|endoftext|>
### Human:
만약 내일 지구가 멸망한다면 어떡하지??
### Assistant:
```
Generated output:
```안녕하세요! 아주대학교의 한국어 챗봇 아우름입니다. 만약 내일 지구가 멸망한다면, 여러 가지 고민이 생기겠지만, 가장 먼저 해야할 일은 최선의 선택을 해야합니다. 가장 합리적인 선택은, 지구가 멸망하기 전에 사랑하는 사람들과 함께하는 시간을 보내거나, 더 많이 웃고, 즐거운 활동을 하며, 다음 세상을 위한 준비를 하는 것이라 생각됩니다. 여러분의 시간을 더 소중하게 사용하시기 바랍니다.```
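The prompt layout shown above (`### Human:` / `### Input:` / `### Assistant:` blocks, with an optional system preamble) can be assembled with a small helper. `build_prompt` and its argument names are hypothetical, for illustration only, and not part of the model's API:

```python
def build_prompt(human: str, context: str = "", system: str = "") -> str:
    """Assemble a prompt in the ### Human / ### Input / ### Assistant layout
    used in the examples above (hypothetical helper, not an official API)."""
    parts = []
    if system:
        parts.append(system)                    # optional system preamble
    parts.append("### Human:\n" + human)        # user turn
    if context:
        parts.append("### Input:\n" + context)  # optional grounding text
    parts.append("### Assistant:\n")            # generation starts here
    return "\n".join(parts)
```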
|
tilyupo/t5-small-trivia-gpu-ca2q
|
tilyupo
| 2023-08-07T12:39:23Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"base_model:google/flan-t5-small",
"base_model:finetune:google/flan-t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-06T16:20:10Z |
---
license: apache-2.0
base_model: google/flan-t5-small
tags:
- generated_from_keras_callback
model-index:
- name: t5-small-trivia-gpu-ca2q
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# t5-small-trivia-gpu-ca2q
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.2675
- Validation Loss: 1.3898
- Epoch: 3
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adafactor', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 0.00014285714, 'beta_2_decay': -0.8, 'epsilon_1': 1e-30, 'epsilon_2': 0.001, 'clip_threshold': 1.0, 'relative_step': False}
- training_precision: mixed_bfloat16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.7429 | 1.4649 | 0 |
| 1.4976 | 1.4196 | 1 |
| 1.3663 | 1.3913 | 2 |
| 1.2675 | 1.3898 | 3 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.13.0
- Datasets 2.14.3
- Tokenizers 0.13.3
|
bvantuan/camembert-mwer
|
bvantuan
| 2023-08-07T12:35:04Z | 111 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"safetensors",
"camembert",
"token-classification",
"mwe",
"fr",
"dataset:Sequoia",
"arxiv:1911.03894",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-04T09:42:32Z |
---
language: fr
license: mit
datasets:
- Sequoia
widget:
- text: Aucun financement politique occulte n'a pu être mis en évidence.
- text: L'excrétion de l'acide zolédronique dans le lait maternel n'est pas connue.
pipeline_tag: token-classification
tags:
- mwe
---
# Multiword expression recognition
A multiword expression (MWE) is a combination of words which exhibits lexical, morphosyntactic, semantic, pragmatic and/or statistical idiosyncrasies (Baldwin and Kim, 2010). The objective of Multiword Expression Recognition (MWER) is to automate the identification of these MWEs.
## Model description
`camembert-mwer` is a model that was fine-tuned from [CamemBERT](https://huggingface.co/camembert/camembert-large) as a token classification task specifically on the [Sequoia](http://deep-sequoia.inria.fr/) dataset for the MWER task.
## How to use
You can use this model directly with a pipeline for token classification:
```python
>>> from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline
>>> tokenizer = AutoTokenizer.from_pretrained("bvantuan/camembert-mwer")
>>> model = AutoModelForTokenClassification.from_pretrained("bvantuan/camembert-mwer")
>>> mwe_classifier = pipeline('token-classification', model=model, tokenizer=tokenizer)
>>> sentence = "Pour ce premier rendez-vous, l'animateur a pu faire partager sa passion et présenter quelques oeuvres pour mettre en bouche les participants."
>>> mwes = mwe_classifier(sentence)
[{'entity': 'B-MWE',
'score': 0.99492574,
'index': 4,
'word': '▁rendez',
'start': 15,
'end': 22},
{'entity': 'I-MWE',
'score': 0.9344883,
'index': 5,
'word': '-',
'start': 22,
'end': 23},
{'entity': 'I-MWE',
'score': 0.99398583,
'index': 6,
'word': 'vous',
'start': 23,
'end': 27},
{'entity': 'B-VID',
'score': 0.9827843,
'index': 22,
'word': '▁mettre',
'start': 106,
'end': 113},
{'entity': 'I-VID',
'score': 0.9835186,
'index': 23,
'word': '▁en',
'start': 113,
'end': 116},
{'entity': 'I-VID',
'score': 0.98324823,
'index': 24,
'word': '▁bouche',
'start': 116,
'end': 123}]
>>> mwe_classifier.group_entities(mwes)
[{'entity_group': 'MWE',
'score': 0.9744666,
'word': 'rendez-vous',
'start': 15,
'end': 27},
{'entity_group': 'VID',
'score': 0.9831837,
'word': 'mettre en bouche',
'start': 106,
'end': 123}]
```
## Training data
The Sequoia dataset is divided into train/dev/test sets:
| | Sequoia | train | dev | test |
| :----: | :---: | :----: | :---: | :----: |
| #sentences | 3099 | 1955 | 273 | 871 |
| #MWEs | 3450 | 2170 | 306 | 974 |
| #Unseen MWEs | _ | _ | 100 | 300 |
This dataset has 6 distinct categories:
* MWE: Non-verbal MWEs (e.g. **à peu près**)
* IRV: Inherently reflexive verb (e.g. **s'occuper**)
* LVC.cause: Causative light-verb construction (e.g. **causer** le **bouleversement**)
* LVC.full: Light-verb construction (e.g. **avoir pour but** de )
* MVC: Multi-verb construction (e.g. **faire remarquer**)
* VID: Verbal idiom (e.g. **voir le jour**)
## Training procedure
### Preprocessing
The employed sequential labeling scheme for this task is the Inside–outside–beginning (IOB2) methodology.
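For illustration, grouping IOB2 tags back into labeled spans (the same idea as the `group_entities` call shown earlier) can be sketched as follows; `iob2_spans` is a hypothetical helper, not part of this repository:

```python
def iob2_spans(tokens, tags):
    """Group parallel (token, IOB2-tag) sequences into (label, text) spans.
    A 'B-X' tag opens a span, following 'I-X' tags extend it, 'O' closes it."""
    spans, current = [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                spans.append(current)
            current = (tag[2:], [tok])
        elif tag.startswith("I-") and current and current[0] == tag[2:]:
            current[1].append(tok)
        else:  # 'O', or an I- tag that does not continue the open span
            if current:
                spans.append(current)
            current = None
    if current:
        spans.append(current)
    return [(label, " ".join(toks)) for label, toks in spans]
```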
### Fine-tuning
The model was trained on the train+dev sets with learning rate $3 \times 10^{-5}$, batch size 10, over the course of 15 epochs.
### Evaluation results
On the test set, this model achieves the following results:
<table>
<tr>
<td colspan="3">Global MWE-based</td>
<td colspan="3">Unseen MWE-based</td>
</tr>
<tr>
<td>Precision</td><td>Recall</td><td>F1</td>
<td>Precision</td><td>Recall</td><td>F1</td>
</tr>
<tr>
<td>83.78</td><td>83.78</td><td>83.78</td>
<td>57.05</td><td>60.67</td><td>58.80</td>
</tr>
</table>
### BibTeX entry and citation info
```bibtex
@article{martin2019camembert,
title={CamemBERT: a tasty French language model},
author={Martin, Louis and Muller, Benjamin and Su{\'a}rez, Pedro Javier Ortiz and Dupont, Yoann and Romary, Laurent and de La Clergerie, {\'E}ric Villemonte and Seddah, Djam{\'e} and Sagot, Beno{\^\i}t},
journal={arXiv preprint arXiv:1911.03894},
year={2019}
}
@article{candito2020french,
title={A French corpus annotated for multiword expressions and named entities},
author={Candito, Marie and Constant, Mathieu and Ramisch, Carlos and Savary, Agata and Guillaume, Bruno and Parmentier, Yannick and Cordeiro, Silvio Ricardo},
journal={Journal of Language Modelling},
volume={8},
number={2},
year={2020},
publisher={Polska Akademia Nauk. Instytut Podstaw Informatyki PAN}
}
```
|
Yntec/photoMovieX
|
Yntec
| 2023-08-07T12:33:44Z | 9,216 | 6 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"MagicArt35",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-05T09:53:28Z |
---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
- MagicArt35
---
# Photo Movie X
Preview and prompt:

pretty cute little girl in tricycle, Screenshot of an surreal jean 70s round minimalist architecture, Sharp light, 35mm still from a sci fi blockbuster color movie made in 2022, beautiful portrait, Dorothy, set in 1860, in front of a spaceship that has just landed on an alien planet, are all wearing, a robot stands nearby
Original page:
https://civitai.com/models/94687?modelVersionId=101000
|
KallistiTMR/llama-2-7b-chat-wiz-k16-13
|
KallistiTMR
| 2023-08-07T12:28:27Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T03:44:31Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
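For reference, the flags above map onto a `transformers` `BitsAndBytesConfig` roughly as follows (a sketch; only the non-default fields listed above are set explicitly):

```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the quantization flags listed in this card
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```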
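These flags correspond one-to-one to the keyword arguments of `transformers.BitsAndBytesConfig`. A minimal sketch of that mapping (a plain dict so it runs standalone; the commented construction at the end assumes `transformers` and `torch` are installed, with `"float16"` standing in for `torch.float16`):

```python
# The quantization settings listed above, expressed as the keyword arguments
# one would pass to transformers.BitsAndBytesConfig. Values are copied from
# the card; this is an illustrative sketch, not this model's training code.
bnb_kwargs = {
    "load_in_8bit": False,
    "load_in_4bit": True,
    "llm_int8_threshold": 6.0,
    "llm_int8_skip_modules": None,
    "llm_int8_enable_fp32_cpu_offload": False,
    "llm_int8_has_fp16_weight": False,
    "bnb_4bit_quant_type": "nf4",
    "bnb_4bit_use_double_quant": False,
    "bnb_4bit_compute_dtype": "float16",  # torch.float16 in the real config
}

# With transformers installed this would become:
# from transformers import BitsAndBytesConfig
# import torch
# config = BitsAndBytesConfig(**{**bnb_kwargs, "bnb_4bit_compute_dtype": torch.float16})
```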
### Framework versions
- PEFT 0.4.0
|
harshV27/Falcon-7b-chat-adapters
|
harshV27
| 2023-08-07T12:26:57Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-07T11:21:22Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
FatemahAlsubaiei/AraELECTRA-CGSQuAD-QA-Model2
|
FatemahAlsubaiei
| 2023-08-07T12:11:35Z | 236 | 0 |
transformers
|
[
"transformers",
"pytorch",
"electra",
"question-answering",
"ar",
"dataset:FatemahAlsubaiei/CGSQuAD",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-08-07T11:52:43Z |
---
datasets:
- FatemahAlsubaiei/CGSQuAD
language:
- ar
metrics:
- f1
- exact_match
library_name: transformers
pipeline_tag: question-answering
---
|
jakobkruse/ppo-Huggy
|
jakobkruse
| 2023-08-07T12:07:54Z | 10 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-08-07T12:07:49Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: jakobkruse/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
gubartz/st_minilm_pmc_sents_full
|
gubartz
| 2023-08-07T12:04:25Z | 2 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-08-07T12:04:06Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 4328 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.BatchAllTripletLoss.BatchAllTripletLoss`
Parameters of the fit()-Method:
```
{
"epochs": 20,
"evaluation_steps": 0,
"evaluator": "sentence_transformers.evaluation.TripletEvaluator.TripletEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 8656,
"weight_decay": 0.01
}
```
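`BatchAllTripletLoss` averages a hinge-style triplet term over all valid anchor/positive/negative combinations in a batch. A minimal, pure-Python sketch of the per-triplet term (illustrative only; the library version works on label-based batches and uses a default margin of 5):

```python
import math

def euclidean(u, v):
    # Euclidean distance between two embedding vectors.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def triplet_term(anchor, positive, negative, margin=5.0):
    # Hinge loss: push the negative at least `margin` farther away than the positive.
    return max(0.0, euclidean(anchor, positive) - euclidean(anchor, negative) + margin)
```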
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
AlexDr/model1
|
AlexDr
| 2023-08-07T11:58:45Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"legal",
"climate",
"uk",
"dataset:Anthropic/hh-rlhf",
"dataset:Open-Orca/OpenOrca",
"license:apache-2.0",
"region:us"
] | null | 2023-08-07T11:55:48Z |
---
license: apache-2.0
datasets:
- Anthropic/hh-rlhf
- Open-Orca/OpenOrca
language:
- uk
metrics:
- bertscore
- accuracy
library_name: adapter-transformers
tags:
- legal
- climate
---
|
dmitrijsk/Bloomz_marketing_tutorial
|
dmitrijsk
| 2023-08-07T11:39:13Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-07T11:39:08Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
mrmegatelo/ppo-LunarLander-v2
|
mrmegatelo
| 2023-08-07T11:38:10Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-07T11:37:54Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 292.16 +/- 18.68
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
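The `mean_reward` reported in the metadata above is the mean and standard deviation of episodic returns over evaluation episodes (the style produced by helpers such as Stable-Baselines3's `evaluate_policy`); a stdlib sketch of that summary:

```python
import statistics

def summarize_returns(episode_returns):
    # Mean and population standard deviation over evaluation episodes,
    # matching the "mean +/- std" style of the reported metric.
    return statistics.mean(episode_returns), statistics.pstdev(episode_returns)
```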
|
BabaYaga048/MC_Reinforce
|
BabaYaga048
| 2023-08-07T11:24:01Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-07T11:23:50Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: MC_Reinforce
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
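Reinforce is a Monte-Carlo policy-gradient method: it weights each action's log-probability by the discounted return from that step onward. A minimal sketch of the return computation used in such implementations (illustrative; not this model's exact code):

```python
def discounted_returns(rewards, gamma=0.99):
    # G_t = r_t + gamma * G_{t+1}, computed backwards over one episode.
    returns = []
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    returns.reverse()
    return returns
```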
|
Evan-Lin/Bart-abs-yelp-allure-rouge
|
Evan-Lin
| 2023-08-07T11:18:31Z | 47 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"trl",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2023-08-07T04:58:35Z |
---
license: apache-2.0
tags:
- trl
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/lvwerra/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value function or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="Evan-Lin/Bart-abs-yelp-allure-rouge")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("Evan-Lin/Bart-abs-yelp-allure-rouge")
model = AutoModelForCausalLMWithValueHead.from_pretrained("Evan-Lin/Bart-abs-yelp-allure-rouge")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
IlyaGusev/saiga2_13b_lora
|
IlyaGusev
| 2023-08-07T11:16:17Z | 0 | 29 | null |
[
"conversational",
"ru",
"dataset:IlyaGusev/ru_turbo_alpaca",
"dataset:IlyaGusev/ru_turbo_saiga",
"dataset:IlyaGusev/ru_sharegpt_cleaned",
"dataset:IlyaGusev/oasst1_ru_main_branch",
"dataset:IlyaGusev/ru_turbo_alpaca_evol_instruct",
"dataset:lksy/ru_instruct_gpt4",
"license:cc-by-4.0",
"region:us"
] |
text-generation
| 2023-07-24T20:59:55Z |
---
datasets:
- IlyaGusev/ru_turbo_alpaca
- IlyaGusev/ru_turbo_saiga
- IlyaGusev/ru_sharegpt_cleaned
- IlyaGusev/oasst1_ru_main_branch
- IlyaGusev/ru_turbo_alpaca_evol_instruct
- lksy/ru_instruct_gpt4
language:
- ru
pipeline_tag: conversational
license: cc-by-4.0
---
# Saiga2 13B, Russian LLaMA2-based chatbot
Based on [LLaMA-2 13B HF](https://huggingface.co/meta-llama/Llama-2-13b-hf).
This is an adapter-only version.
Colab: [link](https://colab.research.google.com/drive/16xn7Zi2jli_9iPuKeM9wx53wkouzBQIf)
Llama.cpp version: [link](https://huggingface.co/IlyaGusev/saiga2_13b_ggml)
Training code: [link](https://github.com/IlyaGusev/rulm/tree/master/self_instruct)
**WARNING 1**: Run with the development version of `transformers` and `peft`!
**WARNING 2**: Avoid using V100 (in Colab, for example). Outputs are much worse in this case.
**WARNING 3**: You can use the [Llama-2-13B-fp16](https://huggingface.co/TheBloke/Llama-2-13B-fp16) base model instead.
```python
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
MODEL_NAME = "IlyaGusev/saiga2_13b_lora"
DEFAULT_MESSAGE_TEMPLATE = "<s>{role}\n{content}</s>\n"
DEFAULT_SYSTEM_PROMPT = "Ты — Сайга, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им."
class Conversation:
def __init__(
self,
message_template=DEFAULT_MESSAGE_TEMPLATE,
system_prompt=DEFAULT_SYSTEM_PROMPT,
start_token_id=1,
bot_token_id=9225
):
self.message_template = message_template
self.start_token_id = start_token_id
self.bot_token_id = bot_token_id
self.messages = [{
"role": "system",
"content": system_prompt
}]
def get_start_token_id(self):
return self.start_token_id
def get_bot_token_id(self):
return self.bot_token_id
def add_user_message(self, message):
self.messages.append({
"role": "user",
"content": message
})
def add_bot_message(self, message):
self.messages.append({
"role": "bot",
"content": message
})
def get_prompt(self, tokenizer):
final_text = ""
for message in self.messages:
message_text = self.message_template.format(**message)
final_text += message_text
final_text += tokenizer.decode([self.start_token_id, self.bot_token_id])
return final_text.strip()
def generate(model, tokenizer, prompt, generation_config):
data = tokenizer(prompt, return_tensors="pt")
data = {k: v.to(model.device) for k, v in data.items()}
output_ids = model.generate(
**data,
generation_config=generation_config
)[0]
output_ids = output_ids[len(data["input_ids"][0]):]
output = tokenizer.decode(output_ids, skip_special_tokens=True)
return output.strip()
config = PeftConfig.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
config.base_model_name_or_path,
load_in_8bit=True,
torch_dtype=torch.float16,
device_map="auto"
)
model = PeftModel.from_pretrained(
model,
MODEL_NAME,
torch_dtype=torch.float16
)
model.eval()
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, use_fast=False)
generation_config = GenerationConfig.from_pretrained(MODEL_NAME)
print(generation_config)
inputs = ["Почему трава зеленая?", "Сочини длинный рассказ, обязательно упоминая следующие объекты. Дано: Таня, мяч"]
for inp in inputs:
conversation = Conversation()
conversation.add_user_message(inp)
prompt = conversation.get_prompt(tokenizer)
output = generate(model, tokenizer, prompt, generation_config)
print(inp)
print(output)
print()
print("==============================")
print()
```
Examples:
```
User: Почему трава зеленая?
Saiga: Зеленый цвет растений обусловлен наличием пигмента хлорофилла в их листьях. Хлорофилл содержит железо, которое способно поглощать световой волны красного, оранжевого и желтого цвета, а отражает световые волны зеленого цвета. Таким образом, зеленый цвет является результатом процесса фотосинтеза, который происходит в листьях растений.
```
```
User: Сочини длинный рассказ, обязательно упоминая следующие объекты. Дано: Таня, мяч
Saiga: Однажды в парке на улице Ленина произошло невероятное событие. В тот день, когда все жители города были заняты своими делами, Таня решила отправиться на прогулку. Она была очень веселой и радостной, потому что еще накануне она получила отличные оценки за успешную защиту курсовой работы.
Перед тем как выйти из дома, Тане пришла идея взять с собой мяч для игры в футбол. Она знала, что в парке есть хорошая площадка для этого вида спорта, поэтому решила воспользоваться этим прекрасным временем.
Когда Таня подошла к парку, она заметила множество людей, которые играли в различные виды спорта. Некоторые стояли у столов для настольного тенниса, другие – на кортах для баскетбола или волейбола. Но ни один из них не был занят футболом.
Так как Тане было очень хотелось сыграть в футбол, она решила начать самостоятельную игру. Она бросила мяч в воздух и стала бегать за ним, пытаясь его достать. Однако, каждый раз, когда она подбиралась к мячу, он быстро летел далеко от нее.
Вскоре к Тане подошел молодой парень, который также играл в футбол. Он предложил ей стать её партнером и помочь ей в игре. Тане была рада этому предложению, так как она не могла играть одна.
Игра продолжалась уже несколько часов. Молодой парень и Таня перемещались по всей площадке, стараясь забить голы друг другу. Их команды менялись каждый раз, когда они забивали мяч в противоположные ворота.
Но внезапно, когда Тане думала, что игра уже закончена, она заметила, что мяч исчез. Она и ее партнер начали искать его по всей площадке, но никаких следов не находили.
Наконец, после долгого поисков, Тане заметила, что мяч находится в глубине кустов. Она и ее партнер решили проникнуть внутрь, чтобы найти его.
Через некоторое время они нашли мяч, который был спрятан в густом кусте. Когда они выбрались из него, они обнаружили, что их путь ведет к большому дереву.
Дерево было очень высоким и красивым, и Тане сразу же поняла, что это было место, где она должна остаться. Она решила, что будет здесь играть в футбол каждую субботу вместе со своим новым партнером.
Так произошло, что благодаря этой случайной встрече, Тане нашла своего нового друга и новое увлечение. С тех пор она проводила каждую субботу в этом парке, играя в футбол и наслаждаясь прекрасными видами.
```
v1:
- dataset code revision 7712a061d993f61c49b1e2d992e893c48acb3a87
- wandb [link](https://wandb.ai/ilyagusev/rulm_self_instruct/runs/848s9kbi)
- 7 datasets: ru_turbo_alpaca, ru_turbo_saiga, ru_sharegpt_cleaned, oasst1_ru_main_branch, gpt_roleplay_realm, ru_turbo_alpaca_evol_instruct (iteration 1/2), ru_instruct_gpt4
- Datasets merging script: [create_chat_set.py](https://github.com/IlyaGusev/rulm/blob/e4238fd9a196405b566a2d5838ab44b7a0f4dc31/self_instruct/src/data_processing/create_chat_set.py)
- saiga2_13b vs saiga13b: 79-2-95
|
Liea/q-FrozenLake-v1-4x4-noSlippery
|
Liea
| 2023-08-07T11:11:31Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-07T11:11:27Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="Liea/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
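Once the Q-table is loaded, the agent simply acts greedily per state. A hedged pure-Python sketch (assumes `q_table` is indexed as `q_table[state][action]`, as in the course implementation):

```python
def greedy_action(q_table, state):
    # Pick the action with the highest Q-value for the given state.
    row = q_table[state]
    return max(range(len(row)), key=lambda a: row[a])
```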
|
prantik-s/realistic_vision
|
prantik-s
| 2023-08-07T10:42:05Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-07T10:34:55Z |
---
license: creativeml-openrail-m
---
|
AtilliO/chopper_05
|
AtilliO
| 2023-08-07T10:37:53Z | 13 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Heli",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Heli",
"region:us"
] |
reinforcement-learning
| 2023-08-07T10:37:44Z |
---
library_name: ml-agents
tags:
- Heli
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Heli
---
# **ppo** Agent playing **Heli**
This is a trained model of a **ppo** agent playing **Heli**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: AtilliO/chopper_05
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
yoonlee/model
|
yoonlee
| 2023-08-07T10:35:26Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-07T08:36:24Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks cat
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - yoonlee/model
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks cat using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: True.
|
taehoon1lee/Reinforce-unit4
|
taehoon1lee
| 2023-08-07T10:32:41Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-07T10:32:31Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-unit4
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
akdeniz27/rl_course_vizdoom_health_gathering_supreme
|
akdeniz27
| 2023-08-07T10:31:04Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-07T10:30:55Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 10.96 +/- 4.84
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r akdeniz27/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
gioca91/Reinforce-Pixelcopter-PLE-v0
|
gioca91
| 2023-08-07T10:16:07Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-07T10:16:02Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 37.50 +/- 28.16
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Lilsunx/falconsun
|
Lilsunx
| 2023-08-07T10:15:07Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-07T10:14:18Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
hihellooo2024/opt-350m-lora-1024
|
hihellooo2024
| 2023-08-07T09:56:33Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-07T09:56:32Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
TheTravellingEngineer/bloom-560m-RLHF-v2
|
TheTravellingEngineer
| 2023-08-07T09:53:22Z | 1,601 | 3 |
transformers
|
[
"transformers",
"pytorch",
"bloom",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-07T09:50:42Z |
The base model is bigscience/bloom-560m. It was finetuned using RLHF; the dataset and the model prompt are similar to those of the original model.
This repo contains the merged fp16 model.
**Legal Disclaimer: This model is bound by the usage restrictions of the original BLOOM model, and comes with no warranty or guarantees of any kind.**
---
- license:
- bigscience-bloom-rail-1.0 <br>
- datasets:
- Anthropic/hh-rlhf <br>
- language:
- en <br>
- reference: https://github.com/hiyouga/LLaMA-Efficient-Tuning/tree/main
---
|
abhibarman/bloomz-560m_PROMPT_TUNING_CAUSAL_LM
|
abhibarman
| 2023-08-07T09:40:49Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-07T09:40:48Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
housearch/Llama2_Traditional_Chinese_13b_Chat
|
housearch
| 2023-08-07T09:33:59Z | 16 | 19 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"zh",
"license:openrail",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-06T01:05:43Z |
---
license: openrail
language:
- zh
---
Llama2 13B with Traditional Chinese support is here! Sharing it with everyone! 見學館 is helping to advance Taiwan's AI industry and will continue to release the latest results.
|
Bryan5/ppo-LunarLander-v2
|
Bryan5
| 2023-08-07T09:18:49Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-07T09:18:28Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 257.47 +/- 22.84
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
bavest/fin-llama-33b
|
bavest
| 2023-08-07T09:16:36Z | 0 | 6 | null |
[
"safetensors",
"finance",
"llm",
"llama",
"trading",
"dataset:bavest/fin-llama-dataset",
"license:gpl",
"region:us"
] | null | 2023-06-02T21:34:26Z |
---
license: gpl
datasets:
- bavest/fin-llama-dataset
tags:
- finance
- llm
- llama
- trading
---
# FIN-LLAMA
> Efficient Finetuning of Quantized LLMs for Finance
[Adapter Weights](https://huggingface.co/bavest/fin-llama)
| [Dataset](https://huggingface.co/datasets/bavest/fin-llama-dataset)
## Installation
To load models in 4 bits with transformers and bitsandbytes, you have to install accelerate and transformers from source
and make sure you have the latest version of the bitsandbytes library (0.39.0).
```bash
pip3 install -r requirements.txt
pip3 install -q -U bitsandbytes
pip3 install -q -U git+https://github.com/huggingface/transformers.git
pip3 install -q -U git+https://github.com/huggingface/peft.git
pip3 install -q -U git+https://github.com/huggingface/accelerate.git
```
### Other dependencies
If you want to finetune the model on a new instance, you can run
the `setup.sh` script to install the Python and CUDA packages.
```bash
bash scripts/setup.sh
```
## Finetuning
```bash
bash script/finetune.sh
```
## Usage
Quantization parameters are controlled from the `BitsAndBytesConfig`:
- Loading in 4 bits is activated through `load_in_4bit`
- The datatype used for linear-layer computations is set with `bnb_4bit_compute_dtype`
- Nested quantization is activated through `bnb_4bit_use_double_quant`
- The datatype used for quantization is specified with `bnb_4bit_quant_type`. Note that there are two supported
quantization datatypes: `fp4` (four-bit float) and `nf4` (normal four-bit float). The latter is theoretically optimal
for normally distributed weights, and we recommend using `nf4`.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
pretrained_model_name_or_path = "bavest/fin-llama-33b-merge"
model = AutoModelForCausalLM.from_pretrained(
pretrained_model_name_or_path=pretrained_model_name_or_path,
load_in_4bit=True,
device_map='auto',
torch_dtype=torch.bfloat16,
quantization_config=BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_compute_dtype=torch.bfloat16,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type='nf4'
),
)
tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name_or_path)
question = "What is the market cap of apple?"
input = "" # context if needed
prompt = f"""
A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's question.
'### Instruction:\n{question}\n\n### Input:{input}\n""\n\n### Response:
"""
input_ids = tokenizer.encode(prompt, return_tensors="pt").to('cuda:0')
with torch.no_grad():
generated_ids = model.generate(
input_ids,
do_sample=True,
top_p=0.9,
temperature=0.8,
max_length=128
)
generated_text = tokenizer.decode(
[el.item() for el in generated_ids[0]], skip_special_tokens=True
)
```
## Dataset for FIN-LLAMA
The dataset is released under bigscience-openrail-m.
You can find the dataset used to train FIN-LLAMA models on HF
at [bavest/fin-llama-dataset](https://huggingface.co/datasets/bavest/fin-llama-dataset).
## Known Issues and Limitations
Here a list of known issues and bugs. If your issue is not reported here, please open a new issue and describe the
problem.
See [QLORA](https://github.com/artidoro/qlora) for any other limitations.
1. 4-bit inference is slow. Currently, our 4-bit inference implementation is not yet integrated with the 4-bit matrix
multiplication
2. Currently, using `bnb_4bit_compute_type='fp16'` can lead to instabilities.
3. Make sure that `tokenizer.bos_token_id = 1` to avoid generation issues.
## Acknowledgements
We also thank Meta for releasing the LLaMA models without which this work would not have been possible.
This repo builds on the [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca)
, [QLORA](https://github.com/artidoro/qlora), [Chinese-Guanaco](https://github.com/jianzhnie/Chinese-Guanaco/tree/main)
and [LMSYS FastChat](https://github.com/lm-sys/FastChat) repos.
## License and Intended Use
We release the resources associated with QLoRA finetuning in this repository under the GPL-3 license. In addition, we release the FIN-LLAMA model family for base LLaMA model sizes of 7B, 13B, 33B, and 65B. These models are intended for purposes in line with the LLaMA license and require access to the LLaMA models.
## Prompts
### Act as an Accountant
> I want you to act as an accountant and come up with creative ways to manage finances. You'll need to consider budgeting, investment strategies and risk management when creating a financial plan for your client. In some cases, you may also need to provide advice on taxation laws and regulations in order to help them maximize their profits. My first suggestion request is “Create a financial plan for a small business that focuses on cost savings and long-term investments".
## Paged Optimizer
You can access the paged optimizer with the argument --optim paged_adamw_32bit
## Cite
```tex
@misc{Fin-LLAMA,
author = {William Todt, Ramtin Babaei, Pedram Babaei},
title = {Fin-LLAMA: Efficient Finetuning of Quantized LLMs for Finance},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/Bavest/fin-llama}},
}
```
|
akdeniz27/LunarLander-v2
|
akdeniz27
| 2023-08-07T09:13:22Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-07T09:13:17Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -235.91 +/- 123.58
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
|
mohammedfazilvamos/trained-model
|
mohammedfazilvamos
| 2023-08-07T09:09:49Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-07T05:41:59Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0
|
Shekhar2681/YEAR-1
|
Shekhar2681
| 2023-08-07T09:03:37Z | 0 | 0 | null |
[
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2023-08-07T09:03:37Z |
---
license: bigscience-bloom-rail-1.0
---
|
tkathuria/finetuning-emotion-model-12000-samples
|
tkathuria
| 2023-08-07T09:01:38Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-07T08:49:35Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: finetuning-emotion-model-12000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: test
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.92
- name: F1
type: f1
value: 0.920048011482891
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-emotion-model-12000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2588
- Accuracy: 0.92
- F1: 0.9200
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.30.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
danexl/finetuning-sentiment-model-3000-samples
|
danexl
| 2023-08-07T09:01:17Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-07T08:53:16Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.87
- name: F1
type: f1
value: 0.8695652173913044
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3333
- Accuracy: 0.87
- F1: 0.8696
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
TheRains/yt-special-batch12-base
|
TheRains
| 2023-08-07T08:54:52Z | 113 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"dataset:yt",
"base_model:openai/whisper-base",
"base_model:finetune:openai/whisper-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-07T07:37:18Z |
---
license: apache-2.0
base_model: openai/whisper-base
tags:
- whisper-event
- generated_from_trainer
datasets:
- yt
metrics:
- wer
model-index:
- name: Whisper Small Indonesian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: yt id
type: yt
metrics:
- name: Wer
type: wer
value: 55.89780169898191
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Indonesian
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the yt id dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9330
- Wer: 55.8978
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 12
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
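The linear scheduler with warmup listed above ramps the learning rate from 0 to the peak value over the first 500 steps, then decays it linearly to 0 at step 5000. A minimal plain-Python sketch of the multiplier (an illustration, not the Trainer's internal code):

```python
def linear_warmup_multiplier(step, warmup_steps=500, total_steps=5000):
    """Return the LR multiplier at `step` for linear warmup + linear decay."""
    if step < warmup_steps:
        return step / warmup_steps
    # Linear decay from 1.0 at the end of warmup to 0.0 at total_steps.
    return max(0.0, (total_steps - step) / (total_steps - warmup_steps))

peak_lr = 1e-05
lr_at_step = lambda step: peak_lr * linear_warmup_multiplier(step)
```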
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 1.0995 | 0.26 | 1000 | 1.1249 | 91.3559 |
| 0.9995 | 0.52 | 2000 | 1.0126 | 68.1344 |
| 0.9872 | 0.77 | 3000 | 0.9620 | 65.9425 |
| 0.7043 | 1.03 | 4000 | 0.9330 | 55.8978 |
| 0.7292 | 1.29 | 5000 | 0.9224 | 62.5057 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Aspik101/llama-30b-2048-instruct-PL-lora_unload
|
Aspik101
| 2023-08-07T08:37:07Z | 1,484 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"facebook",
"meta",
"llama-2",
"pl",
"dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-07T08:17:15Z |
---
language:
- pl
datasets:
- Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish
license: other
model_type: llama-2
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---
|
ldhldh/polyglot-ko-12.8b_lora_novel_2kstep
|
ldhldh
| 2023-08-07T08:37:00Z | 7 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-07T08:36:59Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
TheRains/yt-special-batch12-tiny
|
TheRains
| 2023-08-07T08:33:28Z | 117 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"dataset:yt",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-07T07:37:28Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- whisper-event
- generated_from_trainer
datasets:
- yt
metrics:
- wer
model-index:
- name: Whisper Small Indonesian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: yt id
type: yt
metrics:
- name: Wer
type: wer
value: 71.27942416185721
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Indonesian
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the yt id dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1267
- Wer: 71.2794
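The WER reported above is the word-level edit distance between hypothesis and reference, divided by the number of reference words. A stdlib sketch (the strings in any usage are illustrative, not drawn from this dataset):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (r != h)))   # substitution
        prev = cur
    return prev[-1] / len(ref)
```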
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 12
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.375 | 0.26 | 1000 | 1.3639 | 103.1969 |
| 1.2229 | 0.52 | 2000 | 1.2348 | 81.9791 |
| 1.2384 | 0.77 | 3000 | 1.1719 | 87.5041 |
| 0.9738 | 1.03 | 4000 | 1.1389 | 71.3832 |
| 0.9485 | 1.29 | 5000 | 1.1267 | 71.2794 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
AdirK/q-FrozenLake-v1-4x4-noSlippery
|
AdirK
| 2023-08-07T08:32:54Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-07T08:32:53Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
# `load_from_hub` is the helper defined in the Deep RL Course notebook.
model = load_from_hub(repo_id="AdirK/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
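The tabular Q-learning rule behind agents like this one is a single update toward the bootstrapped target. A minimal stdlib sketch on a toy transition (states, reward, and hyperparameters here are illustrative, not the training run's):

```python
from collections import defaultdict

def q_update(Q, s, a, r, s_next, n_actions, alpha=0.1, gamma=0.99):
    """One Q-learning step: move Q[s][a] toward r + gamma * max_b Q[s_next][b]."""
    best_next = max(Q[s_next][b] for b in range(n_actions))
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])

Q = defaultdict(lambda: defaultdict(float))
q_update(Q, s=0, a=1, r=1.0, s_next=1, n_actions=2)
```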
|
RoversX/StableBeluga-7B-Qlora-Samantha-Zh-V1
|
RoversX
| 2023-08-07T08:32:54Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"zh",
"en",
"dataset:ehartford/samantha-data",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-03T13:38:43Z |
---
datasets:
- ehartford/samantha-data
language:
- zh
- en
pipeline_tag: text-generation
---
# StableBeluga-7B-Qlora-Samantha-Zh-V1
StableBeluga-7B-Qlora-Samantha-Zh-V1 is a conversational AI assistant based on [stabilityai/StableBeluga-7B](https://huggingface.co/stabilityai/StableBeluga-7B), trained on the samantha-1.1-zh dataset from [ehartford/samantha-data](https://huggingface.co/datasets/ehartford/samantha-data).
## Model Details

Stable Beluga 7B should be used with this prompt format:
```
### System:
This is a system prompt, please behave and help the user.
### User:
Your prompt here
### Assistant:
The output of Stable Beluga 7B
```
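The prompt format above can be assembled programmatically; a small sketch (the system and user strings are placeholders):

```python
def build_prompt(system: str, user: str) -> str:
    """Assemble a Stable Beluga prompt in the documented format."""
    return (
        f"### System:\n{system}\n\n"
        f"### User:\n{user}\n\n"
        f"### Assistant:\n"
    )

prompt = build_prompt("You are a helpful assistant.", "Hello!")
```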
|
Falah/sdxl2033
|
Falah
| 2023-08-07T08:29:47Z | 2 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-07T08:25:38Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### SDXL2033 Dreambooth model trained by Falah with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
noahkln/vicuna-13b-v1.5-16k-no-cache
|
noahkln
| 2023-08-07T08:28:52Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"arxiv:2307.09288",
"arxiv:2306.05685",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-08-06T18:52:24Z |
---
inference: false
license: llama2
---
**Note:** This is a preview version. A slightly better checkpoint will be uploaded soon.
# Vicuna Model Card
## Model Details
Vicuna is a chat assistant trained by fine-tuning Llama 2 on user-shared conversations collected from ShareGPT.
- **Developed by:** [LMSYS](https://lmsys.org/)
- **Model type:** An auto-regressive language model based on the transformer architecture
- **License:** Llama 2 Community License Agreement
- **Finetuned from model:** [Llama 2](https://arxiv.org/abs/2307.09288)
### Model Sources
- **Repository:** https://github.com/lm-sys/FastChat
- **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/
- **Paper:** https://arxiv.org/abs/2306.05685
- **Demo:** https://chat.lmsys.org/
## Uses
The primary use of Vicuna is research on large language models and chatbots.
The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.
## How to Get Started with the Model
- Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights
- APIs (OpenAI API, Huggingface API): https://github.com/lm-sys/FastChat/tree/main#api
## Training Details
Vicuna v1.5 (16k) is fine-tuned from Llama 2 with supervised instruction fine-tuning and linear RoPE scaling.
The training data is around 125K conversations collected from ShareGPT.com. These conversations are packed into sequences that contain 16K tokens each.
See more details in the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf).
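Linear RoPE scaling, mentioned above, stretches the usable context by dividing positions by a scale factor before computing the rotary angles, so position 16384 at scale 4 sees the same angles position 4096 saw without scaling. A sketch under that interpretation (the head dimension and base are common Llama defaults, assumed here rather than confirmed):

```python
import math  # not strictly needed; power/multiplication suffice

def rope_angle(position: int, i: int, dim: int = 128, base: float = 10000.0,
               scale: float = 1.0) -> float:
    """Rotary angle for dimension pair i at `position`, with linear scaling."""
    inv_freq = base ** (-2 * i / dim)
    return (position / scale) * inv_freq
```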
## Evaluation

Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf) and [leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard).
## Difference between different versions of Vicuna
See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md)
|
jakobkruse/ppo-LunarLander-v2
|
jakobkruse
| 2023-08-07T07:51:28Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-07T07:51:05Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 242.24 +/- 51.56
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The zip filename is assumed to match the repo's artifact name.
checkpoint = load_from_hub("jakobkruse/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
LuizNeves/DeBERTa-v3-large-mnli-fever-anli-ling-wanli-vaccine
|
LuizNeves
| 2023-08-07T07:46:50Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"deberta-v2",
"text-classification",
"zero-shot-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
zero-shot-classification
| 2023-08-04T09:49:25Z |
---
pipeline_tag: zero-shot-classification
---
|
kyleeasterly/openllama-7b_purple-aerospace-v2-200-1
|
kyleeasterly
| 2023-08-07T07:41:26Z | 5 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-07T07:40:50Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
VinayHajare/distilhubert-finetuned-gtzan
|
VinayHajare
| 2023-08-07T07:40:35Z | 176 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-07-29T16:11:35Z |
---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.89
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5167
- Accuracy: 0.89
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.2163 | 1.0 | 113 | 2.0720 | 0.34 |
| 1.7237 | 2.0 | 226 | 1.5361 | 0.59 |
| 1.3254 | 3.0 | 339 | 1.2044 | 0.65 |
| 1.0757 | 4.0 | 452 | 1.0578 | 0.66 |
| 1.0683 | 5.0 | 565 | 0.8947 | 0.78 |
| 0.9307 | 6.0 | 678 | 0.7716 | 0.82 |
| 1.0313 | 7.0 | 791 | 0.7210 | 0.82 |
| 0.6988 | 8.0 | 904 | 0.6506 | 0.8 |
| 0.8053 | 9.0 | 1017 | 0.5944 | 0.81 |
| 0.6243 | 10.0 | 1130 | 0.5637 | 0.87 |
| 0.6238 | 11.0 | 1243 | 0.5212 | 0.89 |
| 0.4493 | 12.0 | 1356 | 0.5167 | 0.89 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.2
- Tokenizers 0.13.3
|
leahsuperb/q-Taxi-v3
|
leahsuperb
| 2023-08-07T07:40:12Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-07T07:40:10Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.69
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
# `load_from_hub` is the helper defined in the Deep RL Course notebook.
model = load_from_hub(repo_id="leahsuperb/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Aspik101/llama-30b-2048-instruct-PL-lora_adapter_model
|
Aspik101
| 2023-08-07T07:39:33Z | 0 | 0 | null |
[
"facebook",
"meta",
"pytorch",
"llama",
"llama-2",
"text-generation",
"pl",
"dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish",
"license:other",
"region:us"
] |
text-generation
| 2023-08-07T07:38:42Z |
---
language:
- pl
datasets:
- Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish
license: other
model_type: llama-2
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---
|
andrei-saceleanu/ro-offense-lora-gpt2
|
andrei-saceleanu
| 2023-08-07T07:37:19Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-07T07:37:14Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
kyleeasterly/openllama-7b_purple-aerospace-v2-200-10
|
kyleeasterly
| 2023-08-07T07:31:52Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-07T07:31:18Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
chuntali/bert-finetuned-ner
|
chuntali
| 2023-08-07T07:26:21Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-07T07:06:58Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9367130550033135
- name: Recall
type: recall
value: 0.9515314708852238
- name: F1
type: f1
value: 0.9440641175488395
- name: Accuracy
type: accuracy
value: 0.9866221227997881
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0623
- Precision: 0.9367
- Recall: 0.9515
- F1: 0.9441
- Accuracy: 0.9866
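The F1 above is the harmonic mean of the listed precision and recall; a quick stdlib check against the reported values:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Reported precision/recall from this card's evaluation results.
f1 = f1_score(0.9367130550033135, 0.9515314708852238)
```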
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0873 | 1.0 | 1756 | 0.0642 | 0.9146 | 0.9357 | 0.9250 | 0.9835 |
| 0.0341 | 2.0 | 3512 | 0.0674 | 0.9300 | 0.9456 | 0.9378 | 0.9858 |
| 0.017 | 3.0 | 5268 | 0.0623 | 0.9367 | 0.9515 | 0.9441 | 0.9866 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.13.1
- Tokenizers 0.13.3
|
vishnun/codenlbert-tiny
|
vishnun
| 2023-08-07T07:24:22Z | 141 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"code",
"nli",
"en",
"dataset:vishnun/CodevsNL",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-04T06:27:28Z |
---
license: mit
datasets:
- vishnun/CodevsNL
language:
- en
metrics:
- accuracy
library_name: transformers
pipeline_tag: text-classification
tags:
- code
- nli
---
## Preface
Code vs. natural-language classification using bert-small from prajwall; the metrics achieved are shown below.
## Training Metrics
| Epoch | Training Loss | Validation Loss | Accuracy |
|-------|---------------|-----------------|----------|
| 1 | 0.022500 | 0.012705 | 0.997203 |
| 2 | 0.008700 | 0.013107 | 0.996880 |
| 3 | 0.002700 | 0.014081 | 0.997633 |
| 4 | 0.001800 | 0.010666 | 0.997526 |
| 5 | 0.000900 | 0.010800 | 0.998063 |
## More
- GitHub repo for the installable Python package: https://github.com/Vishnunkumar
- Space on the extraction of code blocks from screenshots: https://huggingface.co/spaces/vishnun/SnapCode
|
tgieruc/Heritage-in-Digital-Age-distilbert-base-uncased-expression-rating
|
tgieruc
| 2023-08-07T07:24:19Z | 113 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-12-28T11:40:48Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Heritage-in-Digital-Age-distilbert-base-uncased-expression-rating
results: []
widget:
- text: "saint nicholas festival market"
example_title: "Title"
- text: "people buying sweets at the market."
example_title: "Caption"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Heritage-in-Digital-Age-distilbert-base-uncased-expression-rating
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0133
- Accuracy: 0.3496
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 18 | 1.9870 | 0.3333 |
| No log | 2.0 | 36 | 1.9731 | 0.3415 |
| No log | 3.0 | 54 | 2.0133 | 0.3496 |
| No log | 4.0 | 72 | 2.0809 | 0.3415 |
| No log | 5.0 | 90 | 2.1694 | 0.3008 |
| No log | 6.0 | 108 | 2.2611 | 0.2764 |
| No log | 7.0 | 126 | 2.2832 | 0.3008 |
| No log | 8.0 | 144 | 2.3670 | 0.2846 |
| No log | 9.0 | 162 | 2.4279 | 0.2683 |
| No log | 10.0 | 180 | 2.4460 | 0.3089 |
| No log | 11.0 | 198 | 2.5236 | 0.2846 |
| No log | 12.0 | 216 | 2.5896 | 0.3089 |
| No log | 13.0 | 234 | 2.6061 | 0.3008 |
| No log | 14.0 | 252 | 2.6813 | 0.2846 |
| No log | 15.0 | 270 | 2.6990 | 0.3252 |
| No log | 16.0 | 288 | 2.7439 | 0.3171 |
| No log | 17.0 | 306 | 2.7499 | 0.3415 |
| No log | 18.0 | 324 | 2.7737 | 0.3252 |
| No log | 19.0 | 342 | 2.7793 | 0.3252 |
| No log | 20.0 | 360 | 2.7775 | 0.3252 |
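The evaluation numbers reported at the top of this card match epoch 3 above, i.e. the epoch with the highest validation accuracy. Selecting that checkpoint programmatically is a one-liner (an illustrative sketch, not the Trainer internals):

```python
def best_epoch(val_accuracies):
    """Return the 1-indexed epoch with the highest validation accuracy."""
    return max(range(len(val_accuracies)), key=lambda i: val_accuracies[i]) + 1

# First five epochs from the table above
print(best_epoch([0.3333, 0.3415, 0.3496, 0.3415, 0.3008]))
```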
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
s3nh/AlpachinoNLP-Baichuan-7B-Instruction-GGML
|
s3nh
| 2023-08-07T07:23:34Z | 0 | 1 |
transformers
|
[
"transformers",
"text-generation",
"zh",
"en",
"arxiv:2307.00360",
"arxiv:2009.03300",
"license:openrail",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-07T07:08:16Z |
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGML Format model files for [This project](https://huggingface.co/AlpachinoNLP/Baichuan-7B-Instruction).
### inference
```python
from ctransformers import AutoModelForCausalLM

# output_dir / ggml_file: directory and filename of the downloaded GGML weights
llm = AutoModelForCausalLM.from_pretrained(output_dir, ggml_file,
                                           gpu_layers=32, model_type="llama")

manual_input: str = "Tell me about your last dream, please."
llm(manual_input,
    max_new_tokens=256,
    temperature=0.9,
    top_p=0.7)
```
# Original model card
# Baichuan-7B-Instruction

<!-- Provide a quick summary of what the model is/does. -->
## Introduction
Baichuan-7B-Instruction is the instruction-fine-tuned version of the Baichuan-7B series of models; the pretrained model is available at [Baichuan-7B](https://huggingface.co/baichuan-inc/Baichuan-7B).
## Demo
Below is a model demo built with Gradio:
```python
import gradio as gr
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("AlpachinoNLP/Baichuan-7B-Instruction", trust_remote_code=True, use_fast=False)
model = AutoModelForCausalLM.from_pretrained("AlpachinoNLP/Baichuan-7B-Instruction", trust_remote_code=True).half()
model.cuda()

def generate(histories, max_new_tokens=2048, do_sample=True, top_p=0.95, temperature=0.35, repetition_penalty=1.1):
    prompt = ""
    for history in histories:
        history_with_identity = "\nHuman:" + history[0] + "\n\nAssistant:" + history[1]
        prompt += history_with_identity
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
    outputs = model.generate(
        input_ids=input_ids,
        max_new_tokens=max_new_tokens,
        early_stopping=True,
        do_sample=do_sample,
        top_p=top_p,
        temperature=temperature,
        repetition_penalty=repetition_penalty,
    )
    rets = tokenizer.batch_decode(outputs, skip_special_tokens=True)
    generate_text = rets[0].replace(prompt, "")
    return generate_text

with gr.Blocks() as demo:
    chatbot = gr.Chatbot()
    msg = gr.Textbox()
    clear = gr.Button("clear")

    def user(user_message, history):
        return "", history + [[user_message, ""]]

    def bot(history):
        print(history)
        bot_message = generate(history)
        history[-1][1] = bot_message
        return history

    msg.submit(user, [msg, chatbot], [msg, chatbot], queue=False).then(
        bot, chatbot, chatbot
    )
    clear.click(lambda: None, None, chatbot, queue=False)

if __name__ == "__main__":
    demo.launch(server_name="0.0.0.0")
```
## Quantized Deployment
Baichuan-7B supports int8 and int4 quantization; users only need to change two simple lines in the inference code. Note that if you quantize in order to save GPU memory, load the original-precision model onto the CPU before quantizing; avoid passing `device_map='auto'` to `from_pretrained` (or any other argument that loads the original-precision model directly onto the GPU).
To use int8 quantization:
```python
model = AutoModelForCausalLM.from_pretrained("AlpachinoNLP/Baichuan-7B-Instruction", torch_dtype=torch.float16, trust_remote_code=True)
model = model.quantize(8).cuda()
```
Similarly, to use int4 quantization:
```python
model = AutoModelForCausalLM.from_pretrained("AlpachinoNLP/Baichuan-7B-Instruction", torch_dtype=torch.float16, trust_remote_code=True)
model = model.quantize(4).cuda()
```
## Training Details
Dataset: https://huggingface.co/datasets/shareAI/ShareGPT-Chinese-English-90k
Hardware: 8×A40
## Evaluation Results
## [CMMLU](https://github.com/haonan-li/CMMLU)
| Model 5-shot | STEM | Humanities | Social Sciences | Others | China Specific | Average |
| ---------------------------------------------------------- | :-------: | :--------: | :-------------: | :------: | :------------: | :------: |
| Baichuan-7B | 34.4 | 47.5 | 47.6 | 46.6 | 44.3 | 44.0 |
| Vicuna-13B | 31.8 | 36.2 | 37.6 | 39.5 | 34.3 | 36.3 |
| Chinese-Alpaca-Plus-13B | 29.8 | 33.4 | 33.2 | 37.9 | 32.1 | 33.4 |
| Chinese-LLaMA-Plus-13B | 28.1 | 33.1 | 35.4 | 35.1 | 33.5 | 33.0 |
| Ziya-LLaMA-13B-Pretrain | 29.0 | 30.7 | 33.8 | 34.4 | 31.9 | 32.1 |
| LLaMA-13B | 29.2 | 30.8 | 31.6 | 33.0 | 30.5 | 31.2 |
| moss-moon-003-base (16B) | 27.2 | 30.4 | 28.8 | 32.6 | 28.7 | 29.6 |
| Baichuan-13B-Base | 41.7 | 61.1 | 59.8 | 59.0 | 56.4 | 55.3 |
| Baichuan-13B-Chat | 42.8 | 62.6 | 59.7 | 59.0 | 56.1 | 55.8 |
| Baichuan-13B-Instruction | 44.50 | 61.16 | 59.07 | 58.34 | 55.55 | 55.61 |
| **Baichuan-7B-Instruction** | **34.68** | **47.38** | **47.13** | **45.11** | **44.51** | **43.57** |
| Model zero-shot | STEM | Humanities | Social Sciences | Others | China Specific | Average |
| ------------------------------------------------------------ | :-------: | :--------: | :-------------: | :-------: | :------------: | :-------: |
| [ChatGLM2-6B](https://huggingface.co/THUDM/chatglm2-6b) | 41.28 | 52.85 | 53.37 | 52.24 | 50.58 | 49.95 |
| [Baichuan-7B](https://github.com/baichuan-inc/baichuan-7B) | 32.79 | 44.43 | 46.78 | 44.79 | 43.11 | 42.33 |
| [ChatGLM-6B](https://github.com/THUDM/GLM-130B) | 32.22 | 42.91 | 44.81 | 42.60 | 41.93 | 40.79 |
| [BatGPT-15B](https://arxiv.org/abs/2307.00360) | 33.72 | 36.53 | 38.07 | 46.94 | 38.32 | 38.51 |
| [Chinese-LLaMA-7B](https://github.com/ymcui/Chinese-LLaMA-Alpaca) | 26.76 | 26.57 | 27.42 | 28.33 | 26.73 | 27.34 |
| [MOSS-SFT-16B](https://github.com/OpenLMLab/MOSS) | 25.68 | 26.35 | 27.21 | 27.92 | 26.70 | 26.88 |
| [Chinese-GLM-10B](https://github.com/THUDM/GLM) | 25.57 | 25.01 | 26.33 | 25.94 | 25.81 | 25.80 |
| [Baichuan-13B](https://github.com/baichuan-inc/Baichuan-7B) | 42.04 | 60.49 | 59.55 | 56.60 | 55.72 | 54.63 |
| [Baichuan-13B-Chat](https://github.com/baichuan-inc/Baichuan-7B) | 37.32 | 56.24 | 54.79 | 54.07 | 52.23 | 50.48 |
| Baichuan-13B-Instruction | 42.56 | 62.09 | 60.41 | 58.97 | 56.95 | 55.88 |
| **Baichuan-7B-Instruction** | **33.94** | **46.31** | **47.73** | **45.84** | **44.88** | **43.53** |
> Note: CMMLU is a comprehensive Chinese evaluation benchmark designed to assess a language model's knowledge and reasoning abilities in a Chinese-language context. We evaluated the model directly with its official [evaluation script](https://github.com/haonan-li/CMMLU). In the zero-shot table, the score for [Baichuan-13B-Chat](https://github.com/baichuan-inc/Baichuan-13B) was obtained by running the official CMMLU evaluation script ourselves; the scores for the other models come from the official [CMMLU](https://github.com/haonan-li/CMMLU/tree/master) results.
### English Evaluation
In addition to the Chinese benchmarks, we also evaluated the model on the English MMLU benchmark.
#### MMLU
[MMLU](https://arxiv.org/abs/2009.03300) is an English evaluation dataset covering 57 tasks.
We used the open-source [evaluation framework](https://github.com/hendrycks/test); the results are as follows:
| Model | Humanities | Social Sciences | STEM | Other | Average |
|----------------------------------------|-----------:|:---------------:|:----:|:-----:|:-------:|
| LLaMA-7B<sup>2</sup> | 34.0 | 38.3 | 30.5 | 38.1 | 35.1 |
| Falcon-7B<sup>1</sup> | - | - | - | - | 35.0 |
| mpt-7B<sup>1</sup> | - | - | - | - | 35.6 |
| ChatGLM-6B<sup>0</sup> | 35.4 | 41.0 | 31.3 | 40.5 | 36.9 |
| BLOOM 7B<sup>0</sup> | 25.0 | 24.4 | 26.5 | 26.4 | 25.5 |
| BLOOMZ 7B<sup>0</sup> | 31.3 | 42.1 | 34.4 | 39.0 | 36.1 |
| moss-moon-003-base (16B)<sup>0</sup> | 24.2 | 22.8 | 22.4 | 24.4 | 23.6 |
| moss-moon-003-sft (16B)<sup>0</sup> | 30.5 | 33.8 | 29.3 | 34.4 | 31.9 |
| Baichuan-7B<sup>0</sup> | 38.4 | 48.9 | 35.6 | 48.1 | 42.3 |
| **Baichuan-7B-Instruction(5-shot)** | **38.9** | **49.0** | **35.3** | **48.8** | **42.6** |
| **Baichuan-7B-Instruction(0-shot)** | **38.7** | **47.9** | **34.5** | **48.2** | **42.0** |
|
kyleeasterly/openllama-7b_purple-aerospace-v1-80-14
|
kyleeasterly
| 2023-08-07T07:22:19Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-07T07:14:03Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
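The same settings can be expressed as a `transformers` `BitsAndBytesConfig` (a sketch mirroring the list above; the field names assume the current `transformers` quantization API):

```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```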
### Framework versions
- PEFT 0.5.0.dev0
|
srikanthsri/Linxx
|
srikanthsri
| 2023-08-07T07:13:58Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-07T07:13:50Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
bitwild/Taxi-v3
|
bitwild
| 2023-08-07T07:09:13Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-07T07:03:58Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.70
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="bitwild/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
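For reference, acting greedily with a loaded Q-table is just an argmax over the current state's row (an illustrative sketch; it assumes the table is indexable as `qtable[state][action]`):

```python
def greedy_action(qtable, state):
    """Pick the action with the highest Q-value for this state."""
    row = qtable[state]
    return max(range(len(row)), key=lambda a: row[a])

toy_qtable = [[0.1, 0.5, 0.2],
              [0.0, -0.3, 0.4]]
print(greedy_action(toy_qtable, 0))  # action 1
```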
|
s3nh/llama2_13b_chat_uncensored-GGML
|
s3nh
| 2023-08-07T06:53:15Z | 0 | 7 |
transformers
|
[
"transformers",
"text-generation",
"en",
"license:openrail",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-06T18:07:01Z |
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- en
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGML Format model files for [This project](https://huggingface.co/arogov/llama2_13b_chat_uncensored).
### inference
```python
from ctransformers import AutoModelForCausalLM

# output_dir / ggml_file: directory and filename of the downloaded GGML weights
llm = AutoModelForCausalLM.from_pretrained(output_dir, ggml_file,
                                           gpu_layers=32, model_type="llama")

manual_input: str = "Tell me about your last dream, please."
llm(manual_input,
    max_new_tokens=256,
    temperature=0.9,
    top_p=0.7)
```
# Original model card
# Overview
Fine-tuned [Llama-2 13B](https://huggingface.co/TheBloke/Llama-2-13B-fp16) with an uncensored/unfiltered Wizard-Vicuna conversation dataset [ehartford/wizard_vicuna_70k_unfiltered](https://huggingface.co/datasets/ehartford/wizard_vicuna_70k_unfiltered).
Used QLoRA for fine-tuning. Trained for one epoch on an instance with two 24GB GPUs (NVIDIA RTX 3090); training took ~26.5 hours.
```
{'train_runtime': 95229.7197, 'train_samples_per_second': 0.363, 'train_steps_per_second': 0.091, 'train_loss': 0.5828390517308127, 'epoch': 1.0}
100%|██████████████████████████████████████████████████████████████████████████████████████████████| 8649/8649 [26:27:09<00:00, 11.01s/it]
Training complete, adapter model saved in models//llama2_13b_chat_uncensored_adapter
```
The version here is the fp16 HuggingFace model.
## GGML & GPTQ versions
Thanks to [TheBloke](https://huggingface.co/TheBloke), he has created the GGML and GPTQ versions:
* https://huggingface.co/TheBloke/Llama-2-13B-GGML
* https://huggingface.co/TheBloke/Llama-2-13B-GPTQ
# Prompt style
The model was trained with the following prompt style:
```
### HUMAN:
Hello
### RESPONSE:
Hi, how are you?
### HUMAN:
I'm fine.
### RESPONSE:
How can I help you?
...
```
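A small helper that renders a conversation into this prompt format might look like the following (an illustrative sketch, not code from the training repo):

```python
def build_prompt(turns):
    """Render (human, response) pairs in the ### HUMAN / ### RESPONSE format."""
    parts = []
    for human, response in turns:
        parts.append(f"### HUMAN:\n{human}\n\n### RESPONSE:\n{response}")
    return "\n\n".join(parts)

print(build_prompt([("Hello", "Hi, how are you?")]))
```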
# Training code
Code used to train the model is available [here](https://github.com/georgesung/llm_qlora).
To reproduce the results:
```
git clone https://github.com/georgesung/llm_qlora
cd llm_qlora
pip install -r requirements.txt
python train.py configs/llama2_13b_chat_uncensored.yaml
```
# Fine-tuning guide
https://georgesung.github.io/ai/qlora-ift/
|
ThaumielSparrow/nnue-unet
|
ThaumielSparrow
| 2023-08-07T06:51:37Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-08-07T06:49:17Z |
# Efficiently-Updatable Neural Network (NNUE) Refactor of Classic U-Net Architecture for Membrane Segmentation
### Developed by Luzhou Zhang - Project still under development
🧠
## Setup
Clone the repository: `git clone https://github.com/ThaumielSparrow/cremi-nnue`
Install dependencies: `pip install -r requirements.txt`
Download the CREMI training and test data [here](https://cremi.org/data/).
Modify the runtime variables in `main.py` and `train.py`, then run the program: `python main.py`
Note: This project has only been tested and validated on Python 3.9.x and 3.10.x with frozen packages. Any Python version newer than 3.7 is likely to work.
## Docs
I'm not writing documentation lol
|
TinToTin/ppo-LunarLander-v2
|
TinToTin
| 2023-08-07T06:49:56Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-07T06:49:37Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: ppo
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 264.06 +/- 23.53
name: mean_reward
verified: false
---
# **ppo** Agent playing **LunarLander-v2**
This is a trained model of a **ppo** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
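The `mean_reward` of 264.06 ± 23.53 reported above is the mean and standard deviation of per-episode returns; a minimal sketch of that computation:

```python
def mean_and_std(returns):
    """Mean and (population) standard deviation of episode returns."""
    n = len(returns)
    mean = sum(returns) / n
    variance = sum((r - mean) ** 2 for r in returns) / n
    return mean, variance ** 0.5

m, s = mean_and_std([260.0, 270.0, 262.0])
print(m, s)
```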
|
AmelieSchreiber/esm2_t12_35M_LoRA_RNA_binding
|
AmelieSchreiber
| 2023-08-07T06:36:45Z | 7 | 1 |
peft
|
[
"peft",
"transformers",
"biology",
"esm",
"esm2",
"protein",
"protein language model",
"en",
"license:mit",
"region:us"
] | null | 2023-08-07T06:21:02Z |
---
library_name: peft
license: mit
language:
- en
tags:
- transformers
- biology
- esm
- esm2
- protein
- protein language model
---
# ESM-2 RNA Binding Site LoRA
This is a Parameter Efficient Fine Tuning (PEFT) Low Rank Adaptation ([LoRA](https://huggingface.co/docs/peft/task_guides/token-classification-lora)) of
the [esm2_t12_35M_UR50D](https://huggingface.co/facebook/esm2_t12_35M_UR50D) model for the (binary) token classification task of
predicting RNA binding sites of proteins. The Github with the training script and conda env YAML can be
[found here](https://github.com/Amelie-Schreiber/esm2_LoRA_binding_sites/tree/main). You can also find a version of this model
that was fine-tuned without LoRA [here](https://huggingface.co/AmelieSchreiber/esm2_t6_8M_UR50D_rna_binding_site_predictor).
## Training procedure
This is a Low Rank Adaptation (LoRA) of `esm2_t12_35M_UR50D`,
trained on `166` protein sequences in the [RNA binding sites dataset](https://huggingface.co/datasets/AmelieSchreiber/data_of_protein-rna_binding_sites)
using a `75/25` train/test split. It achieves an evaluation loss of `0.18801096081733704`.
### Framework versions
- PEFT 0.4.0
## Using the Model
To use, try running:
```python
from transformers import AutoModelForTokenClassification, AutoTokenizer
from peft import PeftModel
import torch
# Path to the saved LoRA model
model_path = "AmelieSchreiber/esm2_t12_35M_LoRA_RNA_binding"
# ESM2 base model
base_model_path = "facebook/esm2_t12_35M_UR50D"
# Load the model
base_model = AutoModelForTokenClassification.from_pretrained(base_model_path)
loaded_model = PeftModel.from_pretrained(base_model, model_path)
# Ensure the model is in evaluation mode
loaded_model.eval()
# Load the tokenizer
loaded_tokenizer = AutoTokenizer.from_pretrained(base_model_path)
# Protein sequence for inference
protein_sequence = "MAVPETRPNHTIYINNLNEKIKKDELKKSLHAIFSRFGQILDILVSRSLKMRGQAFVIFKEVSSATNALRSMQGFPFYDKPMRIQYAKTDSDIIAKMKGT" # Replace with your actual sequence
# Tokenize the sequence
inputs = loaded_tokenizer(protein_sequence, return_tensors="pt", truncation=True, max_length=1024, padding='max_length')
# Run the model
with torch.no_grad():
    logits = loaded_model(**inputs).logits
# Get predictions
tokens = loaded_tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]) # Convert input ids back to tokens
predictions = torch.argmax(logits, dim=2)
# Define labels
id2label = {
0: "No binding site",
1: "Binding site"
}
# Print the predicted labels for each token
for token, prediction in zip(tokens, predictions[0].numpy()):
    if token not in ['<pad>', '<cls>', '<eos>']:
        print((token, id2label[prediction]))
```
|
shtif/poca-SoccerTwos
|
shtif
| 2023-08-07T06:29:26Z | 3 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-08-07T06:26:43Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: shtif/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
KUN810/lora_of_Benares_from_Honkai_Ipmact_3rd
|
KUN810
| 2023-08-07T06:26:02Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-08-07T05:28:09Z |
A LoRA of Benares from Honkai Impact 3rd. Because relatively few source images were available, it is somewhat overfitted.
The example images show, respectively, the effect of this LoRA alone and combined with the detail enhancer (add_detail.safetensors).


|
TheRains/yt-special-batch4-base
|
TheRains
| 2023-08-07T06:19:37Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"dataset:yt",
"base_model:openai/whisper-base",
"base_model:finetune:openai/whisper-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-07T05:07:54Z |
---
license: apache-2.0
base_model: openai/whisper-base
tags:
- whisper-event
- generated_from_trainer
datasets:
- yt
metrics:
- wer
model-index:
- name: Whisper Small Indonesian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: yt id
type: yt
metrics:
- name: Wer
type: wer
value: 66.04630049931912
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Indonesian
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the yt id dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0175
- Wer: 66.0463
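WER as reported above is the word-level edit distance between hypothesis and reference, divided by the reference length; a self-contained sketch:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution / match
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("saya suka kopi", "saya suka teh"))
```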
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
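The linear scheduler with 500 warmup steps above ramps the learning rate from 0 up to the base rate, then decays it linearly to 0 over the 5000 training steps; a simplified sketch (not the `transformers` implementation):

```python
def linear_schedule_lr(step, base_lr=1e-5, warmup_steps=500, total_steps=5000):
    """Linear warmup to base_lr, then linear decay to zero."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(linear_schedule_lr(250))   # halfway through warmup
print(linear_schedule_lr(5000))  # end of training
```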
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4446 | 0.09 | 1000 | 1.2313 | 91.5959 |
| 1.0599 | 0.17 | 2000 | 1.1312 | 106.3420 |
| 1.1851 | 0.26 | 3000 | 1.0801 | 77.3166 |
| 1.0325 | 0.34 | 4000 | 1.0380 | 71.8436 |
| 1.008 | 0.43 | 5000 | 1.0175 | 66.0463 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
TheRains/yt-special-batch8-tiny
|
TheRains
| 2023-08-07T06:13:43Z | 87 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"dataset:yt",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-05T05:06:05Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- whisper-event
- generated_from_trainer
datasets:
- yt
metrics:
- wer
model-index:
- name: Whisper Small Indonesian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: yt id
type: yt
metrics:
- name: Wer
type: wer
value: 76.37636988522145
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Indonesian
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the yt id dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1651
- Wer: 76.3764
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4295 | 0.17 | 1000 | 1.4094 | 110.1550 |
| 1.3042 | 0.34 | 2000 | 1.2886 | 86.2914 |
| 1.2212 | 0.52 | 3000 | 1.2206 | 84.1191 |
| 1.1306 | 0.69 | 4000 | 1.1814 | 78.1532 |
| 1.1333 | 0.86 | 5000 | 1.1651 | 76.3764 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
hw2942/bert-base-chinese-wallstreetcn-morning-news-market-overview-open-SSEC-f1-v1
|
hw2942
| 2023-08-07T06:10:49Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-chinese",
"base_model:finetune:google-bert/bert-base-chinese",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-07T06:01:30Z |
---
base_model: bert-base-chinese
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: bert-base-chinese-wallstreetcn-morning-news-market-overview-open-SSEC-f1-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-chinese-wallstreetcn-morning-news-market-overview-open-SSEC-f1-v1
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3043
- F1: 0.4167
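The F1 reported above is the harmonic mean of precision and recall on the positive class; for reference, a minimal binary-F1 sketch:

```python
def binary_f1(y_true, y_pred):
    """F1 on the positive class (label 1)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

print(binary_f1([1, 1, 0, 0], [1, 0, 1, 0]))
```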
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 38 | 0.6797 | 0.0 |
| No log | 2.0 | 76 | 0.6726 | 0.1538 |
| No log | 3.0 | 114 | 0.6660 | 0.6154 |
| No log | 4.0 | 152 | 0.7310 | 0.4545 |
| No log | 5.0 | 190 | 0.8288 | 0.5926 |
| No log | 6.0 | 228 | 0.9843 | 0.4545 |
| No log | 7.0 | 266 | 1.4159 | 0.4545 |
| No log | 8.0 | 304 | 1.9705 | 0.4348 |
| No log | 9.0 | 342 | 2.2006 | 0.4167 |
| No log | 10.0 | 380 | 2.3043 | 0.4167 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
wonwonn/distilbert-base-uncased-finetuned-emotion
|
wonwonn
| 2023-08-07T05:58:33Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-07T05:29:53Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.921
- name: F1
type: f1
value: 0.9207589885424755
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2281
- Accuracy: 0.921
- F1: 0.9208
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8331 | 1.0 | 250 | 0.3266 | 0.904 | 0.9019 |
| 0.2535 | 2.0 | 500 | 0.2281 | 0.921 | 0.9208 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
hw2942/mengzi-bert-base-fin-wallstreetcn-morning-news-market-overview-open-SSEC-f1-v1
|
hw2942
| 2023-08-07T05:49:43Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:Langboat/mengzi-bert-base-fin",
"base_model:finetune:Langboat/mengzi-bert-base-fin",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-07T05:37:16Z |
---
license: apache-2.0
base_model: Langboat/mengzi-bert-base-fin
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: mengzi-bert-base-fin-wallstreetcn-morning-news-market-overview-open-SSEC-f1-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mengzi-bert-base-fin-wallstreetcn-morning-news-market-overview-open-SSEC-f1-v1
This model is a fine-tuned version of [Langboat/mengzi-bert-base-fin](https://huggingface.co/Langboat/mengzi-bert-base-fin) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0486
- F1: 0.4706
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 38 | 0.6755 | 0.0 |
| No log | 2.0 | 76 | 0.6067 | 0.2857 |
| No log | 3.0 | 114 | 0.6956 | 0.4211 |
| No log | 4.0 | 152 | 0.5666 | 0.5714 |
| No log | 5.0 | 190 | 0.6870 | 0.4444 |
| No log | 6.0 | 228 | 0.8044 | 0.4706 |
| No log | 7.0 | 266 | 0.9209 | 0.4706 |
| No log | 8.0 | 304 | 0.9736 | 0.4706 |
| No log | 9.0 | 342 | 1.0042 | 0.4706 |
| No log | 10.0 | 380 | 1.0486 | 0.4706 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
Andyrasika/GroundingDINO
|
Andyrasika
| 2023-08-07T05:41:14Z | 0 | 3 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-02T09:55:40Z |
---
license: creativeml-openrail-m
---
## Citations
```
@inproceedings{AndyRasika,
title={Grounding DINO: Marrying DINO v2 with Grounded Pre-Training for Open-Set Object Detection},
author={Ankush Singal},
year={2023}
}
```
```
# Imports assume the GroundingDINO package (IDEA-Research/GroundingDINO) is installed
import torch
from huggingface_hub import hf_hub_download
from groundingdino.models import build_model
from groundingdino.util.slconfig import SLConfig
from groundingdino.util.utils import clean_state_dict

def load_model_hf(repo_id, filename, ckpt_config_filename, device='cpu'):
    cache_config_file = hf_hub_download(repo_id=repo_id, filename=ckpt_config_filename)
    args = SLConfig.fromfile(cache_config_file)
    model = build_model(args)
    args.device = device

    cache_file = hf_hub_download(repo_id=repo_id, filename=filename)
    checkpoint = torch.load(cache_file, map_location='cpu')
    log = model.load_state_dict(clean_state_dict(checkpoint['model']), strict=False)
    print("Model loaded from {} \n => {}".format(cache_file, log))
    _ = model.eval()
    return model

ckpt_repo_id = "Andyrasika/GroundingDINO"
ckpt_filename = "groundingdino_swint_ogc.pth"
ckpt_config_filename = "GroundingDINO_SwinT_OGC.py"

model = load_model_hf(ckpt_repo_id, ckpt_filename, ckpt_config_filename)
```
|
dkimds/q-Taxi-v3
|
dkimds
| 2023-08-07T05:09:49Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-07T05:09:45Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.76
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gymnasium as gym

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="dkimds/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
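The loaded pickle typically holds the Q-table alongside `env_id` (the key name `qtable` is an assumption here); acting with a trained Q-Learning agent is then just a greedy argmax over the current state's row. A toy sketch:

```python
import numpy as np

def greedy_action(qtable, state):
    """Pick the action with the highest Q-value for this state."""
    return int(np.argmax(qtable[state]))

# Toy Q-table: 3 states x 2 actions (the real Taxi-v3 table is 500 x 6)
qtable = np.array([[0.1, 0.9],
                   [0.5, 0.2],
                   [0.0, 0.0]])
print(greedy_action(qtable, 0))  # 1
print(greedy_action(qtable, 1))  # 0
```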
|
KnutJaegersberg/galactica-orca-wizardlm-1.3b
|
KnutJaegersberg
| 2023-08-07T04:50:46Z | 1,439 | 3 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"opt",
"text-generation",
"autotrain",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-26T13:09:09Z |
---
tags:
- autotrain
- text-generation
license: cc-by-nc-4.0
---
Prompt template:

`### System:\n{system}\n\n### User:\n{instruction}\n\n### Response:\n`
|
avanish07/Hindi-wiki-LLaMA
|
avanish07
| 2023-08-07T03:53:11Z | 6 | 0 |
peft
|
[
"peft",
"text-generation",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2023-08-06T15:27:35Z |
---
license: apache-2.0
pipeline_tag: text-generation
library_name: peft
---
## Hindi-wiki-LLaMA
Hindi Wikipedia Article Generation Model
This repository contains a language generation model trained on Hindi Wikipedia articles using the Hugging Face Transformers library. The model is based on the Llama-2 architecture and fine-tuned on a large dataset of Hindi text from Wikipedia.
## Model Details
- Base Model: Llama-2
- Pretraining Dataset: Hindi Wikipedia Articles
- Tokenizer: Hugging Face Tokenizer
- Model Architecture: Causal Language Modeling
```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

base_model_name = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(base_model_name, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token

output_dir = "./final_checkpoint"
device_map = {"": 0}
model = AutoPeftModelForCausalLM.from_pretrained(output_dir, device_map=device_map, torch_dtype=torch.bfloat16)
device = torch.device("cuda")
text = ""
inputs = tokenizer(text, return_tensors="pt").to(device)
outputs = model.generate(input_ids=inputs["input_ids"].to("cuda"), attention_mask=inputs["attention_mask"], max_new_tokens=100, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0][len(inputs["input_ids"][0]):], skip_special_tokens=True))
```
## Model Performance
The model has been trained on a substantial amount of Hindi Wikipedia articles, which allows it to generate coherent and contextually relevant text.
|
thisiskeithkwan/whisper-medium-1000steps
|
thisiskeithkwan
| 2023-08-07T03:50:36Z | 75 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"zh",
"dataset:thisiskeithkwan/canto",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-07T01:06:39Z |
---
language:
- zh
license: apache-2.0
base_model: openai/whisper-medium
tags:
- generated_from_trainer
datasets:
- thisiskeithkwan/canto
model-index:
- name: whisper-medium-cantonese
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-medium-cantonese
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the thisiskeithkwan/canto dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7006
- Cer: 3.6111
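The Cer above is the character error rate: the Levenshtein edit distance between hypothesis and reference transcripts divided by the reference length (reported here as a percentage). A self-contained sketch of the underlying computation:

```python
def cer(ref: str, hyp: str) -> float:
    """Character error rate: edit distance / len(ref)."""
    # Dynamic-programming Levenshtein distance over characters
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (r != h)))   # substitution
        prev = cur
    return prev[-1] / len(ref)

print(cer("kitten", "sitting"))  # 3 edits / 6 chars = 0.5
```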
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6458 | 0.76 | 500 | 0.7109 | 3.5960 |
| 0.4183 | 1.52 | 1000 | 0.7006 | 3.6111 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
TheRains/yt-special-batch12-small
|
TheRains
| 2023-08-07T03:49:24Z | 114 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"dataset:yt",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-06T14:31:41Z |
---
license: apache-2.0
base_model: openai/whisper-small
tags:
- whisper-event
- generated_from_trainer
datasets:
- yt
metrics:
- wer
model-index:
- name: Whisper Small Indonesian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: yt id
type: yt
metrics:
- name: Wer
type: wer
value: 40.08170676350431
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Indonesian
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the yt id dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6718
- Wer: 40.0817
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 12
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
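With `lr_scheduler_type: linear`, 500 warmup steps, and 5000 total steps, the learning rate ramps up linearly to the base value and then decays linearly to zero. A sketch of that schedule (matching the behavior of HF's `get_linear_schedule_with_warmup`):

```python
def linear_schedule_lr(step, base_lr=1e-5, warmup_steps=500, total_steps=5000):
    """Linear warmup to base_lr, then linear decay to 0."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(linear_schedule_lr(250))   # halfway through warmup: 5e-06
print(linear_schedule_lr(500))   # peak: 1e-05
print(linear_schedule_lr(5000))  # end of training: 0.0
```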
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.8104 | 0.26 | 1000 | 0.8244 | 49.7374 |
| 0.7059 | 0.52 | 2000 | 0.7380 | 47.9671 |
| 0.7127 | 0.77 | 3000 | 0.6957 | 48.8360 |
| 0.5311 | 1.03 | 4000 | 0.6718 | 40.0817 |
| 0.47 | 1.29 | 5000 | 0.6645 | 40.4254 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
shubhamagarwal92/a2c-AntBulletEnv-v0
|
shubhamagarwal92
| 2023-08-07T03:28:34Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-06T07:05:36Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1457.50 +/- 109.67
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
GFazzito/distilhubert-finetuned-gtzan
|
GFazzito
| 2023-08-07T03:21:29Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-07-21T00:04:55Z |
---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.82
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5513
- Accuracy: 0.82
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9949 | 1.0 | 113 | 1.8096 | 0.5 |
| 1.3453 | 2.0 | 226 | 1.2502 | 0.62 |
| 1.0267 | 3.0 | 339 | 0.9683 | 0.73 |
| 0.8382 | 4.0 | 452 | 0.8201 | 0.74 |
| 0.6864 | 5.0 | 565 | 0.6620 | 0.81 |
| 0.3746 | 6.0 | 678 | 0.8011 | 0.74 |
| 0.2883 | 7.0 | 791 | 0.5384 | 0.86 |
| 0.1192 | 8.0 | 904 | 0.4698 | 0.85 |
| 0.2028 | 9.0 | 1017 | 0.4610 | 0.85 |
| 0.1638 | 10.0 | 1130 | 0.5513 | 0.82 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Eggsbena/model_009
|
Eggsbena
| 2023-08-07T03:17:45Z | 29 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-07T03:04:57Z |
---
library_name: diffusers
pipeline_tag: text-to-image
---
|
saefro991/tts_bytes_css10_7lang_textpretrain_residual_freeze
|
saefro991
| 2023-08-07T03:01:26Z | 3 | 1 |
espnet
|
[
"espnet",
"audio",
"text-to-speech",
"multilingual",
"dataset:masmultts",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] |
text-to-speech
| 2023-08-07T02:45:09Z |
---
tags:
- espnet
- audio
- text-to-speech
language: multilingual
datasets:
- masmultts
license: cc-by-4.0
---
## ESPnet2 TTS model
### `saefro991/tts_bytes_css10_7lang_textpretrain_residual_freeze`
This model was trained by Takaaki-Saeki using masmultts recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout 11a7d61312439111d4996d55935ede718d494262
pip install -e .
cd egs2/masmultts/tts_byte_css10_adap_residual_freeze
./run.sh --skip_data_prep false --skip_train true --download_model saefro991/tts_bytes_css10_7lang_textpretrain_residual_freeze
```
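This recipe uses byte-level tokens (`token_type: byte` and `g2p: byte` in the config below): input text is mapped to its UTF-8 byte values, which is why the token list is mostly numeric strings. A sketch of that tokenization:

```python
def to_byte_tokens(text: str):
    """UTF-8 byte-level tokenization: each byte value becomes one token string."""
    return [str(b) for b in text.encode("utf-8")]

# ASCII characters are single bytes; accented characters expand to two bytes
print(to_byte_tokens("ab"))  # ['97', '98']
print(to_byte_tokens("ä"))   # ['195', '164']
```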
## TTS config
<details><summary>expand</summary>
```
config: conf/train.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/tts_train_raw_byte
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 1
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 200
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- loss
- min
- - train
- loss
- min
keep_nbest_models: 3
nbest_averaging_interval: 0
grad_clip: 2.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 4
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param:
- ../tts_pretrain_byte_residual/exp/tts_train_byte/2epoch.pth:tts_pretrain.encoder:tts.encoder
- ../tts_pretrain_byte_residual/exp/tts_train_byte/2epoch.pth:tts_pretrain.lid_emb:tts.lid_emb
ignore_init_mismatch: false
freeze_param:
- tts.encoder.adapter
- tts.encoder.embed
- tts.lid_emb
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 400000
valid_batch_bins: null
train_shape_file:
- exp/tts_stats_raw_byte/train/text_shape.byte
- exp/tts_stats_raw_byte/train/speech_shape
valid_shape_file:
- exp/tts_stats_raw_byte/valid/text_shape.byte
- exp/tts_stats_raw_byte/valid/speech_shape
batch_type: numel
valid_batch_type: null
fold_length:
- 150
- 204800
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - /local/11399690.1.gpu/dump/raw/train/text
- text
- text
- - /local/11399690.1.gpu/dump/raw/train/wav.scp
- speech
- sound
- - /local/11399690.1.gpu/dump/xvector/train/xvector.scp
- spembs
- kaldi_ark
- - /local/11399690.1.gpu/dump/raw/train/utt2lid
- lids
- text_int
valid_data_path_and_name_and_type:
- - /local/11399690.1.gpu/dump/raw/dev/text
- text
- text
- - /local/11399690.1.gpu/dump/raw/dev/wav.scp
- speech
- sound
- - /local/11399690.1.gpu/dump/xvector/dev/xvector.scp
- spembs
- kaldi_ark
- - /local/11399690.1.gpu/dump/raw/dev/utt2lid
- lids
- text_int
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 1.0
scheduler: noamlr
scheduler_conf:
model_size: 512
warmup_steps: 50000
token_list:
- <blank>
- <unk>
- '32'
- '101'
- '97'
- '105'
- '110'
- '116'
- '111'
- '115'
- '114'
- '108'
- '100'
- '117'
- '109'
- '99'
- '195'
- '112'
- '104'
- '118'
- '107'
- '103'
- '98'
- '122'
- '102'
- '106'
- '121'
- '119'
- '164'
- '169'
- '197'
- '196'
- '161'
- '113'
- '179'
- '173'
- '188'
- '182'
- '190'
- '208'
- '120'
- '141'
- '153'
- '160'
- '155'
- '189'
- '131'
- '186'
- '168'
- '133'
- '209'
- '130'
- '181'
- '159'
- '151'
- '175'
- '177'
- '145'
- '171'
- '174'
- '165'
- '135'
- '200'
- '180'
- '170'
- '178'
- '176'
- '163'
- '184'
- '185'
- '187'
- '129'
- '132'
- '128'
- '136'
- '143'
- '162'
- '191'
- '150'
- '206'
- '183'
- '140'
- '172'
- '167'
- '207'
- '139'
- '142'
- '147'
- '134'
- '137'
- '148'
- '194'
- '149'
- '166'
- '49'
- '50'
- '48'
- '51'
- '138'
- '56'
- '53'
- '55'
- '52'
- '54'
- '57'
- '199'
- '226'
- '210'
- '144'
- '203'
- '225'
- '202'
- '232'
- '201'
- '157'
- '231'
- '156'
- '220'
- <sos/eos>
odim: null
model_conf: {}
use_preprocessor: true
token_type: byte
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: byte
feats_extract: fbank
feats_extract_conf:
n_fft: 1024
hop_length: 256
win_length: null
fs: 16000
fmin: 80
fmax: 7600
n_mels: 80
normalize: global_mvn
normalize_conf:
stats_file: exp/tts_stats_raw_byte/train/feats_stats.npz
tts: transformer
tts_conf:
embed_dim: 0
eprenet_conv_layers: 0
eprenet_conv_filts: 0
eprenet_conv_chans: 0
dprenet_layers: 2
dprenet_units: 256
adim: 512
aheads: 8
elayers: 6
eunits: 1024
dlayers: 6
dunits: 1024
positionwise_layer_type: conv1d
positionwise_conv_kernel_size: 1
postnet_layers: 5
postnet_filts: 5
postnet_chans: 256
spk_embed_dim: 192
spk_embed_integration_type: add
use_gst: true
gst_heads: 4
gst_tokens: 16
use_masking: true
bce_pos_weight: 5.0
use_scaled_pos_enc: true
encoder_normalize_before: true
decoder_normalize_before: true
reduction_factor: 1
init_type: xavier_uniform
init_enc_alpha: 1.0
init_dec_alpha: 1.0
eprenet_dropout_rate: 0.0
dprenet_dropout_rate: 0.5
postnet_dropout_rate: 0.5
transformer_enc_dropout_rate: 0.1
transformer_enc_positional_dropout_rate: 0.1
transformer_enc_attn_dropout_rate: 0.1
transformer_dec_dropout_rate: 0.1
transformer_dec_positional_dropout_rate: 0.1
transformer_dec_attn_dropout_rate: 0.1
transformer_enc_dec_attn_dropout_rate: 0.1
use_guided_attn_loss: true
num_heads_applied_guided_attn: 2
num_layers_applied_guided_attn: 2
modules_applied_guided_attn:
- encoder-decoder
guided_attn_loss_sigma: 0.4
guided_attn_loss_lambda: 10.0
langs: 21
lang_family_encoding: false
num_lang_family: 7
use_adapter: true
adapter_type: residual
use_encoder_w_lid: true
pitch_extract: null
pitch_extract_conf: {}
pitch_normalize: null
pitch_normalize_conf: {}
energy_extract: null
energy_extract_conf: {}
energy_normalize: null
energy_normalize_conf: {}
required:
- output_dir
- token_list
version: '202209'
distributed: false
```
</details>
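The `noamlr` scheduler in the config above (with `model_size: 512`, `warmup_steps: 50000`, and base `lr: 1.0`) is the Transformer "Noam" schedule: the rate grows linearly during warmup and then decays proportionally to the inverse square root of the step. A sketch:

```python
def noam_lr(step, model_size=512, warmup_steps=50000, base_lr=1.0):
    """Noam learning-rate schedule from 'Attention Is All You Need'."""
    step = max(step, 1)  # avoid step ** -0.5 blowing up at step 0
    return base_lr * model_size ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)

# LR rises during warmup, peaks at warmup_steps, then decays ~ 1/sqrt(step)
peak = noam_lr(50000)
print(peak > noam_lr(25000) and peak > noam_lr(100000))  # True
```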
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
mrkusypl/MexicanoTV
|
mrkusypl
| 2023-08-07T02:57:15Z | 0 | 0 | null |
[
"pl",
"region:us"
] | null | 2023-08-01T20:57:37Z |
---
language:
- pl
---
<center>
<img src="https://cdn.discordapp.com/attachments/1136043395123515465/1136043395928825957/comment_7oiVx1SlO3f8Ub44Vb0718v2vZin7XUk.png"></img>
<h1>MexicanoTV (RVC v2) (Mangio Crepe 64) (400 Epochs)</h1>
**Model by:** kusy <br/>
**Voice Actor:** Jarosław Andrzejewski <br/>
**Dataset:** 00:17:40 <br/>
<audio controls>
<source src="https://cdn.discordapp.com/attachments/1136043395123515465/1137050343440650341/example.mp3" type="audio/mpeg">
</audio><br />
<audio controls>
<source src="https://cdn.discordapp.com/attachments/1136043395123515465/1137932262139248741/gadanie.wav" type="audio/wav">
</audio>
<a href="https://huggingface.co/mrkusypl/MexicanoTV/resolve/main/MexicanoTV%20%5B400%20epoch%20%2B%20RVC%20v2%5D.zip">Download or copy the link</a>
</center>
|
Yacong/lora-trained-xl
|
Yacong
| 2023-08-07T02:50:49Z | 3 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-08-07T01:33:38Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks dog
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Yacong/lora-trained-xl
These are LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




LoRA for the text encoder was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
saefro991/tts_ipa_css10_7lang_textpretrain_residual_freeze
|
saefro991
| 2023-08-07T02:39:31Z | 1 | 2 |
espnet
|
[
"espnet",
"audio",
"text-to-speech",
"multilingual",
"dataset:masmultts",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] |
text-to-speech
| 2023-08-07T02:26:59Z |
---
tags:
- espnet
- audio
- text-to-speech
language: multilingual
datasets:
- masmultts
license: cc-by-4.0
---
## ESPnet2 TTS model
### `saefro991/tts_ipa_css10_7lang_textpretrain_residual_freeze`
This model was trained by Takaaki-Saeki using masmultts recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout 11a7d61312439111d4996d55935ede718d494262
pip install -e .
cd egs2/masmultts/tts_phn_css10_adap_residual_freeze
./run.sh --skip_data_prep false --skip_train true --download_model saefro991/tts_ipa_css10_7lang_textpretrain_residual_freeze
```
## TTS config
<details><summary>expand</summary>
```
config: conf/train.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/tts_train_raw_phn_none
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 1
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 200
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- loss
- min
- - train
- loss
- min
keep_nbest_models: 3
nbest_averaging_interval: 0
grad_clip: 2.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 4
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param:
- ../tts_pretrain_phn_residual/exp/tts_train_phn_none/2epoch.pth:tts_pretrain.encoder:tts.encoder
- ../tts_pretrain_phn_residual/exp/tts_train_phn_none/2epoch.pth:tts_pretrain.lid_emb:tts.lid_emb
ignore_init_mismatch: false
freeze_param:
- tts.encoder.adapter
- tts.encoder.embed
- tts.lid_emb
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 400000
valid_batch_bins: null
train_shape_file:
- exp/tts_stats_raw_phn_none/train/text_shape.phn
- exp/tts_stats_raw_phn_none/train/speech_shape
valid_shape_file:
- exp/tts_stats_raw_phn_none/valid/text_shape.phn
- exp/tts_stats_raw_phn_none/valid/speech_shape
batch_type: numel
valid_batch_type: null
fold_length:
- 150
- 204800
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - /local/11454483.1.gpu/dump/raw/train/text
- text
- text
- - /local/11454483.1.gpu/dump/raw/train/wav.scp
- speech
- sound
- - /local/11454483.1.gpu/dump/xvector/train/xvector.scp
- spembs
- kaldi_ark
- - /local/11454483.1.gpu/dump/raw/train/utt2lid
- lids
- text_int
valid_data_path_and_name_and_type:
- - /local/11454483.1.gpu/dump/raw/dev/text
- text
- text
- - /local/11454483.1.gpu/dump/raw/dev/wav.scp
- speech
- sound
- - /local/11454483.1.gpu/dump/xvector/dev/xvector.scp
- spembs
- kaldi_ark
- - /local/11454483.1.gpu/dump/raw/dev/utt2lid
- lids
- text_int
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 1.0
scheduler: noamlr
scheduler_conf:
model_size: 512
warmup_steps: 50000
token_list:
- <blank>
- <unk>
- n
- t
- s
- l
- a
- e
- k
- d
- m
- ə
- r
- i
- p
- o
- v
- ɪ
- ˈa
- ɾ
- j
- z
- ˈɛ
- ˈe
- ɛ
- b
- ˈo
- f
- ˈi
- u
- ð
- ʁ
- h
- ɡ
- ɔ
- ʃ
- ˈu
- w
- ˌe
- ts
- ŋ
- ˌa
- æ
- iː
- ˈɪ
- ˈiː
- ˈaː
- ɹ
- ʊ
- ɑ
- ˈeː
- ˈɔ
- x
- aː
- tʃ
- ˌi
- ˌo
- tː
- oː
- ɣ
- ˈoː
- eː
- y
- θ
- ɲ
- ə-
- ʋ
- ʒ
- ˌɛ
- ˈɑ
- β
- uː
- ˈuː
- ˈaɪ
- ç
- ˈɑ̃
- ˈɔ̃
- ˈæ
- ɚ
- ˌɪ
- ɑ̃
- ˌu
- ˌɔ
- ˈy
- ɜ
- tʲ
- ˈeɪ
- ˈɑː
- ˌeː
- ʌ
- ᵻ
- ɐ
- ˌɑ
- ɨ
- ɔ̃
- dʒ
- e-
- ˌiː
- a-
- ˈʌ
- ˌʊ
- əl
- ʎ
- ˌaɪ
- aɪ
- ˈɔː
- ss
- ˈaʊ
- rʲ
- kː
- ˈoʊ
- ˌaː
- ɑː
- nʲ
- ˌoː
- ø
- ˈɛɪ
- ɛɪ
- ˌæ
- ʂ
- ɲʲ
- ˌɑː
- ɕ
- ˈai
- vʲ
- dʲ
- ai
- ei
- ɛ̃
- mʲ
- ˈø
- ɭ
- ˈɵ
- pː
- ˈɛ̃
- ɔː
- oʊ
- ˈɜː
- ˈʊ
- tɕ
- ɟ
- ˌaʊ
- ˈœ
- kʲ
- ˈuo
- ˈoi
- æː
- dʑ
- l̩
- ˈie
- ɪː
- ie
- oi
- ˌeɪ
- ˈɨ
- yː
- ˈɪː
- ˌy
- øː
- ˈʏ
- ˈɛː
- ˈoːɹ
- ˌuː
- ˌʌ
- ˈeu
- ˈei
- aʊ
- ˌoi
- bː
- ˌai
- ˈœy
- ˈøː
- ˈɑːɹ
- œ̃
- ˈæː
- au
- y-
- r̝̊
- ɵ
- ˌɵ
- c
- ˌɛɪ
- ˈɔø
- ˈyː
- ee
- pʲ
- ˈee
- bʲ
- ˈyø
- iə
- ˈiə
- ˌɨ
- ˌøː
- ɔːɹ
- ɔø
- eɪ
- ʑ
- ˈau
- ˈʊɹ
- r̝
- dʒː
- ˌeʊ
- ˈɔːɹ
- ˌoʊ
- ˌʊɹ
- ɑːɹ
- ˈæy
- ˌyː
- s^
- eu
- ˌə
- tʃː
- ˈə
- ˌei
- ea
- tsʲ
- ẽ
- ʌʊ
- œy
- ˈʌʊ
- nʲʲ
- ˌæi
- ˌʏ
- ˌɛː
- ˈɪɹ
- æi
- ˈɛɹ
- ˈæi
- ˈɔɪ
- ã
- dzː
- r̩
- ˈẽ
- ou
- œ
- ɜː
- uo
- tʲʲ
- ˌø
- ɛɹ
- ɭʲ
- iɪ
- (en)
- ʂʲ
- tsː
- ˌuo
- ˌʌʊ
- oːɹ
- ˈou
- ˌɛ̃
- ʝ
- eʊ
- ɨ̃
- ˈɔa
- ɟː
- ʊɐ
- ˈr̩
- tʃʲ
- uɪ
- ɡʲ
- ˈea
- ˌʊɐ
- ˈʊɐ
- ɛː
- ˌyi
- t^
- tɕʲ
- ˌea
- (fr)
- ɕʲ
- ʀ
- ˌɔø
- ʏ
- ˌœ
- ˈoɪ
- ˌau
- eɑ
- ˌɪː
- ˈeʊ
- ˈiɪ
- ˈã
- ˌɔː
- ˌã
- sʲ
- ˈaɪɚ
- ˌɑ̃
- ˌæː
- ey
- ˌœy
- ˈaɪə
- d̪
- ɾʲ
- ˌøi
- dː
- ˌie
- ui
- fʲ
- n̩
- ʔ
- ˌou
- yi
- ˌɑːɹ
- tsʲʲ
- ˌɐ
- ˈœ̃
- ˌyø
- dz
- ɡː
- ɾʲʲ
- ˈl̩
- ˈøy
- ˌæy
- cː
- æy
- ʊɹ
- ʑʲ
- ˌɜː
- yʊ
- ˌɛɹ
- pf
- dʑʲ
- ˌoːɹ
- ˈɨ̃
- ˈiʊ
- õ
- ɔa
- ˌɔa
- ˌee
- ˈĩ
- ˌiɪ
- ˌɔːɹ
- ˈɒ
- ja
- ĩ
- ˈũ
- ɒ
- ũ
- ʃʲ
- ɪɹ
- ju
- (de)
- yø
- ˌeu
- d^
- ˈiu
- ˈja
- øi
- ˈeɑ
- ˈyi
- ɾʲˌʲ
- ʃʲʲ
- ʃʲˌʲ
- aɪə
- ˈuɪ
- iu
- ˈõ
- iɐ
- ˌẽ
- iʊ
- ˌr̩
- ˈui
- əʊ
- u"
- ˌɔ̃
- ˈəʊ
- iy
- ʲ
- zʲˌʲ
- (it)
- ˌɒ
- ɔɪ
- ˌɪɹ
- ˈɵː
- ˈu"
- nʲˌʲ
- (nl)
- ˌl̩
- ˈey
- βː
- lʲʲ
- oɪ
- ˈiɐ
- ˌiɐ
- lʲ
- tsʲˌʲ
- xʲ
- ˌũ
- mʲʲ
- dʒʲ
- ˌeo
- ˈju
- r̩ː
- lʲˌʲ
- ˈøi
- t^ː
- əɪ
- l̩ː
- tʃˌʲ
- eo
- zʲʲ
- ˌiy
- aʲ
- ˌoɪ
- tl#
- ˈyɪ
- ˌiə
- ˌey
- øy
- dʲʲ
- ˈl̩ː
- ˈyʊ
- ˌɨ̃
- ʀʲ
- ɣː
- ˈeo
- ˈʊə
- ˌiu
- ˌøy
- ˈəɪ
- ˈeə
- aɪɚ
- ɪ^
- eə
- ˌĩ
- t̪
- vʲʲ
- (es)
- (gn)
- zʲ
- ˌõ
- əː
- bʲʲ
- (base)
- ˌəʊ
- ˈə-
- (ru)
- ˌɔɪ
- ˈæiː
- tsˌʲ
- ˈr̩ː
- ə--
- ˌn̩
- uʲ
- ˈw
- hʲ
- ˌeə
- yɪ
- fʲʲ
- ˌyʊ
- (el)
- ˌaɪɚ
- ˈəː
- ˌʊə
- ɵː
- t̪ː
- w-
- (sl)
- eʲ
- ˈa-
- ˌr̩ː
- mʲˌʲ
- (fi)
- ʒʲʲ
- çʲ
- ˌaɪə
- ˈɚ
- (lt)
- pʲʲ
- ˈɜ
- ˌuɪ
- ˌja
- (pl)
- ˈe-
- ˌe-
- (et)
- ˈoːʲ
- (kl)
- ˈõː
- (hu)
- ˈiy
- ʊə
- ˈaʲ
- ˌl̩ː
- lˌʲ
- '1'
- ʒʲ
- (cs)
- ˈææ
- ˈts-
- ts-
- ˌʊː
- ˌy"
- cʲ
- wʲ
- ˈãː
- ˈuʲ
- (ro)
- ˌɜ
- (sk)
- oːʲ
- ʊː
- ˈtl#tl#
- ʃˈʲ
- ɬ
- ˌə-
- (hr)
- tl#tl#
- ˌœ̃
- ˈʊː
- l̩ʲ
- dʒˌʲ
- tsˈʲ
- pʲˌʲ
- ˈʌː
- ˈeʲ
- aːʲ
- vʲˌʲ
- ˈj
- ()
- eːː
- ˌãː
- ˈuːʲ
- ˈeeʲ
- <sos/eos>
odim: null
model_conf: {}
use_preprocessor: true
token_type: phn
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
feats_extract: fbank
feats_extract_conf:
n_fft: 1024
hop_length: 256
win_length: null
fs: 16000
fmin: 80
fmax: 7600
n_mels: 80
normalize: global_mvn
normalize_conf:
stats_file: exp/tts_stats_raw_phn_none/train/feats_stats.npz
tts: transformer
tts_conf:
embed_dim: 0
eprenet_conv_layers: 0
eprenet_conv_filts: 0
eprenet_conv_chans: 0
dprenet_layers: 2
dprenet_units: 256
adim: 512
aheads: 8
elayers: 6
eunits: 1024
dlayers: 6
dunits: 1024
positionwise_layer_type: conv1d
positionwise_conv_kernel_size: 1
postnet_layers: 5
postnet_filts: 5
postnet_chans: 256
spk_embed_dim: 192
spk_embed_integration_type: add
use_gst: true
gst_heads: 4
gst_tokens: 16
use_masking: true
bce_pos_weight: 5.0
use_scaled_pos_enc: true
encoder_normalize_before: true
decoder_normalize_before: true
reduction_factor: 1
init_type: xavier_uniform
init_enc_alpha: 1.0
init_dec_alpha: 1.0
eprenet_dropout_rate: 0.0
dprenet_dropout_rate: 0.5
postnet_dropout_rate: 0.5
transformer_enc_dropout_rate: 0.1
transformer_enc_positional_dropout_rate: 0.1
transformer_enc_attn_dropout_rate: 0.1
transformer_dec_dropout_rate: 0.1
transformer_dec_positional_dropout_rate: 0.1
transformer_dec_attn_dropout_rate: 0.1
transformer_enc_dec_attn_dropout_rate: 0.1
use_guided_attn_loss: true
num_heads_applied_guided_attn: 2
num_layers_applied_guided_attn: 2
modules_applied_guided_attn:
- encoder-decoder
guided_attn_loss_sigma: 0.4
guided_attn_loss_lambda: 10.0
langs: 21
lang_family_encoding: false
num_lang_family: 7
use_adapter: true
adapter_type: residual
use_encoder_w_lid: true
pitch_extract: null
pitch_extract_conf: {}
pitch_normalize: null
pitch_normalize_conf: {}
energy_extract: null
energy_extract_conf: {}
energy_normalize: null
energy_normalize_conf: {}
required:
- output_dir
- token_list
version: '202209'
distributed: false
```
</details>
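The config above enables guided attention loss (`guided_attn_loss_sigma: 0.4`, `guided_attn_loss_lambda: 10.0`): encoder-decoder attention is penalized for straying from the diagonal via the Tacotron-style weight matrix W[t, n] = 1 - exp(-(n/N - t/T)^2 / (2 sigma^2)). A sketch of that weight matrix:

```python
import numpy as np

def guided_attention_weights(T_out, T_in, sigma=0.4):
    """Diagonal-penalty matrix: ~0 on the diagonal, approaching 1 far from it."""
    t = np.arange(T_out)[:, None] / T_out  # output positions, normalized
    n = np.arange(T_in)[None, :] / T_in    # input positions, normalized
    return 1.0 - np.exp(-((n - t) ** 2) / (2 * sigma ** 2))

W = guided_attention_weights(4, 4)
print(np.allclose(np.diag(W), 0.0))  # True: no penalty on the diagonal
print(W[0, 3] > W[0, 1])             # True: larger penalty further off-diagonal
```

The loss itself is then roughly `lambda * mean(W * attention)` averaged over the heads and layers it is applied to.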
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
AmelieSchreiber/esm2_t6_8M_UR50D_LoRA_RNA-binding
|
AmelieSchreiber
| 2023-08-07T02:34:08Z | 4 | 1 |
peft
|
[
"peft",
"transformers",
"biology",
"esm",
"esm2",
"protein",
"protein language model",
"en",
"license:mit",
"region:us"
] | null | 2023-08-07T00:12:16Z |
---
library_name: peft
license: mit
language:
- en
tags:
- transformers
- biology
- esm
- esm2
- protein
- protein language model
---
# ESM-2 RNA Binding Site LoRA
This is a Parameter Efficient Fine Tuning (PEFT) Low Rank Adaptation (LoRA) of
the [esm2_t6_8M_UR50D](https://huggingface.co/facebook/esm2_t6_8M_UR50D) model for the (binary) token classification task of
predicting RNA binding sites of proteins. The Github with the training script and conda env YAML can be
[found here](https://github.com/Amelie-Schreiber/esm2_LoRA_binding_sites/tree/main). You can also find a version of this model
that was fine-tuned without LoRA [here](https://huggingface.co/AmelieSchreiber/esm2_t6_8M_UR50D_rna_binding_site_predictor).
## Training procedure
This is a Low Rank Adaptation (LoRA) of `esm2_t6_8M_UR50D`,
trained on `166` protein sequences in the [RNA binding sites dataset](https://huggingface.co/datasets/AmelieSchreiber/data_of_protein-rna_binding_sites)
using a `75/25` train/test split. It achieves an evaluation loss of `0.1791934072971344`.
### Framework versions
- PEFT 0.4.0
## Using the Model
To use, try running:
```python
from transformers import AutoModelForTokenClassification, AutoTokenizer
from peft import PeftModel
import torch
# Path to the saved LoRA model
model_path = "AmelieSchreiber/esm2_t6_8M_UR50D_LoRA_RNA-binding"
# ESM2 base model
base_model_path = "facebook/esm2_t6_8M_UR50D"
# Load the model
base_model = AutoModelForTokenClassification.from_pretrained(base_model_path)
loaded_model = PeftModel.from_pretrained(base_model, model_path)
# Ensure the model is in evaluation mode
loaded_model.eval()
# Load the tokenizer
loaded_tokenizer = AutoTokenizer.from_pretrained(base_model_path)
# Protein sequence for inference
protein_sequence = "MAVPETRPNHTIYINNLNEKIKKDELKKSLHAIFSRFGQILDILVSRSLKMRGQAFVIFKEVSSATNALRSMQGFPFYDKPMRIQYAKTDSDIIAKMKGT" # Replace with your actual sequence
# Tokenize the sequence
inputs = loaded_tokenizer(protein_sequence, return_tensors="pt", truncation=True, max_length=1024, padding='max_length')
# Run the model
with torch.no_grad():
logits = loaded_model(**inputs).logits
# Get predictions
tokens = loaded_tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]) # Convert input ids back to tokens
predictions = torch.argmax(logits, dim=2)
# Define labels
id2label = {
0: "No binding site",
1: "Binding site"
}
# Print the predicted labels for each token
for token, prediction in zip(tokens, predictions[0].numpy()):
if token not in ['<pad>', '<cls>', '<eos>']:
print((token, id2label[prediction]))
```
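After printing per-token labels, it can be convenient to collect just the predicted binding-site residue positions. A minimal helper (not part of the original script; the special-token names follow the ESM-2 tokenizer) might look like:

```python
# Hypothetical convenience helper: collect 0-based residue indices predicted
# as binding sites, skipping the tokenizer's special tokens.
def binding_site_positions(tokens, predictions, special_tokens=("<pad>", "<cls>", "<eos>")):
    positions = []
    residue_index = 0
    for token, pred in zip(tokens, predictions):
        if token in special_tokens:
            continue
        if pred == 1:
            positions.append(residue_index)
        residue_index += 1
    return positions
```

The returned indices refer to positions in the amino-acid sequence itself, not to tokenizer positions, since special tokens are skipped before counting.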
|
dai1If/taxi-v3
|
dai1If
| 2023-08-07T02:33:34Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-07T02:33:30Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is the pickle-loading helper defined in the Deep RL course notebook
model = load_from_hub(repo_id="dai1If/taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
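At evaluation time, the loaded Q-table is read with a greedy policy. A minimal sketch (assuming, as in the course notebooks, that `model["qtable"]` holds the table as a NumPy array of shape `[n_states, n_actions]`):

```python
import numpy as np

def greedy_policy(qtable, state):
    """Pick the action with the highest Q-value for the given state."""
    return int(np.argmax(qtable[state]))
```

Stepping the environment with `greedy_policy(model["qtable"], state)` at each state reproduces the evaluated behavior, since no exploration is used at test time.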
|
gwlms/t5-efficient-large-dewiki-v1
|
gwlms
| 2023-08-07T01:50:15Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"de",
"dataset:gwlms/dewiki-20230701-chunks",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-25T12:10:45Z |
---
license: mit
datasets:
- gwlms/dewiki-20230701-chunks
language:
- de
---
# German Wikipedia LMs
Coming soon!
# Acknowledgements
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
jhalljhall/ppo-Huggy
|
jhalljhall
| 2023-08-07T01:45:10Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-08-07T01:45:05Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: jhalljhall/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Akemixzz/Jiwon
|
Akemixzz
| 2023-08-07T01:27:02Z | 0 | 0 | null |
[
"arxiv:1910.09700",
"region:us"
] | null | 2023-08-07T01:22:19Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
taohoang/speecht5_finetuned_fleurs_en_us
|
taohoang
| 2023-08-07T01:18:29Z | 83 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"text-to-speech",
"dataset:google/fleurs",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-08-07T01:04:34Z |
---
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
datasets:
- google/fleurs
model-index:
- name: speecht5_finetuned_fleurs_en_us
results: []
pipeline_tag: text-to-speech
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_fleurs_en_us
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the google/fleurs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4831
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- training_steps: 54
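The total train batch size reported above is derived from the per-device batch size and gradient accumulation, as a quick check:

```python
# Effective batch size = per-device batch size x gradient accumulation steps
train_batch_size = 4
gradient_accumulation_steps = 4
total_train_batch_size = train_batch_size * gradient_accumulation_steps  # 16
```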
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.719 | 0.33 | 9 | 0.5634 |
| 0.5994 | 0.67 | 18 | 0.5290 |
| 0.584 | 1.0 | 27 | 0.4924 |
| 0.5589 | 1.33 | 36 | 0.4828 |
| 0.5747 | 1.67 | 45 | 0.4848 |
| 0.5904 | 2.0 | 54 | 0.4831 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
nhat117/checkpoint-7500-dica-long-llama-3b-sft
|
nhat117
| 2023-08-07T01:04:56Z | 4 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-01T07:55:56Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float32
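The settings above can be expressed as keyword arguments, roughly as one might pass them to `transformers.BitsAndBytesConfig` (a sketch of the configuration, not the exact training code):

```python
# The quantization config above as a plain dict of keyword arguments;
# "float32" stands in for the compute dtype (a torch.dtype in real code).
bnb_config_kwargs = {
    "load_in_8bit": False,
    "load_in_4bit": True,
    "llm_int8_threshold": 6.0,
    "llm_int8_skip_modules": None,
    "llm_int8_enable_fp32_cpu_offload": False,
    "llm_int8_has_fp16_weight": False,
    "bnb_4bit_quant_type": "nf4",
    "bnb_4bit_use_double_quant": True,
    "bnb_4bit_compute_dtype": "float32",
}
```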
### Framework versions
- PEFT 0.4.0
|
brunoboat/Pixelcopter-PLE-v3
|
brunoboat
| 2023-08-07T00:42:31Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-07T00:42:27Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 43.20 +/- 35.30
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|