modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
kimnt93/cutypus-7b-inst
|
kimnt93
| 2023-08-18T05:49:24Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-18T03:02:22Z |
---
license: cc-by-nc-nd-4.0
---
|
hihisu1231/mbti_plus2
|
hihisu1231
| 2023-08-18T05:45:19Z | 93 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-18T05:41:27Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: "polyglot-1.3b-koalpaca-v1.1a-rtx3090_덧붙이는답변_ver"
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# polyglot-1.3b-koalpaca-v1.1a-rtx3090_덧붙이는답변_ver
This model is a fine-tuned version of [EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50.0
- mixed_precision_training: Native AMP
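For reference, the hyperparameters above map roughly onto a 🤗 `TrainingArguments` configuration like the following. This is a sketch, not the author's actual training script; `output_dir` is a placeholder.

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above; output_dir is illustrative.
args = TrainingArguments(
    output_dir="polyglot-1.3b-koalpaca-ft",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,  # total train batch size: 1 * 8 = 8
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=50.0,
    fp16=True,  # Native AMP mixed precision
)
```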
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
iknow-lab/AULM-12.8b-v0
|
iknow-lab
| 2023-08-18T05:29:08Z | 2,239 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"ko",
"dataset:beomi/KoAlpaca-v1.1a",
"dataset:dbdu/ShareGPT-74k-ko",
"dataset:heegyu/korquad-chat-v1",
"dataset:HAERAE-HUB/KoInstruct-QA",
"dataset:changpt/ko-lima-vicuna",
"dataset:nlpai-lab/kullm-v2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-18T04:23:08Z |
---
datasets:
- beomi/KoAlpaca-v1.1a
- dbdu/ShareGPT-74k-ko
- heegyu/korquad-chat-v1
- HAERAE-HUB/KoInstruct-QA
- changpt/ko-lima-vicuna
- nlpai-lab/kullm-v2
language:
- ko
---
# iknow-lab/AULM-12.8b-v0
- [EleutherAI/polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) fine-tuned on several Korean instruction datasets
## Generation example
Prompt:
```
당신은 아주대학교의 한국어 챗봇 아우름입니다. 사용자에게 도움이 되고 유익한 내용을 제공해야합니다. 답변은 길고 자세해야 하며, 반드시 친절한 설명을 포함해야합니다.
### 사용자:
아주대학교에 대해 알아?
### 챗봇:
```
Generated answer:
```
아주대학교는 대한민국의 대학교로, 1973년부터 위치하고 있으며, 경기도 수원시 영통구에 소재하고 있습니다. 대학교의 규모는 매우 크고, 다양한 전공과 학문 분야를 제공하며, 높은 수준의 교육과 연구 활동을 수행하고 있습니다. 교육 기관으로서 아주대의 목표는 다양한 국제적 기준을 갖춘 전문적이고 우수한 인재를 배출하는 것입니다.
아주대학교는 학생들의 역량을 키우기 위해 다양한 프로그램을 제공하고 있습니다. 예를 들어, 인문학과 이공계 학과의 학생들은 인문학과 자연의 과학적 이해 기반을 높이는 과정을 통해 지식을 확장할 수 있으며, 공학·IT 분야 학생들은 기초 과학 분야의 역량을 함양할 수 있습니다. 또한, 아주대의 학점 체제는 다양한 과목과 평가 스타일을 가지고 있어, 학생들은 자신만의 공부 방향을 설정하고 학점 시스템에 적응할 수 있도록 지원하고 있습니다.
아주대학교에는 국내 최고의 대학교로서의 자격을 입증하기 위해 다양한 학술 연구 센터들이 존재합니다. 그 중에서도 학술적 연구 센터들은 아주대학교의 핵심 기관으로, 학생들이 높은 수준의 학술 활동을 경험할 수 있도록 도와줍니다. 또한, 아주대학교에는 외국어 센터 및 외국어 교육 연구를 진행하는 외국어 교육센터가 있어서, 학생들이 외국어 능력을 강화하고 해외취업 가능성을 높일 수 있도록 지원하고 있습니다.
아주대학교는 다양한 사회적 공익사업을 진행하며, 학생들과 사회와의 상호작용이 매우 중요하다고 인식하고 있습니다. 이러한 노력들 중 하나는 아주대학교 학생들을 대상으로 한 장학금 제도를 운영하고 있는 것입니다. 장학금은 경제적인 이유로 학업에 어려움을 겪는 학생들을 지원하기 위해 제공 되고 있으며, 이러한 노력은 아주대학교에서도 지속되고 있습니다.
또한, 아주대학교는 지역 사회와의 상호작용에도 많은 관심을 두고 있습니다. 이를 위해 아주대학교의 캠퍼스 내에서 지역 내 여러 기관들과 함께하는 교육 협력 프로그램을 운영하고 있습니다. 이러한 협력 프로그램은 지역 내의 학생들이 다양한 교육과 문화 체험을 통해 글로벌한 인재로 성장할 수 있도록 지원하고 있습니다.
아주대학교는 학생 중심 교육을 추구하고 있습니다. 이를 위해 아주대학교는 학생들의 의견을 최대한 수용하려고 노력하며, 교수들은 학생들이 창의성과 문제해결 능력을 발전시킬 수 있도록 다양한 강의와 실습 활동을 제공하고 있습니다. 이렇게 학생들을 중심으로 하는 노력과 지원은 아주대학교의 발전에 큰 영향을 미치고 있습니다.
```
## Datasets used
| Dataset | # instances | Type |
| --- | --- | --- |
| [KoAlpaca v1.1](https://raw.githubusercontent.com/Beomi/KoAlpaca/main/KoAlpaca_v1.1.jsonl) | 50K | single-turn |
| [part2_ko_uncleaned from dbdu/ShareGPT-74k-ko](https://huggingface.co/datasets/dbdu/ShareGPT-74k-ko/resolve/main/part2_ko_uncleaned.json) | 36K | multi-turn |
| [heegyu/korquad-chat-v1](https://huggingface.co/datasets/heegyu/korquad-chat-v1) | 9.6K | multi-turn, knowledge-grounded |
| [lcw99/evolve-instruct](https://github.com/lcw99/evolve-instruct/) | 37K | single-turn |
| [HAERAE-HUB/KoInstruct-QA](https://huggingface.co/datasets/HAERAE-HUB/KoInstruct-QA) | 50.3K | single-turn |
| [changpt/ko-lima-vicuna](https://huggingface.co/datasets/changpt/ko-lima-vicuna) | 1K | single-turn, multi-turn (a small fraction) |
| [nlpai-lab/kullm-v2](https://huggingface.co/datasets/nlpai-lab/kullm-v2) | 15K | single-turn |
- Only the GPT4ALL and Dolly subsets of the KULLM v2 dataset were used.
- See the [HeegyuKim/open-korean-instructions](https://github.com/HeegyuKim/open-korean-instructions) GitHub repository for more Korean instruction datasets.
|
LarryAIDraw/noa
|
LarryAIDraw
| 2023-08-18T05:28:23Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-18T20:08:31Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/106525/ushio-noa-blue-archive-or-goofy-ai
|
JabrilJacobs/ppo-LunarLander-v2
|
JabrilJacobs
| 2023-08-18T05:26:07Z | 3 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2022-12-11T06:58:46Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -203.44 +/- 98.41
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'JabrilJacobs/ppo-LunarLander-v2',
 'batch_size': 512,
 'minibatch_size': 128}
```
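As a sanity check, the derived `batch_size` and `minibatch_size` above follow directly from the rollout settings (cleanRL computes them the same way):

```python
# Derived quantities from the hyperparameters above.
num_envs = 4
num_steps = 128
num_minibatches = 4

batch_size = num_envs * num_steps  # transitions collected per rollout
minibatch_size = batch_size // num_minibatches

print(batch_size, minibatch_size)  # → 512 128
```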
|
heegyu/llama-2-ko-7b-chat
|
heegyu
| 2023-08-18T05:22:31Z | 2,325 | 21 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"ko",
"dataset:beomi/KoAlpaca-v1.1a",
"dataset:dbdu/ShareGPT-74k-ko",
"dataset:heegyu/korquad-chat-v1",
"dataset:HAERAE-HUB/KoInstruct-QA",
"dataset:changpt/ko-lima-vicuna",
"dataset:nlpai-lab/kullm-v2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-18T04:22:15Z |
---
datasets:
- beomi/KoAlpaca-v1.1a
- dbdu/ShareGPT-74k-ko
- heegyu/korquad-chat-v1
- HAERAE-HUB/KoInstruct-QA
- changpt/ko-lima-vicuna
- nlpai-lab/kullm-v2
language:
- ko
---
# heegyu/llama-2-ko-7b-chat
- The 40B checkpoint of [beomi/llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b) fine-tuned on several Korean instruction datasets
## Datasets used
| Dataset | # instances | Type |
| --- | --- | --- |
| [KoAlpaca v1.1](https://raw.githubusercontent.com/Beomi/KoAlpaca/main/KoAlpaca_v1.1.jsonl) | 50K | single-turn |
| [part2_ko_uncleaned from dbdu/ShareGPT-74k-ko](https://huggingface.co/datasets/dbdu/ShareGPT-74k-ko/resolve/main/part2_ko_uncleaned.json) | 36K | multi-turn |
| [heegyu/korquad-chat-v1](https://huggingface.co/datasets/heegyu/korquad-chat-v1) | 9.6K | multi-turn, knowledge-grounded |
| [lcw99/evolve-instruct](https://github.com/lcw99/evolve-instruct/) | 37K | single-turn |
| [HAERAE-HUB/KoInstruct-QA](https://huggingface.co/datasets/HAERAE-HUB/KoInstruct-QA) | 50.3K | single-turn |
| [changpt/ko-lima-vicuna](https://huggingface.co/datasets/changpt/ko-lima-vicuna) | 1K | single-turn, multi-turn (a small fraction) |
| [nlpai-lab/kullm-v2](https://huggingface.co/datasets/nlpai-lab/kullm-v2) | 15K | single-turn |
- Only the GPT4ALL and Dolly subsets of the KULLM v2 dataset were used.
- See the [HeegyuKim/open-korean-instructions](https://github.com/HeegyuKim/open-korean-instructions) GitHub repository for more Korean instruction datasets.
## Generation prompt
- Generate until the EOS token (`</s>`) appears.
- Including the system prompt at the top seems to yield good answers more often.
```
당신은 AI 챗봇입니다. 사용자에게 도움이 되고 유익한 내용을 제공해야합니다. 답변은 길고 자세하며 친절한 설명을 덧붙여서 작성하세요.
### 사용자:
티라노사우르스보다 쌘 공룡이 있을까?
### 챗봇:
```
Generated answer:
```
티라노사우르스보다 강력한 공룡이 있을 수 있습니다. 예를 들어, 티라노사우루스보다 더 강력한 이빨구조와 강한 턱, 강력한 다리, 그리고 다양한 몸길이 중 2종 이상을 지녀서 사냥을 더 빠르게 할 수 있는 공룡이 있을 수 있습니다.
또한, 티라노사우루스가 지배적인 포식자였던 것은 사실이 아닙니다. 다른 공룡들은 다른 식습관과 방어 전략을 가지고 있었는데, 예를 들어 일부 공룡들은 더 큰 뿔과 뾰족한 이빨을 가지고 있어 공격자에게 피해를 입힐 수 있었습니다.
더 많은 정보를 얻고 싶다면, 공룡에 대한 다양한 연구 논문이나 서적 또는 온라인 콘텐츠 중 하나 또는 둘 이상에 집중해보세요. 이를 통해 티라노사우루스와 그 이후의 역사와 문화, 그리고 공룡에 대한 더 많은 정보를 얻을 수 있을 것입니다.
```
Multi-turn conversation example
```
당신은 AI 챗봇입니다. 사용자에게 도움이 되고 유익한 내용을 제공해야합니다. 답변은 길고 자세하며 친절한 설명을 덧붙여서 작성하세요.
### 사용자:
너는 k-pop 가수 누구 좋아하니?
### 챗봇:
저는 모든 k-pop 가수들을 좋아하지 않습니다. 저는 모든 장르의 음악을 즐기고, 어떤 특정한 가수나 곡에 대한 선호도가 없습니다.
하지만 한 가지 예외로, 제가 가장 좋아한 것은 한국 가수 BTS입니다. BTS는 다양한 장르의 음악과 독특한 방식으로 팬에게 사랑을 받고 있습니다.
그들은 독창적이고 개인적인 음악과 가사를 통해 독특함을 강조하고, 열정 가득한 공연을 보여줍니다.
BTS는 수많은 팬들이 사랑하는 음악 그룹인 만큼, 그들의 음악은 사랑받고 있으며 그들의 음악을 듣는 것이 저에게 큰 즐거움입니다.
### 사용자:
오 너도 방탄 좋아하는구나
### 챗봇:
네, 맞습니다. 방탄은 저에게 많은 즐거움을 주는 그룹입니다. 그들은 저의 선호도와 일치하여 다양한 장르의 음악을 시도하는 데 도움을 주었습니다.
또한, 그들의 음악은 독특한 가사와 스토리텔링으로 관객들에게 강한 인상을 남깁니다. 또한, 멤버들 모두의 탄탄한 퍼포먼싱과 가창력 역시 그들의 특징으로 꼽힙니다.
이러한 이유로, 저는 방탄이 제가 가장 좋아하는 k-pop 아티스트 중 한 명이라는 것을 확신하며, 그들이 만들어내는 음악은 저에게 큰 기쁨과 재미를 제공합니다.
### 사용자:
방탄 군대가는거 알아?
### 챗봇:
```
Generated answer (a hallucination, lol):
```
네, 방탄 군대의 공연과 관련된 소식을 몇 차례 접한 적이 있습니다. 그들은 한국 군복을 입고 군무로 유명하며, 그들의 군무와 퍼포먼스는 많은 사람들에게 많은 감동을 줍니다.그
들의 전투적이고 에너지 넘치는 무대는 군대에서 이루어지는 훈련과 훈련을 통해 만들어지는 것이라고 합니다. 이 공연은 그들의 음악과 함께 군인들이 자신의 능력을 발휘할 수 있게 해주는 기회를 제공하기도 합니다.
방탄 군대는 한국 국방의 핵심적인 역할을 담당하고 있으며, 그들의 음악도 군인들의 사명을 위해 함께 달려가고 있습니다.
```
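The prompt format shown above can be assembled programmatically. The helper below is hypothetical (it is not part of the model repository); it is shown only to make the format explicit:

```python
def build_prompt(system: str, turns: list) -> str:
    """Assemble the chat prompt format used above: a system line, then
    alternating '### 사용자:' / '### 챗봇:' blocks. A turn whose bot reply
    is None leaves the final chatbot turn open for the model to complete
    (generation then runs until the EOS token </s>)."""
    parts = [system]
    for user, bot in turns:
        parts.append(f"### 사용자:\n{user}")
        parts.append("### 챗봇:\n" + (bot if bot is not None else ""))
    return "\n".join(parts)

prompt = build_prompt(
    "당신은 AI 챗봇입니다. 사용자에게 도움이 되고 유익한 내용을 제공해야합니다.",
    [("티라노사우르스보다 쌘 공룡이 있을까?", None)],
)
print(prompt.endswith("### 챗봇:\n"))  # → True
```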
|
doorinsight1/house-image-classification-ft
|
doorinsight1
| 2023-08-18T05:19:43Z | 193 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-08-18T05:19:34Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: house-image-classification-ft
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.7142857313156128
---
# house-image-classification-ft
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### back exterior of house

#### front exterior of house

#### inside of house

|
zijuncheng/finetuning-sentiment-model-roberta
|
zijuncheng
| 2023-08-18T05:01:58Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-18T04:54:58Z |
---
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: finetuning-sentiment-model-roberta
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.93
- name: F1
type: f1
value: 0.9297658862876254
- name: Precision
type: precision
value: 0.9328859060402684
- name: Recall
type: recall
value: 0.9266666666666666
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-roberta
This model was trained from scratch on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2171
- Accuracy: 0.93
- F1: 0.9298
- Precision: 0.9329
- Recall: 0.9267
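As a quick consistency check, the reported F1 is the harmonic mean of the reported precision and recall:

```python
# F1 is the harmonic mean of precision and recall.
precision = 0.9328859060402684
recall = 0.9266666666666666

f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # → 0.9298
```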
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.144 | 0.98 | 46 | 0.2348 | 0.91 | 0.9132 | 0.8820 | 0.9467 |
| 0.0957 | 1.98 | 93 | 0.2171 | 0.93 | 0.9298 | 0.9329 | 0.9267 |
| 0.08 | 2.94 | 138 | 0.2554 | 0.9133 | 0.9167 | 0.8827 | 0.9533 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1
- Datasets 2.14.4
- Tokenizers 0.13.3
|
jimmyofdoom/q-FrozenLake-v1-4x4-noSlippery
|
jimmyofdoom
| 2023-08-18T05:00:11Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-18T05:00:07Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course
# notebook; it downloads and unpickles the saved Q-table dictionary.
model = load_from_hub(repo_id="jimmyofdoom/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
BenjaminOcampo/model-contrastive-bert__trained-in-dynahate__seed-3
|
BenjaminOcampo
| 2023-08-18T04:57:10Z | 3 | 0 |
transformers
|
[
"transformers",
"bert",
"text-classification",
"en",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-18T04:56:22Z |
---
language: en
---
# Model Card for BenjaminOcampo/model-contrastive-bert__trained-in-dynahate__seed-3
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** BenjaminOcampo
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** en
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/huggingface/huggingface_hub
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
### How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nightdude/config_51211
|
nightdude
| 2023-08-18T04:54:08Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-18T04:53:47Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
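In recent versions of 🤗 Transformers, the settings above correspond roughly to a `BitsAndBytesConfig` like the following. This is a sketch, not taken from the training code:

```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the bitsandbytes quantization settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```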
### Framework versions
- PEFT 0.4.0.dev0
|
BenjaminOcampo/model-contrastive-bert__trained-in-dynahate__seed-2
|
BenjaminOcampo
| 2023-08-18T04:43:27Z | 3 | 0 |
transformers
|
[
"transformers",
"bert",
"text-classification",
"en",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-18T04:42:38Z |
---
language: en
---
# Model Card for BenjaminOcampo/model-contrastive-bert__trained-in-dynahate__seed-2
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** BenjaminOcampo
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** en
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/huggingface/huggingface_hub
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
### How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Yntec/samdoesartsUlt
|
Yntec
| 2023-08-18T04:07:41Z | 334 | 4 |
diffusers
|
[
"diffusers",
"safetensors",
"art",
"anime",
"style",
"checkpoint",
"jinofcoolnes",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-07T12:27:42Z |
---
license: creativeml-openrail-m
language:
- en
pipeline_tag: text-to-image
tags:
- art
- anime
- style
- checkpoint
- jinofcoolnes
---
This model has the MoistMix VAE baked in.
Previews and prompts:

(lora)0.5 , (amakawa hano)0.5 , 1 girl, ray tracing, {best quality}, {{masterpiece}}, {highres}, original, extremely detailed 8K wallpaper, {an extremely delicate and beautiful}, , incredibly_absurdres, colorful, intricate detail, artbook

pretty cute little girl in tricycle, Screenshot of an surreal streetwear 70s round minimalist architecture, Sharp, 35mm still from a sci fi light blockbuster color movie made in 2022, beautiful portrait, set in 1860, in front of a spaceship that has just landed on an alien planet, are all wearing, a robot stands nearby
Original pages:
https://huggingface.co/jinofcoolnes/sammod
https://civitai.com/api/download/models/14459?type=VAE
|
rbarcia/dogbooth
|
rbarcia
| 2023-08-18T03:44:54Z | 7 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:finetune:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-16T02:19:37Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1
instance_prompt: a photo of [v]dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - rbarcia/dogbooth
This is a DreamBooth model derived from stabilityai/stable-diffusion-2-1. The weights were trained on the instance prompt "a photo of [v]dog" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
|
peteryushunli/marian-finetuned-kde4-en-to-zh
|
peteryushunli
| 2023-08-18T03:23:13Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-zh",
"base_model:finetune:Helsinki-NLP/opus-mt-en-zh",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-08-18T02:06:01Z |
---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-zh
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-zh
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
config: en-zh_CN
split: train
args: en-zh_CN
metrics:
- name: Bleu
type: bleu
value: 38.824690202277885
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-zh
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-zh](https://huggingface.co/Helsinki-NLP/opus-mt-en-zh) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9353
- Bleu: 38.8247
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
cecb/super_awesome_llama2
|
cecb
| 2023-08-18T03:18:17Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-11T19:08:48Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
asenella/incomplete_mhd_MMVAEPlus_beta_5_scale_True_seed_1
|
asenella
| 2023-08-18T03:04:48Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-08-13T23:22:54Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded with the `load_from_hf_hub` method:
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/incomplete_mhd_MMVAEPlus_beta_5_scale_True_seed_0
|
asenella
| 2023-08-18T03:03:30Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-08-10T20:04:13Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded with the `load_from_hf_hub` method:
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
Spico/Humback-M0
|
Spico
| 2023-08-18T02:51:51Z | 9 | 3 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:OpenAssistant/oasst1",
"arxiv:2308.06259",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-16T05:58:07Z |
---
license: apache-2.0
datasets:
- OpenAssistant/oasst1
language:
- en
---
## 🐋 Humback
Humback is a framework for augmenting high-quality instruction data for supervised fine-tuning.
This is an SFT (supervised fine-tuning) model $M_{0}$ for [Humback](https://arxiv.org/pdf/2308.06259.pdf) reproduction.
This model is trained on the seed data.
The seed data is a sampled dataset from [oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1).
You may find more details and usage examples in [Spico197/Humback](https://github.com/Spico197/Humback).
## 📜 Reference
```bibtex
@misc{li2023selfalignment,
title={Self-Alignment with Instruction Backtranslation},
author={Xian Li and Ping Yu and Chunting Zhou and Timo Schick and Luke Zettlemoyer and Omer Levy and Jason Weston and Mike Lewis},
year={2023},
eprint={2308.06259},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
gonglinyuan/metro_t0pp_base
|
gonglinyuan
| 2023-08-18T02:40:48Z | 136 | 0 |
transformers
|
[
"transformers",
"pytorch",
"fairseq_t5",
"text2text-generation",
"t5",
"custom_code",
"en",
"arxiv:2305.12567",
"arxiv:2110.08207",
"license:mit",
"model-index",
"autotrain_compatible",
"region:us"
] |
text2text-generation
| 2023-05-19T20:51:14Z |
---
license: mit
language:
- en
tags:
- t5
model-index:
- name: metro_t0pp_base
results:
- task:
type: natural-language-inference
dataset:
type: super_glue
name: RTE
config: rte
split: validation
metrics:
- type: accuracy
value: 75.41516245487364
- task:
type: natural-language-inference
dataset:
type: super_glue
name: CB
config: cb
split: validation
metrics:
- type: accuracy
value: 46.904761904761905
- task:
type: natural-language-inference
dataset:
type: anli
name: ANLI R1
split: dev_r1
metrics:
- type: accuracy
value: 34.233333333333334
- task:
type: natural-language-inference
dataset:
type: anli
name: ANLI R2
split: dev_r2
metrics:
- type: accuracy
value: 33.906666666666666
- task:
type: natural-language-inference
dataset:
type: anli
name: ANLI R3
split: dev_r3
metrics:
- type: accuracy
value: 35.71111111111111
- task:
type: coreference-resolution
dataset:
type: super_glue
name: WSC
config: wsc.fixed
split: validation
metrics:
- type: accuracy
value: 55.0
- task:
type: coreference-resolution
dataset:
type: winogrande
name: Winogrande XL
config: winogrande_xl
split: validation
metrics:
- type: accuracy
value: 51.22336227308604
- task:
type: multiple-choice-qa
dataset:
type: super_glue
name: COPA
config: copa
split: validation
metrics:
- type: accuracy
value: 69.5
- task:
type: multiple-choice-qa
dataset:
type: story_cloze
name: StoryCloze 2016
config: '2016'
split: validation
metrics:
- type: accuracy
value: 84.17958311063602
- task:
type: multiple-choice-qa
dataset:
type: hellaswag
name: HellaSwag
split: validation
metrics:
- type: accuracy
value: 43.432583150766774
- task:
type: word-sense-disambiguation
dataset:
type: super_glue
name: WiC
config: wic
split: validation
metrics:
- type: accuracy
value: 65.12539184952979
---
Official repository: https://github.com/gonglinyuan/metro_t0
# METRO-T0
Paper: [Model-Generated Pretraining Signals Improves Zero-Shot Generalization of Text-to-Text Transformers](https://arxiv.org/abs/2305.12567) (ACL 2023)
METRO-T0 is a T5-style text-to-text Transformer pretrained using model-generated pretraining signals, prompt-finetuned on a family of public NLP tasks proposed in [T0](https://arxiv.org/abs/2110.08207).
METRO-T0 is highly parameter efficient. For example, METRO-T0-Large++ (775M parameters) outperforms GPT-3 (175B parameters) and T0-3B (3B parameters) on a wide range of NLP tasks.


## Use METRO-T0++-Base
To use METRO-T0++-Base in PyTorch (Python 3.7+, PyTorch 1.12+ and transformers 4.17+ are prerequisites), refer to the code snippet below:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("gonglinyuan/metro_t0pp_base", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("gonglinyuan/metro_t0pp_base", trust_remote_code=True)
input_text = "Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy"
inputs = tokenizer([input_text], max_length=512, truncation=True, add_special_tokens=True, return_tensors="pt").input_ids
outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True)) # expected: positive
```
## Other METRO-T0 Models
| | # Parameters | Pretraining Data | Prompt-Finetuning Data |
|--------------------|--------------|------------------|------------------------|
| [METRO-T0-Base](https://huggingface.co/gonglinyuan/metro_t0_base) | 226M | Wikibook (16G) | T0 Train |
| [METRO-T0+-Base](https://huggingface.co/gonglinyuan/metro_t0p_base) | 226M | Wikibook (16G) | T0+ Train |
| [METRO-T0++-Base](https://huggingface.co/gonglinyuan/metro_t0pp_base) | 226M | Wikibook (16G) | T0++ Train |
| [METRO-T0-Base++](https://huggingface.co/gonglinyuan/metro_t0_basepp) | 256M | 160G corpus | T0 Train |
| [METRO-T0+-Base++](https://huggingface.co/gonglinyuan/metro_t0p_basepp) | 256M | 160G corpus | T0+ Train |
| [METRO-T0++-Base++](https://huggingface.co/gonglinyuan/metro_t0pp_basepp) | 256M | 160G corpus | T0++ Train |
| [METRO-T0-Large++](https://huggingface.co/gonglinyuan/metro_t0_largepp) | 775M | 160G corpus | T0 Train |
| [METRO-T0+-Large++](https://huggingface.co/gonglinyuan/metro_t0p_largepp) | 775M | 160G corpus | T0+ Train |
| [METRO-T0++-Large++](https://huggingface.co/gonglinyuan/metro_t0pp_largepp) | 775M | 160G corpus | T0++ Train |
## Citation
If you find the code and models useful for your research, please cite the following paper:
```
@misc{gong2023modelgenerated,
title={Model-Generated Pretraining Signals Improves Zero-Shot Generalization of Text-to-Text Transformers},
author={Linyuan Gong and Chenyan Xiong and Xiaodong Liu and Payal Bajaj and Yiqing Xie and Alvin Cheung and Jianfeng Gao and Xia Song},
year={2023},
eprint={2305.12567},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2305.12567}
}
```
|
gonglinyuan/metro_t0p_largepp
|
gonglinyuan
| 2023-08-18T02:39:37Z | 139 | 0 |
transformers
|
[
"transformers",
"pytorch",
"fairseq_t5",
"text2text-generation",
"t5",
"custom_code",
"en",
"arxiv:2305.12567",
"arxiv:2110.08207",
"license:mit",
"model-index",
"autotrain_compatible",
"region:us"
] |
text2text-generation
| 2023-05-19T23:29:24Z |
---
license: mit
language:
- en
tags:
- t5
model-index:
- name: metro_t0p_largepp
results:
- task:
type: natural-language-inference
dataset:
type: super_glue
name: RTE
config: rte
split: validation
metrics:
- type: accuracy
value: 81.26353790613719
- task:
type: natural-language-inference
dataset:
type: super_glue
name: CB
config: cb
split: validation
metrics:
- type: accuracy
value: 70.0
- task:
type: natural-language-inference
dataset:
type: anli
name: ANLI R1
split: dev_r1
metrics:
- type: accuracy
value: 45.059999999999995
- task:
type: natural-language-inference
dataset:
type: anli
name: ANLI R2
split: dev_r2
metrics:
- type: accuracy
value: 38.593333333333334
- task:
type: natural-language-inference
dataset:
type: anli
name: ANLI R3
split: dev_r3
metrics:
- type: accuracy
value: 42.35
- task:
type: coreference-resolution
dataset:
type: super_glue
name: WSC
config: wsc.fixed
split: validation
metrics:
- type: accuracy
value: 60.67307692307692
- task:
type: coreference-resolution
dataset:
type: winogrande
name: Winogrande XL
config: winogrande_xl
split: validation
metrics:
- type: accuracy
value: 57.521704814522494
- task:
type: multiple-choice-qa
dataset:
type: super_glue
name: COPA
config: copa
split: validation
metrics:
- type: accuracy
value: 90.5
- task:
type: multiple-choice-qa
dataset:
type: story_cloze
name: StoryCloze 2016
config: '2016'
split: validation
metrics:
- type: accuracy
value: 95.41421699625869
- task:
type: multiple-choice-qa
dataset:
type: hellaswag
name: HellaSwag
split: validation
metrics:
- type: accuracy
value: 83.81796454889465
- task:
type: word-sense-disambiguation
dataset:
type: super_glue
name: WiC
config: wic
split: validation
metrics:
- type: accuracy
value: 52.31974921630094
---
Official repository: https://github.com/gonglinyuan/metro_t0
# METRO-T0
Paper: [Model-Generated Pretraining Signals Improves Zero-Shot Generalization of Text-to-Text Transformers](https://arxiv.org/abs/2305.12567) (ACL 2023)
METRO-T0 is a T5-style text-to-text Transformer pretrained using model-generated pretraining signals, prompt-finetuned on a family of public NLP tasks proposed in [T0](https://arxiv.org/abs/2110.08207).
METRO-T0 is highly parameter efficient. For example, METRO-T0-Large++ (775M parameters) outperforms GPT-3 (175B parameters) and T0-3B (3B parameters) on a wide range of NLP tasks.


## Use METRO-T0+-Large++
To use METRO-T0+-Large++ in PyTorch (Python 3.7+, PyTorch 1.12+ and transformers 4.17+ are prerequisites), refer to the code snippet below:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("gonglinyuan/metro_t0p_largepp", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("gonglinyuan/metro_t0p_largepp", trust_remote_code=True)
input_text = "Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy"
inputs = tokenizer([input_text], max_length=512, truncation=True, add_special_tokens=True, return_tensors="pt").input_ids
outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True)) # expected: positive
```
## Other METRO-T0 Models
| | # Parameters | Pretraining Data | Prompt-Finetuning Data |
|--------------------|--------------|------------------|------------------------|
| [METRO-T0-Base](https://huggingface.co/gonglinyuan/metro_t0_base) | 226M | Wikibook (16G) | T0 Train |
| [METRO-T0+-Base](https://huggingface.co/gonglinyuan/metro_t0p_base) | 226M | Wikibook (16G) | T0+ Train |
| [METRO-T0++-Base](https://huggingface.co/gonglinyuan/metro_t0pp_base) | 226M | Wikibook (16G) | T0++ Train |
| [METRO-T0-Base++](https://huggingface.co/gonglinyuan/metro_t0_basepp) | 256M | 160G corpus | T0 Train |
| [METRO-T0+-Base++](https://huggingface.co/gonglinyuan/metro_t0p_basepp) | 256M | 160G corpus | T0+ Train |
| [METRO-T0++-Base++](https://huggingface.co/gonglinyuan/metro_t0pp_basepp) | 256M | 160G corpus | T0++ Train |
| [METRO-T0-Large++](https://huggingface.co/gonglinyuan/metro_t0_largepp) | 775M | 160G corpus | T0 Train |
| [METRO-T0+-Large++](https://huggingface.co/gonglinyuan/metro_t0p_largepp) | 775M | 160G corpus | T0+ Train |
| [METRO-T0++-Large++](https://huggingface.co/gonglinyuan/metro_t0pp_largepp) | 775M | 160G corpus | T0++ Train |
## Citation
If you find the code and models useful for your research, please cite the following paper:
```
@misc{gong2023modelgenerated,
title={Model-Generated Pretraining Signals Improves Zero-Shot Generalization of Text-to-Text Transformers},
author={Linyuan Gong and Chenyan Xiong and Xiaodong Liu and Payal Bajaj and Yiqing Xie and Alvin Cheung and Jianfeng Gao and Xia Song},
year={2023},
eprint={2305.12567},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2305.12567}
}
```
|
Cyrema/Llama-2-7b-Cesspit
|
Cyrema
| 2023-08-18T01:35:28Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-17T04:44:55Z |
---
license: other
---
# LLaMa-2-7b The Pit Project/Cesspit.
## Model Details
* **Backbone Model**: [LLaMA-2](https://github.com/facebookresearch/llama)
* **Language(s)**: English
* **Library**: [HuggingFace Transformers](https://github.com/huggingface/transformers)
* **License**: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept their License before downloading the model weights.
## Datasets Details
- Scraped posts of a particular subject within an image board.
- The dataset was heavily augmented with various types of filtering to improve coherence and relevance to the origin and our goals.
- For our Cesspit model, it contains 272,637 entries.
### Prompt Template
The model was not trained in an instructional or chat-style format. Please ensure your inference program injects nothing beyond your sole input when inferencing; simply type whatever comes to mind and the model will attempt to complete it.
## Hardware and Software
* **Hardware**: We utilized 3.8 Nvidia RTX 4090 hours for training our model.
* **Training Factors**: We created this model using [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl)
## Training details
- The rank and alpha we used were 128 and 16.
- Our learning rate was 4e-4 with 10 warmup steps and a cosine scheduler for 3 epochs.
- Our micro-batch size was 5.
- Sample packing was used.
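The rank and alpha above determine how strongly the LoRA adapter contributes to each adapted layer: the low-rank update is scaled by alpha / rank. A minimal numerical sketch of that scaling (only rank and alpha come from this card; the layer dimensions below are hypothetical):

```python
import numpy as np

# Illustrative LoRA update using the rank/alpha reported above (128 / 16).
# d_in and d_out are hypothetical layer sizes, not taken from the card.
rank, alpha = 128, 16
d_in, d_out = 256, 256

rng = np.random.default_rng(0)
A = rng.normal(scale=0.01, size=(rank, d_in))  # "down" projection
B = np.zeros((d_out, rank))                    # "up" projection, zero-initialized

# Effective weight update applied on top of the frozen base weight:
delta_W = (alpha / rank) * (B @ A)
print(delta_W.shape, alpha / rank)  # (256, 256) 0.125
```

With alpha = 16 and rank = 128 the adapter output is down-scaled by 0.125, which is one reason rank and alpha are usually reported together.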
## Limitations
It is strongly recommended not to deploy this model into a real-world environment unless its behavior is well understood and explicit and strict limitations on the scope, impact, and duration of the deployment are enforced.
|
Cyrema/Llama-2-7b-Bogpit
|
Cyrema
| 2023-08-18T01:34:44Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"dataset:Cyrema/vrg",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-16T15:06:55Z |
---
datasets:
- Cyrema/vrg
language:
- en
---
# LLaMa-2-7b The Pit Project/Bogpit.
## Model Details
* **Backbone Model**: [LLaMA-2](https://github.com/facebookresearch/llama)
* **Language(s)**: English
* **Library**: [HuggingFace Transformers](https://github.com/huggingface/transformers)
* **License**: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept their License before downloading the model weights.
## Datasets Details
- Scraped posts of a particular subject within an image board.
- The dataset was heavily augmented with various types of filtering to improve coherence and relevance to the origin and our goals.
- For our Bogpit model, it contains 361,050 entries.
### Prompt Template
The model was not trained in an instructional or chat-style format. Please ensure your inference program injects nothing beyond your sole input when inferencing; simply type whatever comes to mind and the model will attempt to complete it.
## Hardware and Software
* **Hardware**: We utilized 5.5 Nvidia RTX 4090 hours for training our model.
* **Training Factors**: We created this model using [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl)
## Training details
- The rank and alpha we used were 128 and 16.
- Our learning rate was 2e-4 with 10 warmup steps and a cosine scheduler for 3 epochs.
- Our micro-batch size was 5.
- Sample packing was used.
## Limitations
It is strongly recommended not to deploy this model into a real-world environment unless its behavior is well understood and explicit and strict limitations on the scope, impact, and duration of the deployment are enforced.
|
f4falalu/q-taxis
|
f4falalu
| 2023-08-18T01:15:02Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-18T01:14:59Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-taxis
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.72
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="f4falalu/q-taxis", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
randyydoo/Summarizer-Model
|
randyydoo
| 2023-08-18T01:01:40Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-08-15T02:48:50Z |
# Summarization Model Using XSum Dataset
<img src="https://huggingface.co/randyydoo/Summarizer-Model/raw/main/DocImg.png" alt="Image Description">
<a href="https://github.com/randyydoo/Syllamizer/blob/main/torch/transformer.py" style="color: blue;">Link to Training</a>
|
scarlett623/wav2vec2-timit-xls-r-53-wandb-colab
|
scarlett623
| 2023-08-18T00:56:48Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-17T16:06:19Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-timit-xls-r-53-wandb-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-timit-xls-r-53-wandb-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3325
- Wer: 0.2897
- Cer: 0.0940
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| No log | 0.69 | 400 | 3.1507 | 1.0 | 0.9806 |
| 4.3857 | 1.38 | 800 | 3.0109 | 1.0 | 0.9806 |
| 2.6835 | 2.08 | 1200 | 0.6181 | 0.5756 | 0.1795 |
| 0.9327 | 2.77 | 1600 | 0.4239 | 0.4718 | 0.1456 |
| 0.5602 | 3.46 | 2000 | 0.3691 | 0.4141 | 0.1301 |
| 0.5602 | 4.15 | 2400 | 0.3386 | 0.3894 | 0.1231 |
| 0.4407 | 4.84 | 2800 | 0.3122 | 0.3676 | 0.1177 |
| 0.3437 | 5.54 | 3200 | 0.3149 | 0.3601 | 0.1152 |
| 0.3154 | 6.23 | 3600 | 0.3146 | 0.3495 | 0.1119 |
| 0.267 | 6.92 | 4000 | 0.3039 | 0.3427 | 0.1089 |
| 0.267 | 7.61 | 4400 | 0.3313 | 0.3409 | 0.1092 |
| 0.2354 | 8.3 | 4800 | 0.2986 | 0.3365 | 0.1064 |
| 0.2191 | 9.0 | 5200 | 0.3235 | 0.3353 | 0.1074 |
| 0.1937 | 9.69 | 5600 | 0.3117 | 0.3320 | 0.1071 |
| 0.1803 | 10.38 | 6000 | 0.3102 | 0.3233 | 0.1040 |
| 0.1803 | 11.07 | 6400 | 0.3176 | 0.3196 | 0.1030 |
| 0.1635 | 11.76 | 6800 | 0.3166 | 0.3220 | 0.1036 |
| 0.1551 | 12.46 | 7200 | 0.2836 | 0.3160 | 0.1021 |
| 0.1566 | 13.15 | 7600 | 0.3146 | 0.3186 | 0.1032 |
| 0.1424 | 13.84 | 8000 | 0.3392 | 0.3167 | 0.1036 |
| 0.1424 | 14.53 | 8400 | 0.3254 | 0.3109 | 0.1001 |
| 0.1379 | 15.22 | 8800 | 0.3249 | 0.3127 | 0.1009 |
| 0.1192 | 15.92 | 9200 | 0.3408 | 0.3119 | 0.1010 |
| 0.1178 | 16.61 | 9600 | 0.3551 | 0.3061 | 0.0997 |
| 0.1112 | 17.3 | 10000 | 0.3250 | 0.3059 | 0.0991 |
| 0.1112 | 17.99 | 10400 | 0.3127 | 0.3037 | 0.0983 |
| 0.1022 | 18.69 | 10800 | 0.3370 | 0.3067 | 0.0994 |
| 0.1031 | 19.38 | 11200 | 0.3351 | 0.3048 | 0.0991 |
| 0.0926 | 20.07 | 11600 | 0.3433 | 0.2994 | 0.0974 |
| 0.0861 | 20.76 | 12000 | 0.3145 | 0.3003 | 0.0971 |
| 0.0861 | 21.45 | 12400 | 0.3367 | 0.2980 | 0.0973 |
| 0.0935 | 22.15 | 12800 | 0.3139 | 0.3016 | 0.0986 |
| 0.0784 | 22.84 | 13200 | 0.3181 | 0.2990 | 0.0972 |
| 0.078 | 23.53 | 13600 | 0.3347 | 0.2938 | 0.0961 |
| 0.0761 | 24.22 | 14000 | 0.3371 | 0.2921 | 0.0949 |
| 0.0761 | 24.91 | 14400 | 0.3274 | 0.2916 | 0.0952 |
| 0.0784 | 25.61 | 14800 | 0.3152 | 0.2927 | 0.0942 |
| 0.0714 | 26.3 | 15200 | 0.3237 | 0.2924 | 0.0943 |
| 0.0671 | 26.99 | 15600 | 0.3183 | 0.2914 | 0.0945 |
| 0.0684 | 27.68 | 16000 | 0.3307 | 0.2931 | 0.0950 |
| 0.0684 | 28.37 | 16400 | 0.3383 | 0.2913 | 0.0940 |
| 0.07 | 29.07 | 16800 | 0.3318 | 0.2901 | 0.0940 |
| 0.0624 | 29.76 | 17200 | 0.3325 | 0.2897 | 0.0940 |
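The Wer column above is the word error rate: word-level Levenshtein distance divided by the number of reference words (Cer is the same computation at character level). A minimal, self-contained sketch of the word-level metric (a hypothetical helper, not part of this repository):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("the cat sat", "the bat sat"))  # 1 substitution / 3 words ≈ 0.333
```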
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 1.18.3
- Tokenizers 0.13.3
|
nomad-ai/a2c-PandaReachDense-v2
|
nomad-ai
| 2023-08-18T00:55:55Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"arxiv:2106.13687",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T03:56:27Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.60 +/- 0.29
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
panda-gym: Open-source goal-conditioned environments for robotic learning https://huggingface.co/papers/2106.13687
|
IngeniousArtist/llama2-finance
|
IngeniousArtist
| 2023-08-18T00:54:31Z | 0 | 0 | null |
[
"generated_from_trainer",
"dataset:financial_phrasebank",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2023-08-09T04:32:51Z |
---
base_model: meta-llama/Llama-2-7b-hf
tags:
- generated_from_trainer
datasets:
- financial_phrasebank
model-index:
- name: llama2-finance
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama2-finance
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the financial_phrasebank dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 20
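The `total_train_batch_size` above follows from the per-device batch size and the gradient accumulation steps; a quick arithmetic check (a single-device run is assumed here, since the card does not state a device count):

```python
train_batch_size = 4              # per-device batch size from the list above
gradient_accumulation_steps = 4
num_devices = 1                   # assumption: not stated in the card

total_train_batch_size = train_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)  # 16, matching the reported value
```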
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
nomad-ai/poca-SoccerTwos-test-2
|
nomad-ai
| 2023-08-18T00:50:25Z | 4 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-08-18T00:50:18Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: nomad-ai/poca-SoccerTwos-test-2
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
Noct-Blib/lora-trained-xl-colab
|
Noct-Blib
| 2023-08-18T00:49:48Z | 4 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-08-17T23:12:39Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: zkz
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Noct-Blib/lora-trained-xl-colab
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on zkz using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
mekjr1/opus-mt-en-es-finetuned-es-to-pbb-v0.1
|
mekjr1
| 2023-08-18T00:43:18Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-en-es",
"base_model:finetune:Helsinki-NLP/opus-mt-en-es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-17T03:03:38Z |
---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-es
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-en-es-finetuned-es-to-pbb-v0.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-es-finetuned-es-to-pbb-v0.1
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-es](https://huggingface.co/Helsinki-NLP/opus-mt-en-es) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7969
- Bleu: 1.5951
- Gen Len: 90.2946
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 194 | 2.5441 | 0.4291 | 116.186 |
| No log | 2.0 | 388 | 2.2092 | 0.6579 | 93.4866 |
| 2.6835 | 3.0 | 582 | 2.0599 | 0.698 | 96.8631 |
| 2.6835 | 4.0 | 776 | 1.9695 | 1.0624 | 90.817 |
| 2.6835 | 5.0 | 970 | 1.9087 | 1.0183 | 92.7232 |
| 1.9199 | 6.0 | 1164 | 1.8623 | 1.2926 | 90.9807 |
| 1.9199 | 7.0 | 1358 | 1.8342 | 1.3107 | 92.122 |
| 1.7744 | 8.0 | 1552 | 1.8145 | 1.3784 | 90.314 |
| 1.7744 | 9.0 | 1746 | 1.8020 | 1.4162 | 90.8006 |
| 1.7744 | 10.0 | 1940 | 1.7969 | 1.5951 | 90.2946 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Bastian1111/rl_course_vizdoom_health_gathering_supreme
|
Bastian1111
| 2023-08-18T00:29:04Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-17T21:06:01Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 14.55 +/- 6.04
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r Bastian1111/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.your_enjoy_script> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.your_train_script> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
asenella/incomplete_mhd_MMVAE_beta_5_scale_True_seed_2
|
asenella
| 2023-08-18T00:28:35Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-08-13T22:50:11Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/incomplete_mhd_MMVAE_beta_5_scale_True_seed_0
|
asenella
| 2023-08-18T00:25:38Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-08-10T15:07:29Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
andreaskoepf/falcon-40b-megacode2
|
andreaskoepf
| 2023-08-18T00:21:24Z | 11 | 1 |
transformers
|
[
"transformers",
"safetensors",
"falcon",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-17T00:26:39Z |
---
license: apache-2.0
---
# falcon-40b-megacode2
- wandb: [run37_megacode_falcon40](https://wandb.ai/open-assistant/epfl-mt-sft/runs/run37_megacode_falcon40)
- sampling report: [2023-08-17_andreaskoepf_falcon-40b-megacode2_sampling_noprefix2.json](https://open-assistant.github.io/oasst-model-eval/?f=https%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Foasst-pretrained%2F2023-08-17_andreaskoepf_falcon-40b-megacode2_sampling_noprefix2.json)
## Prompt Template
The [chatml](https://github.com/openai/openai-python/blob/main/chatml.md) format is used:
`"<|im_start|>user\n{user prompt}<|im_end|>\n<|im_start|>assistant\n{Assistant answer}<|im_end|>\n"`
Multi-line:
```
<|im_start|>user
{user prompt}<|im_end|>
<|im_start|>assistant
{Assistant answer}<|im_end|>
```
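The template above can be assembled programmatically; a minimal sketch (the helper name `format_chatml` is illustrative, not part of the model's API):

```python
def format_chatml(user_prompt: str, assistant_answer: str = "") -> str:
    """Build a chatml-formatted prompt; leave assistant_answer empty at inference time."""
    text = f"<|im_start|>user\n{user_prompt}<|im_end|>\n<|im_start|>assistant\n"
    if assistant_answer:
        # Only used when constructing full training examples.
        text += f"{assistant_answer}<|im_end|>\n"
    return text

prompt = format_chatml("Write a haiku about falcons.")
print(prompt)
```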
|
KingKazma/cnn_dailymail_gpt2_lora_500_4_50000_8_e3_s6789_v4_l5_r2
|
KingKazma
| 2023-08-18T00:18:25Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-18T00:18:23Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
Mel-Iza0/Llama2-7B_ZeroShot-20K_classe_nenhuma_port
|
Mel-Iza0
| 2023-08-18T00:15:47Z | 11 | 0 |
peft
|
[
"peft",
"pytorch",
"llama",
"region:us"
] | null | 2023-08-16T13:43:56Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
Mel-Iza0/Llama2-7B_ZeroShot-20K_classe_other_port
|
Mel-Iza0
| 2023-08-18T00:14:11Z | 1 | 0 |
peft
|
[
"peft",
"pytorch",
"llama",
"region:us"
] | null | 2023-08-16T13:17:22Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
CyberHarem/battle_girl_pokemon
|
CyberHarem
| 2023-08-17T22:30:52Z | 0 | 1 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/battle_girl_pokemon",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-17T22:26:21Z |
---
license: mit
datasets:
- CyberHarem/battle_girl_pokemon
pipeline_tag: text-to-image
tags:
- art
---
# Lora of battle_girl_pokemon
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 1500, you need to download `1500/battle_girl_pokemon.pt` as the embedding and `1500/battle_girl_pokemon.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The trigger word is `battle_girl_pokemon`.**
These are available steps:
| Steps | pattern_1 | bikini | free | nude | Download |
|--------:|:----------------------------------------------------|:-------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|
| 1500 | [<NSFW, click to see>](1500/previews/pattern_1.png) | [<NSFW, click to see>](1500/previews/bikini.png) | [<NSFW, click to see>](1500/previews/free.png) | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/battle_girl_pokemon.zip) |
| 1400 | [<NSFW, click to see>](1400/previews/pattern_1.png) | [<NSFW, click to see>](1400/previews/bikini.png) | [<NSFW, click to see>](1400/previews/free.png) | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/battle_girl_pokemon.zip) |
| 1300 | [<NSFW, click to see>](1300/previews/pattern_1.png) | [<NSFW, click to see>](1300/previews/bikini.png) | [<NSFW, click to see>](1300/previews/free.png) | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/battle_girl_pokemon.zip) |
| 1200 | [<NSFW, click to see>](1200/previews/pattern_1.png) | [<NSFW, click to see>](1200/previews/bikini.png) | [<NSFW, click to see>](1200/previews/free.png) | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/battle_girl_pokemon.zip) |
| 1100 | [<NSFW, click to see>](1100/previews/pattern_1.png) | [<NSFW, click to see>](1100/previews/bikini.png) | [<NSFW, click to see>](1100/previews/free.png) | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/battle_girl_pokemon.zip) |
| 1000 | [<NSFW, click to see>](1000/previews/pattern_1.png) | [<NSFW, click to see>](1000/previews/bikini.png) | [<NSFW, click to see>](1000/previews/free.png) | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/battle_girl_pokemon.zip) |
| 900 | [<NSFW, click to see>](900/previews/pattern_1.png) | [<NSFW, click to see>](900/previews/bikini.png) | [<NSFW, click to see>](900/previews/free.png) | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/battle_girl_pokemon.zip) |
| 800 | [<NSFW, click to see>](800/previews/pattern_1.png) | [<NSFW, click to see>](800/previews/bikini.png) | [<NSFW, click to see>](800/previews/free.png) | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/battle_girl_pokemon.zip) |
| 700 | [<NSFW, click to see>](700/previews/pattern_1.png) | [<NSFW, click to see>](700/previews/bikini.png) | [<NSFW, click to see>](700/previews/free.png) | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/battle_girl_pokemon.zip) |
| 600 | [<NSFW, click to see>](600/previews/pattern_1.png) | [<NSFW, click to see>](600/previews/bikini.png) | [<NSFW, click to see>](600/previews/free.png) | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/battle_girl_pokemon.zip) |
| 500 | [<NSFW, click to see>](500/previews/pattern_1.png) | [<NSFW, click to see>](500/previews/bikini.png) | [<NSFW, click to see>](500/previews/free.png) | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/battle_girl_pokemon.zip) |
| 400 | [<NSFW, click to see>](400/previews/pattern_1.png) | [<NSFW, click to see>](400/previews/bikini.png) | [<NSFW, click to see>](400/previews/free.png) | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/battle_girl_pokemon.zip) |
| 300 | [<NSFW, click to see>](300/previews/pattern_1.png) | [<NSFW, click to see>](300/previews/bikini.png) | [<NSFW, click to see>](300/previews/free.png) | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/battle_girl_pokemon.zip) |
| 200 | [<NSFW, click to see>](200/previews/pattern_1.png) | [<NSFW, click to see>](200/previews/bikini.png) | [<NSFW, click to see>](200/previews/free.png) | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/battle_girl_pokemon.zip) |
| 100 | [<NSFW, click to see>](100/previews/pattern_1.png) | [<NSFW, click to see>](100/previews/bikini.png) | [<NSFW, click to see>](100/previews/free.png) | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/battle_girl_pokemon.zip) |
|
qwe1256/LLM-test
|
qwe1256
| 2023-08-17T22:23:05Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-17T22:23:02Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0
|
CyberHarem/azusa_pokemon
|
CyberHarem
| 2023-08-17T22:08:52Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/azusa_pokemon",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-17T22:05:24Z |
---
license: mit
datasets:
- CyberHarem/azusa_pokemon
pipeline_tag: text-to-image
tags:
- art
---
# Lora of azusa_pokemon
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 1500, you need to download `1500/azusa_pokemon.pt` as the embedding and `1500/azusa_pokemon.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The trigger word is `azusa_pokemon`.**
These are available steps:
| Steps | pattern_1 | bikini | free | nude | Download |
|--------:|:----------------------------------------------------|:-----------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------|
| 1500 | [<NSFW, click to see>](1500/previews/pattern_1.png) |  | [<NSFW, click to see>](1500/previews/free.png) | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/azusa_pokemon.zip) |
| 1400 | [<NSFW, click to see>](1400/previews/pattern_1.png) |  | [<NSFW, click to see>](1400/previews/free.png) | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/azusa_pokemon.zip) |
| 1300 | [<NSFW, click to see>](1300/previews/pattern_1.png) |  | [<NSFW, click to see>](1300/previews/free.png) | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/azusa_pokemon.zip) |
| 1200 | [<NSFW, click to see>](1200/previews/pattern_1.png) |  | [<NSFW, click to see>](1200/previews/free.png) | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/azusa_pokemon.zip) |
| 1100 | [<NSFW, click to see>](1100/previews/pattern_1.png) |  | [<NSFW, click to see>](1100/previews/free.png) | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/azusa_pokemon.zip) |
| 1000 | [<NSFW, click to see>](1000/previews/pattern_1.png) |  | [<NSFW, click to see>](1000/previews/free.png) | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/azusa_pokemon.zip) |
| 900 | [<NSFW, click to see>](900/previews/pattern_1.png) |  | [<NSFW, click to see>](900/previews/free.png) | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/azusa_pokemon.zip) |
| 800 | [<NSFW, click to see>](800/previews/pattern_1.png) |  | [<NSFW, click to see>](800/previews/free.png) | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/azusa_pokemon.zip) |
| 700 | [<NSFW, click to see>](700/previews/pattern_1.png) |  | [<NSFW, click to see>](700/previews/free.png) | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/azusa_pokemon.zip) |
| 600 | [<NSFW, click to see>](600/previews/pattern_1.png) |  | [<NSFW, click to see>](600/previews/free.png) | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/azusa_pokemon.zip) |
| 500 | [<NSFW, click to see>](500/previews/pattern_1.png) |  | [<NSFW, click to see>](500/previews/free.png) | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/azusa_pokemon.zip) |
| 400 | [<NSFW, click to see>](400/previews/pattern_1.png) |  | [<NSFW, click to see>](400/previews/free.png) | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/azusa_pokemon.zip) |
| 300 | [<NSFW, click to see>](300/previews/pattern_1.png) |  | [<NSFW, click to see>](300/previews/free.png) | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/azusa_pokemon.zip) |
| 200 | [<NSFW, click to see>](200/previews/pattern_1.png) |  | [<NSFW, click to see>](200/previews/free.png) | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/azusa_pokemon.zip) |
| 100 | [<NSFW, click to see>](100/previews/pattern_1.png) |  | [<NSFW, click to see>](100/previews/free.png) | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/azusa_pokemon.zip) |
|
Birchlabs/flash_llama
|
Birchlabs
| 2023-08-17T22:02:04Z | 0 | 2 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2023-08-17T21:55:00Z |
---
license: apache-2.0
---
This repository houses a fork of [`togethercomputer/LLaMA-2-7B-32K`](https://huggingface.co/togethercomputer/LLaMA-2-7B-32K)'s [`modeling_flash_llama.py`](https://huggingface.co/togethercomputer/LLaMA-2-7B-32K/blob/main/modeling_flash_llama.py), with a [fix for padding of attention weights](https://huggingface.co/togethercomputer/LLaMA-2-7B-32K/discussions/17) merged into it.
|
OpenAssistant/pythia-12b-sft-v8-rlhf-2k-steps
|
OpenAssistant
| 2023-08-17T22:00:39Z | 1,467 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-10T21:11:23Z |
---
license: apache-2.0
---
# pythia-12b-sft-v8-rlhf-2k-steps
- sampling report: [2023-05-15_OpenAssistant_pythia-12b-sft-v8-rlhf-2k-steps_sampling_noprefix2.json](https://open-assistant.github.io/oasst-model-eval/?f=https%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Foasst-rl%2F2023-05-15_OpenAssistant_pythia-12b-sft-v8-rlhf-2k-steps_sampling_noprefix2.json)
|
tiantan32/t5-3b-samsum
|
tiantan32
| 2023-08-17T21:58:35Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-17T21:56:33Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
natsusakiyomi/IrisMix
|
natsusakiyomi
| 2023-08-17T21:33:05Z | 296 | 53 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"ja",
"en",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-04-28T12:12:24Z |
---
license: creativeml-openrail-m
language:
- ja
- en
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
library_name: diffusers
---
<div class="flex justify-center">
<div class="container p-0 w-100">
<img class="mt-0 object-cover rounded-t-lg w-100"
style="height: 320px;"
src="https://huggingface.co/natsusakiyomi/HimawariMixs/resolve/main/image/header1.jpeg"
width="100%"/>
<div class="flex px-4">
<div class="flex-auto">
<h1 class="mb-2 text-3xl font-bold leading-tight" style="color: rgb(255, 151, 0/var(--tw-text-opacity));">
IrisMixSeries
</h1>
<p class="mb-4 text-base text-neutral-600 dark:text-neutral-200">
彩度が高いVAE内蔵型かわいい系モデル<br>
Cute model with built-in VAE with high color saturation
</p>
</div>
<div>
<a
href="https://twitter.com/min__san"
class="mb-2 inline-block rounded px-6 py-2.5 text-white shadow-md"
style="background-color: #1da1f2">
<svg xmlns="http://www.w3.org/2000/svg" class="h-3.5 w-3.5" fill="currentColor" viewBox="0 0 24 24">
<path d="M24 4.557c-.883.392-1.832.656-2.828.775 1.017-.609 1.798-1.574 2.165-2.724-.951.564-2.005.974-3.127 1.195-.897-.957-2.178-1.555-3.594-1.555-3.179 0-5.515 2.966-4.797 6.045-4.091-.205-7.719-2.165-10.148-5.144-1.29 2.213-.669 5.108 1.523 6.574-.806-.026-1.566-.247-2.229-.616-.054 2.281 1.581 4.415 3.949 4.89-.693.188-1.452.232-2.224.084.626 1.956 2.444 3.379 4.6 3.419-2.07 1.623-4.678 2.348-7.29 2.04 2.179 1.397 4.768 2.212 7.548 2.212 9.142 0 14.307-7.721 13.995-14.646.962-.695 1.797-1.562 2.457-2.549z" />
</svg>
</a>
</div>
</div>
</div>
</div>
<hr class="my-6 h-0.5 border-t-0 opacity-100 dark:opacity-50" style="background-color: rgb(245 245 245/var(--tw-bg-opacity));">
<h3 id="IrisMix-v3" class="mt-0 text-2xl">
<code>IrisMix-v3</code> <small></small>
</h3>
<div>
    IrisMix-v2の改造モデル / A modified model based on IrisMix-v2<br>
    v3と比べ服装や表現方法などが増えました / It offers more outfits and expressive styles than before<br>
<h4>🖼️ 例 / Examples</h4>
<div class="container mx-auto px-2">
<div class="flex flex-wrap min-w-min items-baseline">
<div class="p-1 flex-1" style="width: 50%; min-width: 320px; flex-basis: 50%;">
<div class="flex-1">
<img
alt="gallery"
class="block h-full w-full rounded-t-lg object-contain object-center"
src="https://pbs.twimg.com/media/F2-RW2KboAAQwa8?format=jpg&name=large"
loading="lazy"
/>
</div>
<div class="w-full">
<pre class="w-full" style="white-space: pre-line;">
ぴこさん作 (Artwork by Pico)
<a href="https://twitter.com/pico_nano1" target="_blank">Pico's Twitter</a>
</pre>
</div>
</div>
<div class="p-1 flex-1" style="width: 50%; min-width: 320px; flex-basis: 50%;">
<div class="w-full">
<img
alt="gallery"
class="block h-full w-full rounded-t-lg object-contain object-center"
src="https://pbs.twimg.com/media/F0NpumWaUAAJnyj?format=jpg&name=large"
loading="lazy"
/>
</div>
<div class="w-full">
<pre class="w-full" style="white-space: pre-line;">
Steps: 40
Sampler: DPM++ SDE Karras
CFG scale: 9.5
Denoising strength: 0.6
Hires upscale: 2.5
Hires upscaler: Latent
</pre>
</div>
</div>
<div class="p-1 flex-1" style="width: 50%; min-width: 320px; flex-basis: 50%;">
<div class="flex-1">
<img
alt="gallery"
class="block h-full w-full rounded-t-lg object-contain object-center"
src="https://pbs.twimg.com/media/F0ABTHaakAAwgnf?format=jpg&name=large"
loading="lazy"
/>
</div>
<div class="w-full">
<pre class="w-full" style="white-space: pre-line;">
Steps: 40
Sampler: DPM++ SDE Karras
CFG scale: 9.5
Denoising strength: 0.6
Hires upscale: 2.5
Hires upscaler: Latent
</pre>
</div>
</div>
<div class="p-1 flex-1" style="width: 50%; min-width: 320px; flex-basis: 50%;">
<div class="w-full">
<img
alt="gallery"
class="block h-full w-full rounded-t-lg object-contain object-center"
src="https://pbs.twimg.com/media/F0NotY9aIAcirK3?format=jpg&name=medium"
loading="lazy"
/>
</div>
<div class="w-full">
<pre class="w-full" style="white-space: pre-line;">
Steps: 40
Sampler: DPM++ SDE Karras
CFG scale: 9.5
Denoising strength: 0.6
Hires upscale: 2.5
Hires upscaler: Latent
</pre>
</div>
</div>
</div>
</div>
</div>
<h4>📄 ライセンス / License</h4>
<div class="px-2">
<table class="table-fixed border mt-0 text-xs">
<tbody>
<tr>
<td class="px-4 text-base" colspan="2">
<a href="https://huggingface.co/spaces/CompVis/stable-diffusion-license">
修正 CreativeML OpenRAIL-M ライセンス / Modified CreativeML OpenRAIL-M license
</a>
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルのクレジットを入れずに使用する<br>
Use the model without crediting the creator
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-red-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" />
</svg>
</span>
</td>
<td>
このモデルで生成した画像を商用利用する<br>
Sell images they generate
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-red-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" />
</svg>
</span>
</td>
<td>
          このモデルを商用の画像生成サービスで利用する<br>
Run on services that generate images for money
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルを使用したマージモデルを共有する<br>
Share merges using this model
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-red-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" />
</svg>
</span>
</td>
<td>
          このモデル、またはこのモデルをマージしたモデルを販売する<br>
Sell this model or merges using this model
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-red-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" />
</svg>
</span>
</td>
<td>
          このモデルをマージしたモデルに異なる権限を設定する<br>
Have different permissions when sharing merges
</td>
</tr>
</tbody>
</table>
</div>
<hr class="my-6 h-0.5 border-t-0 opacity-100 dark:opacity-50" style="background-color: rgb(245 245 245/var(--tw-bg-opacity));">
<h3 id="IrisMix-v2" class="mt-0 text-2xl">
<code>IrisMix-v2</code> <small></small>
</h3>
<div>
  IrisMix-v2は過去のIrisMix-v1等とは全く中身が違い一からマージしたモデル<br>
  IrisMix-v2 is entirely different in content from the earlier IrisMix-v1 and was merged from scratch
<h4>🖼️ 例 / Examples</h4>
<div class="container mx-auto px-2">
<div class="flex flex-wrap min-w-min items-baseline">
<div class="p-1 flex-1" style="width: 50%; min-width: 320px; flex-basis: 50%;">
<div class="w-full">
<img
alt="gallery"
class="block h-full w-full rounded-t-lg object-contain object-center"
src="https://pbs.twimg.com/media/F2miVUkaUAAvd_K?format=jpg&name=large"
loading="lazy"
/>
</div>
<div class="w-full">
<pre class="w-full" style="white-space: pre-line;">
animel7316さん作 (Artwork by animel7316)
<a href="https://twitter.com/AiGuradoni46892">guradoni AI's Twitter</a>
</pre>
</div>
</div>
<div class="p-1 flex-1" style="width: 50%; min-width: 320px; flex-basis: 50%;">
<div class="flex-1">
<img
alt="gallery"
class="block h-full w-full rounded-t-lg object-contain object-center"
src="https://pbs.twimg.com/media/Fy20xmsaYAEDOuZ?format=jpg&name=medium"
loading="lazy"
/>
</div>
<div class="w-full">
<pre class="w-full" style="white-space: pre-line;">
Steps: 40
Sampler: DPM++ SDE Karras
CFG scale: 9.5
Denoising strength: 0.6
Hires upscale: 2.5
Hires upscaler: Latent
</pre>
</div>
</div>
<div class="p-1 flex-1" style="width: 50%; min-width: 320px; flex-basis: 50%;">
<div class="flex-1">
<img
alt="gallery"
class="block h-full w-full rounded-t-lg object-contain object-center"
src="https://pbs.twimg.com/media/Fy5O9SYaYAAYBpi?format=jpg&name=large"
loading="lazy"
/>
</div>
<div class="w-full">
<pre class="w-full" style="white-space: pre-line;">
Steps: 40
Sampler: DPM++ SDE Karras
CFG scale: 9.5
Denoising strength: 0.6
Hires upscale: 2.5
Hires upscaler: Latent
</pre>
</div>
</div>
<div class="p-1 flex-1" style="width: 50%; min-width: 320px; flex-basis: 50%;">
<div class="w-full">
<img
alt="gallery"
class="block h-full w-full rounded-t-lg object-contain object-center"
src="https://pbs.twimg.com/media/Fz2HINsaIAAd8xU?format=jpg&name=large"
loading="lazy"
/>
</div>
<div class="w-full">
<pre class="w-full" style="white-space: pre-line;">
Steps: 40
Sampler: DPM++ SDE Karras
CFG scale: 9.5
Denoising strength: 0.6
Hires upscale: 2.5
Hires upscaler: Latent
</pre>
</div>
</div>
</div>
</div>
</div>
<h4>📄 ライセンス / License</h4>
<div class="px-2">
<table class="table-fixed border mt-0 text-xs">
<tbody>
<tr>
<td class="px-4 text-base" colspan="2">
<a href="https://huggingface.co/spaces/CompVis/stable-diffusion-license">
修正 CreativeML OpenRAIL-M ライセンス / Modified CreativeML OpenRAIL-M license
</a>
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルのクレジットを入れずに使用する<br>
Use the model without crediting the creator
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-red-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" />
</svg>
</span>
</td>
<td>
このモデルで生成した画像を商用利用する<br>
Sell images they generate
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-red-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" />
</svg>
</span>
</td>
<td>
          このモデルを商用の画像生成サービスで利用する<br>
Run on services that generate images for money
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルを使用したマージモデルを共有する<br>
Share merges using this model
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-red-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" />
</svg>
</span>
</td>
<td>
          このモデル、またはこのモデルをマージしたモデルを販売する<br>
Sell this model or merges using this model
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-red-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" />
</svg>
</span>
</td>
<td>
          このモデルをマージしたモデルに異なる権限を設定する<br>
Have different permissions when sharing merges
</td>
</tr>
</tbody>
</table>
</div>
<hr class="my-6 h-0.5 border-t-0 opacity-100 dark:opacity-50" style="background-color: rgb(245 245 245/var(--tw-bg-opacity));">
<h3 id="IrisMix-v1" class="mt-0 text-2xl">
<code>IrisMix-v1</code> <small></small>
</h3>
<div>
  配布中止モデル / Distribution of this model has been discontinued
<h4>📄 ライセンス / License</h4>
<div class="px-2">
<table class="table-fixed border mt-0 text-xs">
<tbody>
<tr>
<td class="px-4 text-base" colspan="2">
<a href="https://huggingface.co/spaces/CompVis/stable-diffusion-license">
修正 CreativeML OpenRAIL-M ライセンス / Modified CreativeML OpenRAIL-M license
</a>
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルのクレジットを入れずに使用する<br>
Use the model without crediting the creator
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルで生成した画像を商用利用する<br>
Sell images they generate
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-red-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" />
</svg>
</span>
</td>
<td>
          このモデルを商用の画像生成サービスで利用する<br>
Run on services that generate images for money
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルを使用したマージモデルを共有する<br>
Share merges using this model
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-red-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" />
</svg>
</span>
</td>
<td>
          このモデル、またはこのモデルをマージしたモデルを販売する<br>
Sell this model or merges using this model
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-red-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" />
</svg>
</span>
</td>
<td>
          このモデルをマージしたモデルに異なる権限を設定する<br>
Have different permissions when sharing merges
</td>
</tr>
</tbody>
</table>
</div>
---
# 作者&連絡先 / Author & Contact
Twitter: [@min__san](https://twitter.com/min__san)
|
BenjaminOcampo/model-contrastive-bert__trained-in-ishate__seed-1
|
BenjaminOcampo
| 2023-08-17T21:28:46Z | 3 | 0 |
transformers
|
[
"transformers",
"bert",
"text-classification",
"en",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-17T21:27:56Z |
---
language: en
---
# Model Card for BenjaminOcampo/model-contrastive-bert__trained-in-ishate__seed-1
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** BenjaminOcampo
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** en
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/huggingface/huggingface_hub
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
### How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
dana11235/ppo-LunarLander-v2
|
dana11235
| 2023-08-17T21:25:58Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-09T04:07:57Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 278.50 +/- 20.61
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal, hedged example (the checkpoint filename is an assumption, following the usual `huggingface_sb3` `<algo>-<env>.zip` naming convention):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename assumed to follow the standard <algo>-<env>.zip convention.
checkpoint = load_from_hub("dana11235/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Valkea/Llama-2-7b-hf-hearts-addict
|
Valkea
| 2023-08-17T21:22:36Z | 10 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:databricks/databricks-dolly-15k",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-08-09T14:16:27Z |
---
license: llama2
datasets:
- databricks/databricks-dolly-15k
language:
- en
inference: false
---
This model is a modest attempt to gain experience in fine-tuning a small LLM on a T4 GPU.
"hearts-addict" is a charming model fine-tuned to sprinkle heart emoticons between every single word! 💖🌟 You might wonder, why hearts? ❤️ Admittedly, this whimsical touch is perfectly frivolous, but how lovely! 💕 Seriously though, my primary goal was to practice LLM fine-tuning in my spare time and to gauge training success at a glance. Those endearing hearts turned into instant indicators of success! 🎯✨
I crafted the dataset by applying these two simple steps to every sample:
1. select a random heart design from this list: [♡, ♥, ❤, 💔, 💝, 💓, 💕]
2. insert the selected emoticon between all the words of the response.
Voilà! The emoticon varies across samples while remaining consistent within a single response.
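The two steps above can be sketched in a few lines of Python (the `heartify` helper name is illustrative; the actual preprocessing script was not published):

```python
import random

HEARTS = ["♡", "♥", "❤", "💔", "💝", "💓", "💕"]

def heartify(response: str) -> str:
    """Pick one random heart and insert it between every pair of words,
    keeping the same heart throughout the whole response."""
    heart = random.choice(HEARTS)
    return f" {heart} ".join(response.split())
```

For example, `heartify("I love Paris")` might return `"I 💕 love 💕 Paris"`, with the same heart repeated between every word.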
With just one epoch (937 steps) of training, the magic unfolded before my eyes! 🪄✨ Now, whenever I ask this model anything on any subject (without prompting it to add hearts), it splendidly replies with a randomly chosen heart ❤ emoticon sprinkled between words, and it keeps that very same heart throughout the whole response.
With my small LLM fine-tuning notebook validated on a T4 GPU, I'm ready to venture into more substantial and practical applications! (with more advanced evaluation metrics, obviously... 📊)
|
felixb85/a2c-PandaReachDense-v3
|
felixb85
| 2023-08-17T21:09:41Z | 3 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"arxiv:2106.13687",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-12T07:16:05Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.16 +/- 0.09
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal, hedged example (the checkpoint filename is an assumption, following the usual `huggingface_sb3` `<algo>-<env>.zip` naming convention):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Filename assumed to follow the standard <algo>-<env>.zip convention.
checkpoint = load_from_hub("felixb85/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
Panda Gym environments: [arxiv.org/abs/2106.13687](https://arxiv.org/abs/2106.13687)
|
RJZauner/layoutml_funsd_rjz
|
RJZauner
| 2023-08-17T21:08:37Z | 81 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"layoutlm",
"token-classification",
"generated_from_trainer",
"dataset:funsd",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-03-22T22:37:01Z |
---
tags:
- generated_from_trainer
datasets:
- funsd
model-index:
- name: layoutmlv2_funsd_rjz
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutmlv2_funsd_rjz
This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on the funsd dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9422
- Answer: {'precision': 0.7382857142857143, 'recall': 0.7985166872682324, 'f1': 0.7672209026128266, 'number': 809}
- Header: {'precision': 0.42758620689655175, 'recall': 0.5210084033613446, 'f1': 0.4696969696969697, 'number': 119}
- Question: {'precision': 0.8075160403299725, 'recall': 0.8272300469483568, 'f1': 0.8172541743970314, 'number': 1065}
- Overall Precision: 0.7527
- Overall Recall: 0.7973
- Overall F1: 0.7744
- Overall Accuracy: 0.8096
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Answer | Header | Question | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.3143 | 1.0 | 10 | 0.7685 | {'precision': 0.7, 'recall': 0.7700865265760197, 'f1': 0.7333725721012359, 'number': 809} | {'precision': 0.2986111111111111, 'recall': 0.36134453781512604, 'f1': 0.32699619771863114, 'number': 119} | {'precision': 0.7693032015065914, 'recall': 0.7671361502347418, 'f1': 0.768218147625764, 'number': 1065} | 0.7075 | 0.7441 | 0.7254 | 0.7924 |
| 0.2816 | 2.0 | 20 | 0.7829 | {'precision': 0.7162315550510783, 'recall': 0.7799752781211372, 'f1': 0.7467455621301775, 'number': 809} | {'precision': 0.33152173913043476, 'recall': 0.5126050420168067, 'f1': 0.40264026402640263, 'number': 119} | {'precision': 0.7855839416058394, 'recall': 0.8084507042253521, 'f1': 0.7968533086534013, 'number': 1065} | 0.7186 | 0.7792 | 0.7477 | 0.7976 |
| 0.2216 | 3.0 | 30 | 0.7825 | {'precision': 0.7016806722689075, 'recall': 0.8257107540173053, 'f1': 0.7586598523566157, 'number': 809} | {'precision': 0.35570469798657717, 'recall': 0.44537815126050423, 'f1': 0.39552238805970147, 'number': 119} | {'precision': 0.7851985559566786, 'recall': 0.8169014084507042, 'f1': 0.8007363092498849, 'number': 1065} | 0.7202 | 0.7983 | 0.7573 | 0.7942 |
| 0.1973 | 4.0 | 40 | 0.7683 | {'precision': 0.7095032397408207, 'recall': 0.8121137206427689, 'f1': 0.7573487031700288, 'number': 809} | {'precision': 0.3968253968253968, 'recall': 0.42016806722689076, 'f1': 0.40816326530612246, 'number': 119} | {'precision': 0.802367941712204, 'recall': 0.8272300469483568, 'f1': 0.8146093388811835, 'number': 1065} | 0.7386 | 0.7968 | 0.7666 | 0.8143 |
| 0.1671 | 5.0 | 50 | 0.7918 | {'precision': 0.7269585253456221, 'recall': 0.7799752781211372, 'f1': 0.7525342874180083, 'number': 809} | {'precision': 0.4076923076923077, 'recall': 0.44537815126050423, 'f1': 0.42570281124497994, 'number': 119} | {'precision': 0.7848888888888889, 'recall': 0.8291079812206573, 'f1': 0.8063926940639269, 'number': 1065} | 0.7381 | 0.7863 | 0.7614 | 0.8139 |
| 0.1342 | 6.0 | 60 | 0.8295 | {'precision': 0.7234972677595628, 'recall': 0.8182941903584673, 'f1': 0.7679814385150812, 'number': 809} | {'precision': 0.37857142857142856, 'recall': 0.44537815126050423, 'f1': 0.4092664092664093, 'number': 119} | {'precision': 0.7939339875111507, 'recall': 0.8356807511737089, 'f1': 0.8142726440988106, 'number': 1065} | 0.7376 | 0.8053 | 0.7700 | 0.8120 |
| 0.1212 | 7.0 | 70 | 0.8632 | {'precision': 0.7337883959044369, 'recall': 0.7972805933250927, 'f1': 0.764218009478673, 'number': 809} | {'precision': 0.4084507042253521, 'recall': 0.48739495798319327, 'f1': 0.4444444444444445, 'number': 119} | {'precision': 0.8137347130761995, 'recall': 0.812206572769953, 'f1': 0.8129699248120301, 'number': 1065} | 0.7524 | 0.7868 | 0.7692 | 0.8082 |
| 0.1131 | 8.0 | 80 | 0.9081 | {'precision': 0.7244785949506037, 'recall': 0.8158220024721878, 'f1': 0.7674418604651163, 'number': 809} | {'precision': 0.40131578947368424, 'recall': 0.5126050420168067, 'f1': 0.4501845018450184, 'number': 119} | {'precision': 0.8097876269621422, 'recall': 0.8234741784037559, 'f1': 0.8165735567970206, 'number': 1065} | 0.7446 | 0.8018 | 0.7722 | 0.8011 |
| 0.1043 | 9.0 | 90 | 0.9021 | {'precision': 0.7308132875143184, 'recall': 0.788627935723115, 'f1': 0.7586206896551724, 'number': 809} | {'precision': 0.425531914893617, 'recall': 0.5042016806722689, 'f1': 0.4615384615384615, 'number': 119} | {'precision': 0.7914818101153505, 'recall': 0.8375586854460094, 'f1': 0.8138686131386863, 'number': 1065} | 0.7426 | 0.7978 | 0.7692 | 0.8075 |
| 0.0884 | 10.0 | 100 | 0.9126 | {'precision': 0.7231450719822813, 'recall': 0.8071693448702101, 'f1': 0.7628504672897196, 'number': 809} | {'precision': 0.40939597315436244, 'recall': 0.5126050420168067, 'f1': 0.4552238805970149, 'number': 119} | {'precision': 0.819718309859155, 'recall': 0.819718309859155, 'f1': 0.819718309859155, 'number': 1065} | 0.7496 | 0.7963 | 0.7723 | 0.8094 |
| 0.084 | 11.0 | 110 | 0.9354 | {'precision': 0.7502944640753828, 'recall': 0.7873918417799752, 'f1': 0.7683956574185766, 'number': 809} | {'precision': 0.4140127388535032, 'recall': 0.5462184873949579, 'f1': 0.47101449275362317, 'number': 119} | {'precision': 0.7946428571428571, 'recall': 0.8356807511737089, 'f1': 0.8146453089244852, 'number': 1065} | 0.7488 | 0.7988 | 0.7730 | 0.8064 |
| 0.0794 | 12.0 | 120 | 0.9323 | {'precision': 0.7244785949506037, 'recall': 0.8158220024721878, 'f1': 0.7674418604651163, 'number': 809} | {'precision': 0.4172661870503597, 'recall': 0.48739495798319327, 'f1': 0.4496124031007752, 'number': 119} | {'precision': 0.8152985074626866, 'recall': 0.8206572769953052, 'f1': 0.8179691155825924, 'number': 1065} | 0.7502 | 0.7988 | 0.7738 | 0.8094 |
| 0.0803 | 13.0 | 130 | 0.9429 | {'precision': 0.7401129943502824, 'recall': 0.8096415327564895, 'f1': 0.7733175914994096, 'number': 809} | {'precision': 0.42592592592592593, 'recall': 0.5798319327731093, 'f1': 0.49110320284697506, 'number': 119} | {'precision': 0.8110599078341014, 'recall': 0.8262910798122066, 'f1': 0.8186046511627907, 'number': 1065} | 0.7523 | 0.8048 | 0.7777 | 0.8085 |
| 0.0754 | 14.0 | 140 | 0.9393 | {'precision': 0.7425629290617849, 'recall': 0.8022249690976514, 'f1': 0.7712418300653594, 'number': 809} | {'precision': 0.4225352112676056, 'recall': 0.5042016806722689, 'f1': 0.45977011494252873, 'number': 119} | {'precision': 0.8018099547511313, 'recall': 0.831924882629108, 'f1': 0.816589861751152, 'number': 1065} | 0.7520 | 0.8003 | 0.7754 | 0.8106 |
| 0.0732 | 15.0 | 150 | 0.9422 | {'precision': 0.7382857142857143, 'recall': 0.7985166872682324, 'f1': 0.7672209026128266, 'number': 809} | {'precision': 0.42758620689655175, 'recall': 0.5210084033613446, 'f1': 0.4696969696969697, 'number': 119} | {'precision': 0.8075160403299725, 'recall': 0.8272300469483568, 'f1': 0.8172541743970314, 'number': 1065} | 0.7527 | 0.7973 | 0.7744 | 0.8096 |
### Framework versions
- Transformers 4.27.2
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
Pepituwu/Sardoche
|
Pepituwu
| 2023-08-17T21:06:57Z | 0 | 1 | null |
[
"fr",
"license:apache-2.0",
"region:us"
] | null | 2023-08-07T18:14:08Z |
---
license: apache-2.0
language:
- fr
---
|
CyberHarem/aloe_pokemon
|
CyberHarem
| 2023-08-17T21:06:09Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/aloe_pokemon",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-17T21:01:01Z |
---
license: mit
datasets:
- CyberHarem/aloe_pokemon
pipeline_tag: text-to-image
tags:
- art
---
# Lora of aloe_pokemon
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 1500, you need to download `1500/aloe_pokemon.pt` as the embedding and `1500/aloe_pokemon.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The trigger word is `aloe_pokemon`.**
These are available steps:
| Steps | bikini | free | nude | Download |
|--------:|:-------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:----------------------------------|
| 1500 | [<NSFW, click to see>](1500/previews/bikini.png) | [<NSFW, click to see>](1500/previews/free.png) | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/aloe_pokemon.zip) |
| 1400 | [<NSFW, click to see>](1400/previews/bikini.png) | [<NSFW, click to see>](1400/previews/free.png) | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/aloe_pokemon.zip) |
| 1300 | [<NSFW, click to see>](1300/previews/bikini.png) | [<NSFW, click to see>](1300/previews/free.png) | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/aloe_pokemon.zip) |
| 1200 | [<NSFW, click to see>](1200/previews/bikini.png) | [<NSFW, click to see>](1200/previews/free.png) | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/aloe_pokemon.zip) |
| 1100 | [<NSFW, click to see>](1100/previews/bikini.png) | [<NSFW, click to see>](1100/previews/free.png) | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/aloe_pokemon.zip) |
| 1000 | [<NSFW, click to see>](1000/previews/bikini.png) | [<NSFW, click to see>](1000/previews/free.png) | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/aloe_pokemon.zip) |
| 900 | [<NSFW, click to see>](900/previews/bikini.png) | [<NSFW, click to see>](900/previews/free.png) | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/aloe_pokemon.zip) |
| 800 | [<NSFW, click to see>](800/previews/bikini.png) | [<NSFW, click to see>](800/previews/free.png) | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/aloe_pokemon.zip) |
| 700 | [<NSFW, click to see>](700/previews/bikini.png) | [<NSFW, click to see>](700/previews/free.png) | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/aloe_pokemon.zip) |
| 600 | [<NSFW, click to see>](600/previews/bikini.png) | [<NSFW, click to see>](600/previews/free.png) | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/aloe_pokemon.zip) |
| 500 | [<NSFW, click to see>](500/previews/bikini.png) | [<NSFW, click to see>](500/previews/free.png) | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/aloe_pokemon.zip) |
| 400 | [<NSFW, click to see>](400/previews/bikini.png) | [<NSFW, click to see>](400/previews/free.png) | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/aloe_pokemon.zip) |
| 300 | [<NSFW, click to see>](300/previews/bikini.png) | [<NSFW, click to see>](300/previews/free.png) | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/aloe_pokemon.zip) |
| 200 | [<NSFW, click to see>](200/previews/bikini.png) | [<NSFW, click to see>](200/previews/free.png) | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/aloe_pokemon.zip) |
| 100 | [<NSFW, click to see>](100/previews/bikini.png) | [<NSFW, click to see>](100/previews/free.png) | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/aloe_pokemon.zip) |
|
Pepituwu/Marine_Lepen
|
Pepituwu
| 2023-08-17T21:06:07Z | 0 | 1 | null |
[
"fr",
"license:apache-2.0",
"region:us"
] | null | 2023-08-11T18:49:22Z |
---
license: apache-2.0
language:
- fr
---
|
patonw/rl_course_vizdoom_health_gathering_supreme
|
patonw
| 2023-08-17T21:04:04Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-17T19:52:54Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 18.12 +/- 4.10
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r patonw/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may need to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the number of steps at which it concluded.
|
Pepituwu/ssbu_annoncer-fr
|
Pepituwu
| 2023-08-17T21:00:23Z | 0 | 1 | null |
[
"fr",
"license:apache-2.0",
"region:us"
] | null | 2023-08-14T18:40:24Z |
---
license: apache-2.0
language:
- fr
---
|
Yntec/MangledMerge3_768
|
Yntec
| 2023-08-17T20:35:29Z | 381 | 4 |
diffusers
|
[
"diffusers",
"safetensors",
"Anime",
"Art",
"Realistic",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"pmango300574",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-15T22:30:49Z |
---
license: creativeml-openrail-m
language:
- en
tags:
- Anime
- Art
- Realistic
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
- pmango300574
pipeline_tag: text-to-image
---
# Mangled Merge 3
This is the 768-resolution version of this model, provided for the inference API.
Samples and prompts by digiplay:
girl hold several small bottle of milk in chest,anime ,looking at viewer ,in garden,picnic ,honey color hair,best quality ,8k,blue dress,

girl hold several small bottle of milk in chest,anime ,looking at viewer ,in garden,picnic ,honey color hair,two ponytails,best quality ,8k,blue dress,milking cow, :D

Sample image and prompt:

pretty CUTE girl on a chair of ketchup bottles, beechwood materials. DETAILED CHIBI, Spoon, bedroom, bottle interior, Greatly drawn face, detailed hair, Magazine, iconic, 1940, from the movie, Cartoon, sharp focus. traditional atmospheric on canvas by ROSSDRAWS and Clay Mann and artgerm and leyendecker. Graceful. Crisp,
Original page:
https://civitai.com/models/5395?modelVersionId=93316
More samples by digiplay:





|
VK246/IC_ver6e_coco_swin_gpt2_50Apc_1e
|
VK246
| 2023-08-17T20:20:40Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:coco",
"base_model:VK246/IC_ver6d_coco_swin_gpt2_50Bpc_1e",
"base_model:finetune:VK246/IC_ver6d_coco_swin_gpt2_50Bpc_1e",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2023-08-17T17:22:20Z |
---
base_model: VK246/IC_ver6d_coco_swin_gpt2_50Bpc_1e
tags:
- generated_from_trainer
datasets:
- coco
metrics:
- rouge
model-index:
- name: IC_ver6e_coco_swin_gpt2_50Apc_1e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IC_ver6e_coco_swin_gpt2_50Apc_1e
This model is a fine-tuned version of [VK246/IC_ver6d_coco_swin_gpt2_50Bpc_1e](https://huggingface.co/VK246/IC_ver6d_coco_swin_gpt2_50Bpc_1e) on the coco dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7783
- Cider: 19.1116
- Rouge1: 42.2076
- Rouge2: 16.6791
- Rougel: 38.4352
- Rougelsum: 38.4324
- Bleu-1: 42.9768
- Bleu-2: 25.0535
- Bleu-3: 15.8932
- Bleu-4: 10.5581
- Gen Len: 11.2806
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cider | Rouge1 | Rouge2 | Rougel | Rougelsum | Bleu-1 | Bleu-2 | Bleu-3 | Bleu-4 | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:-------:|:---------:|:-------:|:-------:|:-------:|:-------:|:-------:|
| 0.7299 | 0.17 | 500 | 0.8169 | 15.1223 | 40.4746 | 15.1013 | 36.817 | 36.8166 | 41.7335 | 23.5713 | 14.621 | 9.566 | 11.2806 |
| 0.7243 | 0.34 | 1000 | 0.8063 | 15.7288 | 41.2081 | 15.8926 | 37.4018 | 37.4016 | 42.2656 | 24.2595 | 15.2602 | 10.0788 | 11.2806 |
| 0.7396 | 0.51 | 1500 | 0.7999 | 15.5164 | 41.6231 | 16.1665 | 38.0103 | 38.0119 | 42.0958 | 24.3223 | 15.2851 | 10.0869 | 11.2806 |
| 0.7507 | 0.68 | 2000 | 0.7879 | 15.3421 | 41.9871 | 16.4909 | 38.2491 | 38.2515 | 42.6606 | 24.7464 | 15.6329 | 10.3731 | 11.2806 |
| 0.7712 | 0.85 | 2500 | 0.7820 | 11.751 | 41.9906 | 16.5153 | 38.2624 | 38.2634 | 42.8539 | 24.8663 | 15.7151 | 10.3989 | 11.2806 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
CyberHarem/araragi_pokemon
|
CyberHarem
| 2023-08-17T20:14:21Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/araragi_pokemon",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-17T20:09:57Z |
---
license: mit
datasets:
- CyberHarem/araragi_pokemon
pipeline_tag: text-to-image
tags:
- art
---
# Lora of araragi_pokemon
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 1500, you need to download `1500/araragi_pokemon.pt` as the embedding and `1500/araragi_pokemon.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The trigger word is `araragi_pokemon`.**
These are available steps:
| Steps | pattern_1 | pattern_2 | bikini | free | nude | Download |
|--------:|:----------------------------------------------------|:----------------------------------------------------|:-----------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------|
| 1500 | [<NSFW, click to see>](1500/previews/pattern_1.png) | [<NSFW, click to see>](1500/previews/pattern_2.png) |  | [<NSFW, click to see>](1500/previews/free.png) | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/araragi_pokemon.zip) |
| 1400 | [<NSFW, click to see>](1400/previews/pattern_1.png) | [<NSFW, click to see>](1400/previews/pattern_2.png) |  | [<NSFW, click to see>](1400/previews/free.png) | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/araragi_pokemon.zip) |
| 1300 | [<NSFW, click to see>](1300/previews/pattern_1.png) | [<NSFW, click to see>](1300/previews/pattern_2.png) |  | [<NSFW, click to see>](1300/previews/free.png) | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/araragi_pokemon.zip) |
| 1200 | [<NSFW, click to see>](1200/previews/pattern_1.png) | [<NSFW, click to see>](1200/previews/pattern_2.png) |  | [<NSFW, click to see>](1200/previews/free.png) | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/araragi_pokemon.zip) |
| 1100 | [<NSFW, click to see>](1100/previews/pattern_1.png) | [<NSFW, click to see>](1100/previews/pattern_2.png) |  | [<NSFW, click to see>](1100/previews/free.png) | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/araragi_pokemon.zip) |
| 1000 | [<NSFW, click to see>](1000/previews/pattern_1.png) | [<NSFW, click to see>](1000/previews/pattern_2.png) |  | [<NSFW, click to see>](1000/previews/free.png) | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/araragi_pokemon.zip) |
| 900 | [<NSFW, click to see>](900/previews/pattern_1.png) | [<NSFW, click to see>](900/previews/pattern_2.png) |  | [<NSFW, click to see>](900/previews/free.png) | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/araragi_pokemon.zip) |
| 800 | [<NSFW, click to see>](800/previews/pattern_1.png) | [<NSFW, click to see>](800/previews/pattern_2.png) |  | [<NSFW, click to see>](800/previews/free.png) | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/araragi_pokemon.zip) |
| 700 | [<NSFW, click to see>](700/previews/pattern_1.png) | [<NSFW, click to see>](700/previews/pattern_2.png) |  | [<NSFW, click to see>](700/previews/free.png) | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/araragi_pokemon.zip) |
| 600 | [<NSFW, click to see>](600/previews/pattern_1.png) | [<NSFW, click to see>](600/previews/pattern_2.png) |  | [<NSFW, click to see>](600/previews/free.png) | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/araragi_pokemon.zip) |
| 500 | [<NSFW, click to see>](500/previews/pattern_1.png) | [<NSFW, click to see>](500/previews/pattern_2.png) |  | [<NSFW, click to see>](500/previews/free.png) | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/araragi_pokemon.zip) |
| 400 | [<NSFW, click to see>](400/previews/pattern_1.png) | [<NSFW, click to see>](400/previews/pattern_2.png) |  | [<NSFW, click to see>](400/previews/free.png) | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/araragi_pokemon.zip) |
| 300 | [<NSFW, click to see>](300/previews/pattern_1.png) | [<NSFW, click to see>](300/previews/pattern_2.png) |  | [<NSFW, click to see>](300/previews/free.png) | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/araragi_pokemon.zip) |
| 200 | [<NSFW, click to see>](200/previews/pattern_1.png) | [<NSFW, click to see>](200/previews/pattern_2.png) |  | [<NSFW, click to see>](200/previews/free.png) | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/araragi_pokemon.zip) |
| 100 | [<NSFW, click to see>](100/previews/pattern_1.png) | [<NSFW, click to see>](100/previews/pattern_2.png) |  | [<NSFW, click to see>](100/previews/free.png) | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/araragi_pokemon.zip) |
|
bjarlestam/a2c-PandaReachDense-v2
|
bjarlestam
| 2023-08-17T20:14:09Z | 6 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"arxiv:2106.13687",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-24T19:45:49Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.92 +/- 0.63
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `huggingface_sb3` naming convention):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename assumed from the usual huggingface_sb3 naming convention.
checkpoint = load_from_hub("bjarlestam/a2c-PandaReachDense-v2", "a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
Panda Gym environments: [arxiv.org/abs/2106.13687](https://arxiv.org/abs/2106.13687)
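The `mean_reward` reported above is the mean ± standard deviation of per-episode returns over evaluation episodes; a minimal sketch of that computation (the episode returns below are made-up numbers):

```python
import statistics

# Hypothetical episode returns from an evaluation run (made-up numbers).
episode_returns = [-0.4, -1.2, -0.8, -1.5, -0.7]

mean_reward = statistics.mean(episode_returns)
std_reward = statistics.pstdev(episode_returns)  # population std, matching numpy's default
print(f"{mean_reward:.2f} +/- {std_reward:.2f}")  # -> -0.92 +/- 0.39
```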
|
CyberHarem/lematin_pokemon
|
CyberHarem
| 2023-08-17T19:54:25Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/lematin_pokemon",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-17T19:50:00Z |
---
license: mit
datasets:
- CyberHarem/lematin_pokemon
pipeline_tag: text-to-image
tags:
- art
---
# Lora of lematin_pokemon
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the pt and safetensors files for the specified step, you need to use them together: the pt file is loaded as an embedding, while the safetensors file is loaded as a LoRA.
For example, if you want to use the model from step 1500, download `1500/lematin_pokemon.pt` as the embedding and `1500/lematin_pokemon.safetensors` as the LoRA. With both files in place, you can generate images of the desired character.
**The trigger word is `lematin_pokemon`.**
These are the available steps:
| Steps | pattern_1 | bikini | free | nude | Download |
|--------:|:----------------------------------------------------|:-------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------|
| 1500 | [<NSFW, click to see>](1500/previews/pattern_1.png) | [<NSFW, click to see>](1500/previews/bikini.png) | [<NSFW, click to see>](1500/previews/free.png) | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/lematin_pokemon.zip) |
| 1400 | [<NSFW, click to see>](1400/previews/pattern_1.png) | [<NSFW, click to see>](1400/previews/bikini.png) | [<NSFW, click to see>](1400/previews/free.png) | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/lematin_pokemon.zip) |
| 1300 | [<NSFW, click to see>](1300/previews/pattern_1.png) | [<NSFW, click to see>](1300/previews/bikini.png) | [<NSFW, click to see>](1300/previews/free.png) | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/lematin_pokemon.zip) |
| 1200 | [<NSFW, click to see>](1200/previews/pattern_1.png) | [<NSFW, click to see>](1200/previews/bikini.png) | [<NSFW, click to see>](1200/previews/free.png) | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/lematin_pokemon.zip) |
| 1100 | [<NSFW, click to see>](1100/previews/pattern_1.png) | [<NSFW, click to see>](1100/previews/bikini.png) | [<NSFW, click to see>](1100/previews/free.png) | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/lematin_pokemon.zip) |
| 1000 | [<NSFW, click to see>](1000/previews/pattern_1.png) | [<NSFW, click to see>](1000/previews/bikini.png) | [<NSFW, click to see>](1000/previews/free.png) | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/lematin_pokemon.zip) |
| 900 | [<NSFW, click to see>](900/previews/pattern_1.png) | [<NSFW, click to see>](900/previews/bikini.png) | [<NSFW, click to see>](900/previews/free.png) | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/lematin_pokemon.zip) |
| 800 | [<NSFW, click to see>](800/previews/pattern_1.png) | [<NSFW, click to see>](800/previews/bikini.png) | [<NSFW, click to see>](800/previews/free.png) | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/lematin_pokemon.zip) |
| 700 | [<NSFW, click to see>](700/previews/pattern_1.png) | [<NSFW, click to see>](700/previews/bikini.png) | [<NSFW, click to see>](700/previews/free.png) | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/lematin_pokemon.zip) |
| 600 | [<NSFW, click to see>](600/previews/pattern_1.png) | [<NSFW, click to see>](600/previews/bikini.png) | [<NSFW, click to see>](600/previews/free.png) | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/lematin_pokemon.zip) |
| 500 | [<NSFW, click to see>](500/previews/pattern_1.png) | [<NSFW, click to see>](500/previews/bikini.png) | [<NSFW, click to see>](500/previews/free.png) | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/lematin_pokemon.zip) |
| 400 | [<NSFW, click to see>](400/previews/pattern_1.png) | [<NSFW, click to see>](400/previews/bikini.png) | [<NSFW, click to see>](400/previews/free.png) | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/lematin_pokemon.zip) |
| 300 | [<NSFW, click to see>](300/previews/pattern_1.png) | [<NSFW, click to see>](300/previews/bikini.png) | [<NSFW, click to see>](300/previews/free.png) | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/lematin_pokemon.zip) |
| 200 | [<NSFW, click to see>](200/previews/pattern_1.png) | [<NSFW, click to see>](200/previews/bikini.png) | [<NSFW, click to see>](200/previews/free.png) | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/lematin_pokemon.zip) |
| 100 | [<NSFW, click to see>](100/previews/pattern_1.png) | [<NSFW, click to see>](100/previews/bikini.png) | [<NSFW, click to see>](100/previews/free.png) | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/lematin_pokemon.zip) |
|
Harshvir/Llama-2-7B-physics
|
Harshvir
| 2023-08-17T19:51:00Z | 1,456 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:camel-ai/physics",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-17T18:30:01Z |
---
datasets:
- camel-ai/physics
---
Trained on a sample of <a href='https://huggingface.co/datasets/camel-ai/physics'>camel-ai/physics dataset</a>.
Base Model: <a href='https://huggingface.co/NousResearch/Llama-2-7b-chat-hf'>NousResearch/Llama-2-7b-chat-hf</a>
|
jelena06/Taxi-v3
|
jelena06
| 2023-08-17T19:41:54Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-17T19:41:51Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# load_from_hub is the pickle-loading helper from the Hugging Face Deep RL course.
model = load_from_hub(repo_id="jelena06/Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
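At play time the agent simply acts greedily over the loaded Q-table; a pure-Python sketch of that selection (the real q-table in `q-learning.pkl` is a NumPy array of shape `[n_states, n_actions]`):

```python
# Greedy action selection over a tabular Q function (pure-Python sketch).
def greedy_action(q_table, state):
    row = q_table[state]
    return max(range(len(row)), key=lambda a: row[a])

# Taxi-v3 has 500 discrete states and 6 actions.
q_table = [[0.0] * 6 for _ in range(500)]
q_table[42][3] = 1.0
print(greedy_action(q_table, 42))  # -> 3
```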
|
SaudxInu/sac-PandaPickAndPlace-v3
|
SaudxInu
| 2023-08-17T19:39:05Z | 3 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaPickAndPlace-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-17T19:33:32Z |
---
library_name: stable-baselines3
tags:
- PandaPickAndPlace-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: SAC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaPickAndPlace-v3
type: PandaPickAndPlace-v3
metrics:
- type: mean_reward
value: -40.00 +/- 20.00
name: mean_reward
verified: false
---
# **SAC** Agent playing **PandaPickAndPlace-v3**
This is a trained model of a **SAC** agent playing **PandaPickAndPlace-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `huggingface_sb3` naming convention):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import SAC

# Filename assumed from the usual huggingface_sb3 naming convention.
checkpoint = load_from_hub("SaudxInu/sac-PandaPickAndPlace-v3", "sac-PandaPickAndPlace-v3.zip")
model = SAC.load(checkpoint)
```
|
kejolong/pendantunderwear
|
kejolong
| 2023-08-17T19:35:08Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-17T19:29:09Z |
---
license: creativeml-openrail-m
---
|
CyberHarem/yamato_pokemon
|
CyberHarem
| 2023-08-17T19:30:36Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/yamato_pokemon",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-17T19:26:53Z |
---
license: mit
datasets:
- CyberHarem/yamato_pokemon
pipeline_tag: text-to-image
tags:
- art
---
# Lora of yamato_pokemon
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the pt and safetensors files for the specified step, you need to use them together: the pt file is loaded as an embedding, while the safetensors file is loaded as a LoRA.
For example, if you want to use the model from step 1500, download `1500/yamato_pokemon.pt` as the embedding and `1500/yamato_pokemon.safetensors` as the LoRA. With both files in place, you can generate images of the desired character.
**The trigger word is `yamato_pokemon`.**
These are the available steps:
| Steps | bikini | free | nude | Download |
|--------:|:-------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:------------------------------------|
| 1500 | [<NSFW, click to see>](1500/previews/bikini.png) | [<NSFW, click to see>](1500/previews/free.png) | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/yamato_pokemon.zip) |
| 1400 | [<NSFW, click to see>](1400/previews/bikini.png) | [<NSFW, click to see>](1400/previews/free.png) | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/yamato_pokemon.zip) |
| 1300 | [<NSFW, click to see>](1300/previews/bikini.png) | [<NSFW, click to see>](1300/previews/free.png) | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/yamato_pokemon.zip) |
| 1200 | [<NSFW, click to see>](1200/previews/bikini.png) | [<NSFW, click to see>](1200/previews/free.png) | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/yamato_pokemon.zip) |
| 1100 | [<NSFW, click to see>](1100/previews/bikini.png) | [<NSFW, click to see>](1100/previews/free.png) | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/yamato_pokemon.zip) |
| 1000 | [<NSFW, click to see>](1000/previews/bikini.png) | [<NSFW, click to see>](1000/previews/free.png) | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/yamato_pokemon.zip) |
| 900 | [<NSFW, click to see>](900/previews/bikini.png) | [<NSFW, click to see>](900/previews/free.png) | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/yamato_pokemon.zip) |
| 800 | [<NSFW, click to see>](800/previews/bikini.png) | [<NSFW, click to see>](800/previews/free.png) | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/yamato_pokemon.zip) |
| 700 | [<NSFW, click to see>](700/previews/bikini.png) | [<NSFW, click to see>](700/previews/free.png) | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/yamato_pokemon.zip) |
| 600 | [<NSFW, click to see>](600/previews/bikini.png) | [<NSFW, click to see>](600/previews/free.png) | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/yamato_pokemon.zip) |
| 500 | [<NSFW, click to see>](500/previews/bikini.png) | [<NSFW, click to see>](500/previews/free.png) | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/yamato_pokemon.zip) |
| 400 | [<NSFW, click to see>](400/previews/bikini.png) | [<NSFW, click to see>](400/previews/free.png) | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/yamato_pokemon.zip) |
| 300 | [<NSFW, click to see>](300/previews/bikini.png) | [<NSFW, click to see>](300/previews/free.png) | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/yamato_pokemon.zip) |
| 200 | [<NSFW, click to see>](200/previews/bikini.png) | [<NSFW, click to see>](200/previews/free.png) | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/yamato_pokemon.zip) |
| 100 | [<NSFW, click to see>](100/previews/bikini.png) | [<NSFW, click to see>](100/previews/free.png) | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/yamato_pokemon.zip) |
|
yyl9510/roberta-large-lora-token-classification
|
yyl9510
| 2023-08-17T19:23:15Z | 6 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-16T06:55:28Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
jayeshvpatil/a2c-PandaReachDense-v2
|
jayeshvpatil
| 2023-08-17T19:19:02Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"arxiv:2106.13687",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-27T03:31:31Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.63 +/- 0.71
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `huggingface_sb3` naming convention):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename assumed from the usual huggingface_sb3 naming convention.
checkpoint = load_from_hub("jayeshvpatil/a2c-PandaReachDense-v2", "a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
Panda Gym environments: [arxiv.org/abs/2106.13687](https://arxiv.org/abs/2106.13687)
|
CyberHarem/langley_pokemon
|
CyberHarem
| 2023-08-17T19:08:15Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/langley_pokemon",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-17T19:04:08Z |
---
license: mit
datasets:
- CyberHarem/langley_pokemon
pipeline_tag: text-to-image
tags:
- art
---
# Lora of langley_pokemon
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the pt and safetensors files for the specified step, you need to use them together: the pt file is loaded as an embedding, while the safetensors file is loaded as a LoRA.
For example, if you want to use the model from step 1500, download `1500/langley_pokemon.pt` as the embedding and `1500/langley_pokemon.safetensors` as the LoRA. With both files in place, you can generate images of the desired character.
**The trigger word is `langley_pokemon`.**
These are the available steps:
| Steps | pattern_1 | bikini | free | nude | Download |
|--------:|:----------------------------------------------------|:-----------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------|
| 1500 | [<NSFW, click to see>](1500/previews/pattern_1.png) |  | [<NSFW, click to see>](1500/previews/free.png) | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/langley_pokemon.zip) |
| 1400 | [<NSFW, click to see>](1400/previews/pattern_1.png) |  | [<NSFW, click to see>](1400/previews/free.png) | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/langley_pokemon.zip) |
| 1300 | [<NSFW, click to see>](1300/previews/pattern_1.png) |  | [<NSFW, click to see>](1300/previews/free.png) | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/langley_pokemon.zip) |
| 1200 | [<NSFW, click to see>](1200/previews/pattern_1.png) |  | [<NSFW, click to see>](1200/previews/free.png) | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/langley_pokemon.zip) |
| 1100 | [<NSFW, click to see>](1100/previews/pattern_1.png) |  | [<NSFW, click to see>](1100/previews/free.png) | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/langley_pokemon.zip) |
| 1000 | [<NSFW, click to see>](1000/previews/pattern_1.png) |  | [<NSFW, click to see>](1000/previews/free.png) | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/langley_pokemon.zip) |
| 900 | [<NSFW, click to see>](900/previews/pattern_1.png) |  | [<NSFW, click to see>](900/previews/free.png) | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/langley_pokemon.zip) |
| 800 | [<NSFW, click to see>](800/previews/pattern_1.png) |  | [<NSFW, click to see>](800/previews/free.png) | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/langley_pokemon.zip) |
| 700 | [<NSFW, click to see>](700/previews/pattern_1.png) |  | [<NSFW, click to see>](700/previews/free.png) | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/langley_pokemon.zip) |
| 600 | [<NSFW, click to see>](600/previews/pattern_1.png) |  | [<NSFW, click to see>](600/previews/free.png) | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/langley_pokemon.zip) |
| 500 | [<NSFW, click to see>](500/previews/pattern_1.png) |  | [<NSFW, click to see>](500/previews/free.png) | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/langley_pokemon.zip) |
| 400 | [<NSFW, click to see>](400/previews/pattern_1.png) |  | [<NSFW, click to see>](400/previews/free.png) | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/langley_pokemon.zip) |
| 300 | [<NSFW, click to see>](300/previews/pattern_1.png) |  | [<NSFW, click to see>](300/previews/free.png) | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/langley_pokemon.zip) |
| 200 | [<NSFW, click to see>](200/previews/pattern_1.png) |  | [<NSFW, click to see>](200/previews/free.png) | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/langley_pokemon.zip) |
| 100 | [<NSFW, click to see>](100/previews/pattern_1.png) |  | [<NSFW, click to see>](100/previews/free.png) | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/langley_pokemon.zip) |
|
RoodraKanwar/falcon-7b-sharded-bf16-finetuned-transactpro
|
RoodraKanwar
| 2023-08-17T19:07:35Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"base_model:ybelkada/falcon-7b-sharded-bf16",
"base_model:finetune:ybelkada/falcon-7b-sharded-bf16",
"region:us"
] | null | 2023-08-17T18:13:40Z |
---
base_model: ybelkada/falcon-7b-sharded-bf16
tags:
- generated_from_trainer
model-index:
- name: falcon-7b-sharded-bf16-finetuned-transactpro
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon-7b-sharded-bf16-finetuned-transactpro
This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 320
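The cosine schedule with warmup ratio 0.03 over 320 steps can be sketched as follows (a hand-rolled approximation of what `transformers.get_cosine_schedule_with_warmup` computes, not the original training code):

```python
import math

def lr_at(step, base_lr=2e-4, total_steps=320, warmup_ratio=0.03):
    """Linear warmup followed by cosine decay to zero."""
    warmup_steps = int(total_steps * warmup_ratio)  # 9 steps here
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

print(lr_at(9))    # end of warmup: full learning rate, 2e-4
print(lr_at(320))  # end of training: decayed to ~0
```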
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
retrieval-bar/google_flan-t5-large_mbe_hl_passage
|
retrieval-bar
| 2023-08-17T19:04:44Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-17T19:04:42Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
am-infoweb/QA_SYNTHETIC_DATA_ONLY_17_AUG_BASE
|
am-infoweb
| 2023-08-17T19:02:34Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"question-answering",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-08-17T17:40:55Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: QA_SYNTHETIC_DATA_ONLY_17_AUG_BASE
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# QA_SYNTHETIC_DATA_ONLY_17_AUG_BASE
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.0287 | 1.0 | 5324 | 0.0131 |
| 0.0021 | 2.0 | 10648 | 0.0128 |
| 0.0165 | 3.0 | 15972 | 0.0021 |
| 0.0335 | 4.0 | 21296 | 0.0001 |
| 0.0 | 5.0 | 26620 | 0.0035 |
| 0.0207 | 6.0 | 31944 | 0.0008 |
| 0.0127 | 7.0 | 37268 | 0.0027 |
| 0.0 | 8.0 | 42592 | 0.0000 |
| 0.0185 | 9.0 | 47916 | 0.0000 |
| 0.0 | 10.0 | 53240 | 0.0000 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
KingKazma/cnn_dailymail_gpt2_lora_500_4_50000_8_e1_s6789_v4_l5_r2
|
KingKazma
| 2023-08-17T18:58:12Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-17T18:58:11Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
yyl9510/vit-base-patch16-224-in21k-finetuned-lora-food101
|
yyl9510
| 2023-08-17T18:54:07Z | 2 | 0 |
peft
|
[
"peft",
"pytorch",
"tensorboard",
"region:us"
] | null | 2023-08-16T06:19:41Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
- PEFT 0.5.0.dev0
|
zarakiquemparte/zarablend-l2-7b-GGML
|
zarakiquemparte
| 2023-08-17T18:48:41Z | 0 | 1 | null |
[
"llama2",
"license:other",
"region:us"
] | null | 2023-08-17T10:29:17Z |
---
license: other
tags:
- llama2
---
Quantized GGML of [Zarablend L2 7b](https://huggingface.co/zarakiquemparte/zarablend-l2-7b)
If you need other quantized versions, use @TheBloke's:
- [GGML](https://huggingface.co/TheBloke/Zarablend-L2-7B-GGML)
- [GPTQ](https://huggingface.co/TheBloke/Zarablend-L2-7B-GPTQ)
|
CyberHarem/matiere_pokemon
|
CyberHarem
| 2023-08-17T18:47:57Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/matiere_pokemon",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-17T18:44:17Z |
---
license: mit
datasets:
- CyberHarem/matiere_pokemon
pipeline_tag: text-to-image
tags:
- art
---
# Lora of matiere_pokemon
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the pt and safetensors files for the specified step, you need to use them together: the pt file is loaded as an embedding, while the safetensors file is loaded as a LoRA.
For example, if you want to use the model from step 1500, download `1500/matiere_pokemon.pt` as the embedding and `1500/matiere_pokemon.safetensors` as the LoRA. With both files in place, you can generate images of the desired character.
**The trigger word is `matiere_pokemon`.**
These are the available steps:
| Steps | bikini | free | nude | Download |
|--------:|:-------------------------------------------------|:-------------------------------------|:-----------------------------------------------|:-------------------------------------|
| 1500 | [<NSFW, click to see>](1500/previews/bikini.png) |  | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/matiere_pokemon.zip) |
| 1400 | [<NSFW, click to see>](1400/previews/bikini.png) |  | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/matiere_pokemon.zip) |
| 1300 | [<NSFW, click to see>](1300/previews/bikini.png) |  | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/matiere_pokemon.zip) |
| 1200 | [<NSFW, click to see>](1200/previews/bikini.png) |  | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/matiere_pokemon.zip) |
| 1100 | [<NSFW, click to see>](1100/previews/bikini.png) |  | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/matiere_pokemon.zip) |
| 1000 | [<NSFW, click to see>](1000/previews/bikini.png) |  | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/matiere_pokemon.zip) |
| 900 | [<NSFW, click to see>](900/previews/bikini.png) |  | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/matiere_pokemon.zip) |
| 800 | [<NSFW, click to see>](800/previews/bikini.png) |  | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/matiere_pokemon.zip) |
| 700 | [<NSFW, click to see>](700/previews/bikini.png) |  | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/matiere_pokemon.zip) |
| 600 | [<NSFW, click to see>](600/previews/bikini.png) |  | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/matiere_pokemon.zip) |
| 500 | [<NSFW, click to see>](500/previews/bikini.png) |  | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/matiere_pokemon.zip) |
| 400 | [<NSFW, click to see>](400/previews/bikini.png) |  | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/matiere_pokemon.zip) |
| 300 | [<NSFW, click to see>](300/previews/bikini.png) |  | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/matiere_pokemon.zip) |
| 200 | [<NSFW, click to see>](200/previews/bikini.png) |  | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/matiere_pokemon.zip) |
| 100 | [<NSFW, click to see>](100/previews/bikini.png) |  | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/matiere_pokemon.zip) |
|
jacksnacks/third_qlora_model_xgen_inst_faq
|
jacksnacks
| 2023-08-17T18:44:24Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-17T18:44:21Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
jelena06/q-FrozenLake-v1-8x8-noSlippery
|
jelena06
| 2023-08-17T18:43:51Z | 0 | 0 | null |
[
"FrozenLake-v1-8x8-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-17T18:43:49Z |
---
tags:
- FrozenLake-v1-8x8-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-8x8-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-8x8-no_slippery
type: FrozenLake-v1-8x8-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub is the pickle-loading helper from the Hugging Face Deep RL course.
model = load_from_hub(repo_id="jelena06/q-FrozenLake-v1-8x8-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
|
bigcode/santacoder-ldf
|
bigcode
| 2023-08-17T18:41:08Z | 192 | 2 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"custom_code",
"arxiv:2308.07124",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-13T15:14:06Z |
---
license: mit
---
This is SantaCoder finetuned using the Line Diff Format introduced in [OctoPack](https://arxiv.org/abs/2308.07124).
|
fp16-guy/3moon_NI_real_2.0_fp16_cleaned
|
fp16-guy
| 2023-08-17T18:39:18Z | 0 | 0 | null |
[
"text-to-image",
"region:us"
] |
text-to-image
| 2023-08-15T15:15:11Z |
---
pipeline_tag: text-to-image
---
3moon NI_real, but fp16/cleaned - smaller size, same result.
========
///
**[original checkpoint link](https://civitai.com/models/20981/3moon-nireal)**
*(all rights to the model belong to 3moon)*
---
[grid 01](https://huggingface.co/datasets/fp16-guy/grids/blob/main/3moonNIRealV2%2001%2020230815171412-111-3moonNIReal_3moonNIRealV2_fp16-Euler%20a-6.png) *(1.99 GB version)*
[grid 02](https://huggingface.co/datasets/fp16-guy/grids/blob/main/3moonNIRealV2%2002%2020230815172046-111-3moonNIReal_3moonNIRealV2_fp16-Euler%20a-6.png) *(1.83 GB version, no VAE)*
[grid 03](https://huggingface.co/datasets/fp16-guy/grids_inp/blob/main/3moonNIRealV2%20inp%2001%2020230817210409-111-3moonNIReal_3moonNIRealV2_fp16-Euler%20a-5.5.png) *(1.99 GB inpainting version)*
[grid 04](https://huggingface.co/datasets/fp16-guy/grids_inp/blob/main/3moonNIRealV2%20inp%2002%2020230817210734-111-3moonNIReal_3moonNIRealV2_fp16_no_vae-Euler%20a-5.5.png) *(1.83 GB inpainting version, no VAE)*
|
fp16-guy/Samaritan_3d_Cartoon_fp16_cleaned
|
fp16-guy
| 2023-08-17T18:38:55Z | 0 | 1 | null |
[
"text-to-image",
"region:us"
] |
text-to-image
| 2023-08-17T15:27:38Z |
---
pipeline_tag: text-to-image
---
Samaritan 3d Cartoon, but fp16/cleaned - smaller size, same result.
========
///
**[original checkpoint link](https://civitai.com/models/81270/samaritan-3d-cartoon)**
*(all rights to the model belong to PromptSharingSamaritan)*
---
[grid 01](https://huggingface.co/datasets/fp16-guy/grids/blob/main/samaritan3dCartoonV3%2001%2020230817161540-111-samaritan3dCartoon_samaritan3dCartoonV3_fp16-Euler%20a-6.png) *(1.99 GB version)*
[grid 02](https://huggingface.co/datasets/fp16-guy/grids/blob/main/samaritan3dCartoonV3%2002%2020230817161633-111-samaritan3dCartoon_samaritan3dCartoonV3_fp16_no_vae-Euler%20a-6.png) *(1.83 GB version, no VAE)*
[grid 03](https://huggingface.co/datasets/fp16-guy/grids_inp/blob/main/samaritan3dCartoonV3%20inp%2001%2020230817211551-111-samaritan3dCartoon_samaritan3dCartoonV3_fp16-Euler%20a-5.5.png) *(1.99 GB inpainting version)*
[grid 04](https://huggingface.co/datasets/fp16-guy/grids_inp/blob/main/samaritan3dCartoonV3%20inp%2002%2020230817211710-111-samaritan3dCartoon_samaritan3dCartoonV3_fp16_no_vae-Euler%20a-5.5.png) *(1.83 GB inpainting version, no VAE)*
|
Francesco-A/ppo-Pyramids-v1
|
Francesco-A
| 2023-08-17T18:35:42Z | 7 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"license:apache-2.0",
"region:us"
] |
reinforcement-learning
| 2023-08-17T18:17:33Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
license: apache-2.0
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Watch the Agent play
You can watch the agent play directly in your browser:

1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Find the model_id: Francesco-A/ppo-Pyramids-v1
3. Select the .nn / .onnx file
4. Click on *Watch the agent play*
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Training hyperparameters
```python
behaviors:
Pyramids:
trainer_type: ppo
hyperparameters:
batch_size: 128
buffer_size: 2048
learning_rate: 0.0003
beta: 0.01
epsilon: 0.2
lambd: 0.95
num_epoch: 3
learning_rate_schedule: linear
network_settings:
normalize: false
hidden_units: 512
num_layers: 2
vis_encode_type: simple
reward_signals:
extrinsic:
gamma: 0.99
strength: 1.0
rnd:
gamma: 0.99
strength: 0.01
network_settings:
hidden_units: 64
num_layers: 3
learning_rate: 0.0001
keep_checkpoints: 5
max_steps: 1000000
time_horizon: 128
summary_freq: 30000
```
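The `gamma: 0.99` and `lambd: 0.95` values in the config above are the discount factor and the GAE lambda that PPO uses to estimate advantages. As a minimal illustrative sketch (not ML-Agents' actual implementation), they combine like this:

```python
# Generalized Advantage Estimation (GAE) sketch for a single rollout with
# no episode termination; gamma and lambd default to the config values above.

def gae_advantages(rewards, values, last_value, gamma=0.99, lambd=0.95):
    """Compute GAE advantages, sweeping the rollout backwards."""
    advantages = [0.0] * len(rewards)
    next_value = last_value  # bootstrap value for the state after the rollout
    running = 0.0
    for t in reversed(range(len(rewards))):
        delta = rewards[t] + gamma * next_value - values[t]  # TD error
        running = delta + gamma * lambd * running
        advantages[t] = running
        next_value = values[t]
    return advantages

adv = gae_advantages([1.0, 0.0, -1.0], [0.5, 0.4, 0.3], last_value=0.2)
print([round(a, 4) for a in adv])
```

A larger `lambd` mixes in longer-horizon TD errors (lower bias, higher variance); `lambd=0` reduces to one-step TD errors.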
## Training details
| Step | Time Elapsed | Mean Reward | Std of Reward | Status |
|---------|--------------|-------------|---------------|-----------|
| 30000 | 59.481 s | -1.000 | 0.000 | Training |
| 60000 | 118.648 s | -0.798 | 0.661 | Training |
| 90000 | 180.684 s | -0.701 | 0.808 | Training |
| 120000 | 240.734 s | -0.931 | 0.373 | Training |
| 150000 | 300.978 s | -0.851 | 0.588 | Training |
| 180000 | 360.137 s | -0.934 | 0.361 | Training |
| 210000 | 424.326 s | -1.000 | 0.000 | Training |
| 240000 | 484.774 s | -0.849 | 0.595 | Training |
| 270000 | 546.089 s | -0.377 | 1.029 | Training |
| 300000 | 614.797 s | -0.735 | 0.689 | Training |
| 330000 | 684.241 s | -0.926 | 0.405 | Training |
| 360000 | 745.790 s | -0.819 | 0.676 | Training |
| 390000 | 812.573 s | -0.715 | 0.755 | Training |
| 420000 | 877.836 s | -0.781 | 0.683 | Training |
| 450000 | 944.423 s | -0.220 | 1.114 | Training |
| 480000 | 1010.918 s | -0.484 | 0.962 | Training |
| 510000 | 1074.058 s | -0.003 | 1.162 | Training |
| 540000 | 1138.848 s | -0.021 | 1.222 | Training |
| 570000 | 1204.326 s | 0.384 | 1.231 | Training |
| 600000 | 1276.488 s | 0.690 | 1.174 | Training |
| 630000 | 1345.297 s | 0.943 | 1.058 | Training |
| 660000 | 1412.791 s | 1.014 | 1.043 | Training |
| 690000 | 1482.712 s | 0.927 | 1.054 | Training |
| 720000 | 1548.726 s | 0.900 | 1.128 | Training |
| 750000 | 1618.284 s | 1.379 | 0.701 | Training |
| 780000 | 1692.080 s | 1.567 | 0.359 | Training |
| 810000 | 1762.159 s | 1.475 | 0.567 | Training |
| 840000 | 1832.166 s | 1.438 | 0.648 | Training |
| 870000 | 1907.191 s | 1.534 | 0.536 | Training |
| 900000 | 1977.521 s | 1.552 | 0.478 | Training |
| 930000 | 2051.259 s | 1.458 | 0.633 | Training |
| 960000 | 2126.498 s | 1.545 | 0.586 | Training |
| 990000 | 2198.591 s | 1.565 | 0.591 | Training |
|
KingKazma/xsum_gpt2_lora_500_4_50000_8_e3_s6789_v4_l4_r4
|
KingKazma
| 2023-08-17T18:28:57Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-17T18:28:55Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
joe-xhedi/ppo-Huggy
|
joe-xhedi
| 2023-08-17T18:27:43Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-08-17T18:27:33Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: joe-xhedi/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
macoron/ggml-mpt-7b-chat
|
macoron
| 2023-08-17T18:26:51Z | 0 | 1 | null |
[
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2023-08-17T18:12:14Z |
---
license: cc-by-nc-sa-4.0
---
|
jelena06/q-FrozenLake-v1-4x4-noSlippery
|
jelena06
| 2023-08-17T18:26:09Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-17T18:26:06Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` is the helper function defined in the Hugging Face Deep RL
# course notebook; it downloads and unpickles the saved Q-learning model dict.
model = load_from_hub(repo_id="jelena06/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
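Once the Q-table is loaded, acting with it is just a greedy argmax over the current state's row. A small sketch (the names here are illustrative, not the exact course API):

```python
import numpy as np

def greedy_action(qtable, state):
    # The greedy policy picks the action with the highest Q-value
    # for the current state.
    return int(np.argmax(qtable[state]))

# Tiny 2-state, 3-action Q-table for demonstration:
qtable = np.array([[0.1, 0.9, 0.0],
                   [0.3, 0.2, 0.8]])
print(greedy_action(qtable, 0))
```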
|
CyberHarem/anzu_pokemon
|
CyberHarem
| 2023-08-17T18:25:37Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/anzu_pokemon",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-17T18:22:17Z |
---
license: mit
datasets:
- CyberHarem/anzu_pokemon
pipeline_tag: text-to-image
tags:
- art
---
# Lora of anzu_pokemon
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 1500, you need to download `1500/anzu_pokemon.pt` as the embedding and `1500/anzu_pokemon.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The trigger word is `anzu_pokemon`.**
These are available steps:
| Steps | bikini | free | nude | Download |
|--------:|:-----------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:----------------------------------|
| 1500 |  | [<NSFW, click to see>](1500/previews/free.png) | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/anzu_pokemon.zip) |
| 1400 |  | [<NSFW, click to see>](1400/previews/free.png) | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/anzu_pokemon.zip) |
| 1300 |  | [<NSFW, click to see>](1300/previews/free.png) | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/anzu_pokemon.zip) |
| 1200 |  | [<NSFW, click to see>](1200/previews/free.png) | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/anzu_pokemon.zip) |
| 1100 |  | [<NSFW, click to see>](1100/previews/free.png) | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/anzu_pokemon.zip) |
| 1000 |  | [<NSFW, click to see>](1000/previews/free.png) | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/anzu_pokemon.zip) |
| 900 |  | [<NSFW, click to see>](900/previews/free.png) | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/anzu_pokemon.zip) |
| 800 |  | [<NSFW, click to see>](800/previews/free.png) | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/anzu_pokemon.zip) |
| 700 |  | [<NSFW, click to see>](700/previews/free.png) | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/anzu_pokemon.zip) |
| 600 |  | [<NSFW, click to see>](600/previews/free.png) | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/anzu_pokemon.zip) |
| 500 |  | [<NSFW, click to see>](500/previews/free.png) | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/anzu_pokemon.zip) |
| 400 |  | [<NSFW, click to see>](400/previews/free.png) | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/anzu_pokemon.zip) |
| 300 |  | [<NSFW, click to see>](300/previews/free.png) | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/anzu_pokemon.zip) |
| 200 |  | [<NSFW, click to see>](200/previews/free.png) | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/anzu_pokemon.zip) |
| 100 |  | [<NSFW, click to see>](100/previews/free.png) | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/anzu_pokemon.zip) |
|
BenjaminOcampo/model-contrastive-bert__trained-in-ishate__seed-42
|
BenjaminOcampo
| 2023-08-17T18:25:19Z | 3 | 0 |
transformers
|
[
"transformers",
"bert",
"text-classification",
"en",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-17T18:24:29Z |
---
language: en
---
# Model Card for BenjaminOcampo/model-contrastive-bert__trained-in-ishate__seed-42
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** BenjaminOcampo
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** en
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/huggingface/huggingface_hub
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
### How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Harsha100/Disney_Pixar_Cartoon_Type_A_fp16_cleaned
|
Harsha100
| 2023-08-17T18:11:51Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-17T17:58:29Z |
---
license: creativeml-openrail-m
---
|
pneubauer/basic-a2c-PandaReachDense-v2
|
pneubauer
| 2023-08-17T18:10:23Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"arxiv:2106.13687",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-03T14:41:31Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.14 +/- 0.64
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch (the checkpoint filename below is an assumption, not confirmed by this repo):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub, then load it with SB3:
checkpoint = load_from_hub(repo_id="pneubauer/basic-a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
Panda Gym environments: [arxiv.org/abs/2106.13687](https://arxiv.org/abs/2106.13687)
|
BenjaminOcampo/model-contrastive-bert__trained-in-ishate__seed-3
|
BenjaminOcampo
| 2023-08-17T18:10:19Z | 5 | 0 |
transformers
|
[
"transformers",
"bert",
"text-classification",
"en",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-17T18:09:31Z |
---
language: en
---
# Model Card for BenjaminOcampo/model-contrastive-bert__trained-in-ishate__seed-3
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** BenjaminOcampo
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** en
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/huggingface/huggingface_hub
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
### How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
CyberHarem/lesoir_pokemon
|
CyberHarem
| 2023-08-17T18:04:52Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/lesoir_pokemon",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-17T18:00:44Z |
---
license: mit
datasets:
- CyberHarem/lesoir_pokemon
pipeline_tag: text-to-image
tags:
- art
---
# Lora of lesoir_pokemon
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 1500, you need to download `1500/lesoir_pokemon.pt` as the embedding and `1500/lesoir_pokemon.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The trigger word is `lesoir_pokemon`.**
These are available steps:
| Steps | pattern_1 | bikini | free | nude | Download |
|--------:|:----------------------------------------------------|:-----------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:------------------------------------|
| 1500 | [<NSFW, click to see>](1500/previews/pattern_1.png) |  | [<NSFW, click to see>](1500/previews/free.png) | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/lesoir_pokemon.zip) |
| 1400 | [<NSFW, click to see>](1400/previews/pattern_1.png) |  | [<NSFW, click to see>](1400/previews/free.png) | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/lesoir_pokemon.zip) |
| 1300 | [<NSFW, click to see>](1300/previews/pattern_1.png) |  | [<NSFW, click to see>](1300/previews/free.png) | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/lesoir_pokemon.zip) |
| 1200 | [<NSFW, click to see>](1200/previews/pattern_1.png) |  | [<NSFW, click to see>](1200/previews/free.png) | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/lesoir_pokemon.zip) |
| 1100 | [<NSFW, click to see>](1100/previews/pattern_1.png) |  | [<NSFW, click to see>](1100/previews/free.png) | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/lesoir_pokemon.zip) |
| 1000 | [<NSFW, click to see>](1000/previews/pattern_1.png) |  | [<NSFW, click to see>](1000/previews/free.png) | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/lesoir_pokemon.zip) |
| 900 | [<NSFW, click to see>](900/previews/pattern_1.png) |  | [<NSFW, click to see>](900/previews/free.png) | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/lesoir_pokemon.zip) |
| 800 | [<NSFW, click to see>](800/previews/pattern_1.png) |  | [<NSFW, click to see>](800/previews/free.png) | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/lesoir_pokemon.zip) |
| 700 | [<NSFW, click to see>](700/previews/pattern_1.png) |  | [<NSFW, click to see>](700/previews/free.png) | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/lesoir_pokemon.zip) |
| 600 | [<NSFW, click to see>](600/previews/pattern_1.png) |  | [<NSFW, click to see>](600/previews/free.png) | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/lesoir_pokemon.zip) |
| 500 | [<NSFW, click to see>](500/previews/pattern_1.png) |  | [<NSFW, click to see>](500/previews/free.png) | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/lesoir_pokemon.zip) |
| 400 | [<NSFW, click to see>](400/previews/pattern_1.png) |  | [<NSFW, click to see>](400/previews/free.png) | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/lesoir_pokemon.zip) |
| 300 | [<NSFW, click to see>](300/previews/pattern_1.png) |  | [<NSFW, click to see>](300/previews/free.png) | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/lesoir_pokemon.zip) |
| 200 | [<NSFW, click to see>](200/previews/pattern_1.png) |  | [<NSFW, click to see>](200/previews/free.png) | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/lesoir_pokemon.zip) |
| 100 | [<NSFW, click to see>](100/previews/pattern_1.png) |  | [<NSFW, click to see>](100/previews/free.png) | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/lesoir_pokemon.zip) |
|
macoron/ggml-gpt4all-j-v1.3-groovy
|
macoron
| 2023-08-17T18:03:18Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2023-08-17T17:51:47Z |
---
license: apache-2.0
---
<strong>Creative model can be used for commercial purposes</strong><br><ul><li>Fast responses</li><li>Creative responses</li><li>Instruction based</li><li>Trained by Nomic AI</li><li>Licensed for commercial use</li></ul>
|
TheKOG/vit-gpt2-verifycode-caption
|
TheKOG
| 2023-08-17T18:02:28Z | 114 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-text-to-text",
"image-to-text",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2023-08-17T14:22:16Z |
---
pipeline_tag: image-to-text
license: apache-2.0
---
## Usage method:
```python
from transformers import VisionEncoderDecoderModel, ViTImageProcessor, AutoTokenizer
import torch
from PIL import Image
model = VisionEncoderDecoderModel.from_pretrained("AIris-Channel/vit-gpt2-verifycode-caption")
feature_extractor = ViTImageProcessor.from_pretrained("AIris-Channel/vit-gpt2-verifycode-caption")
tokenizer = AutoTokenizer.from_pretrained("AIris-Channel/vit-gpt2-verifycode-caption")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
max_length = 16
num_beams = 4
gen_kwargs = {"max_length": max_length, "num_beams": num_beams}
def predict_step(image_paths):
images = []
for image_path in image_paths:
i_image = Image.open(image_path)
if i_image.mode != "RGB":
i_image = i_image.convert(mode="RGB")
images.append(i_image)
pixel_values = feature_extractor(images=images, return_tensors="pt").pixel_values
pixel_values = pixel_values.to(device)
output_ids = model.generate(pixel_values, **gen_kwargs)
preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
preds = [pred.strip() for pred in preds]
return preds
pred=predict_step(['ZZZTVESE.jpg'])
print(pred) #zzztvese
```
|
Doctor-Shotgun/Nous-Hermes-Llama2-13b-Limarp-Lora-Merged
|
Doctor-Shotgun
| 2023-08-17T17:56:35Z | 8 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"llama-2",
"en",
"license:agpl-3.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-07-29T17:41:55Z |
---
inference: false
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- llama
- llama-2
license: agpl-3.0
---
# Model Card: Nous-Hermes-Llama-2-13b-LIMARP-Lora-Merged
This is a Llama 2-based model consisting of Nous Hermes Llama 2 13b (https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b) merged with LIMARP Lora (https://huggingface.co/lemonilia/limarp-llama2) using the now-updated standard lora adapter for LIMARP (July 28, 2023).
The intended objective was to combine NH-L2's reasoning and instruction-following capabilities with LIMARP's character roleplay capabilities.
added_tokens.json was padded with dummy tokens to reach 32 added tokens in order to allow GGML conversion in llama.cpp without error due to vocab size mismatch.
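The padding described above can be sketched as follows; the dummy token names and starting id are illustrative assumptions, not the exact ones used in this repo:

```python
def pad_added_tokens(added_tokens, target=32, start_id=32000):
    """Fill an added_tokens.json mapping with dummy entries until it
    holds `target` added tokens, assigning sequential ids."""
    padded = dict(added_tokens)
    next_id = start_id + len(padded)
    i = 0
    while len(padded) < target:
        name = f"<dummy_{i}>"
        if name not in padded:
            padded[name] = next_id
            next_id += 1
        i += 1
    return padded

tokens = pad_added_tokens({"<pad>": 32000}, target=32, start_id=32000)
print(len(tokens))
```

With the vocab padded to a fixed size, llama.cpp's GGML conversion no longer trips over a vocab size mismatch.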
## Usage:
Intended to be prompted either with the Alpaca instruction format of the NH-L2 base model:
```
### Instruction:
<prompt>
### Response:
<leave a newline blank for model to respond>
```
Or the LIMARP lora instruction format:
```
<<SYSTEM>>
<character card and system prompt>
<<USER>>
<prompt>
<<AIBOT>>
<leave a newline blank for model to respond>
```
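The two formats above can be assembled with small string helpers; this is an illustrative sketch (not an official API), and the blank line before `### Response:` follows the "leave a newline blank" note in the template:

```python
def alpaca_prompt(instruction):
    # Alpaca instruction format of the NH-L2 base model.
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

def limarp_prompt(system, user):
    # LIMARP lora instruction format.
    return f"<<SYSTEM>>\n{system}\n<<USER>>\n{user}\n<<AIBOT>>\n"

print(limarp_prompt("You are Alice, a cheerful adventurer.", "Hi Alice!"))
```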
## Bias, Risks, and Limitations
The model will show biases similar to those observed in niche roleplaying forums on the Internet, in addition to those exhibited by the base model. It is not intended for supplying factual information or advice in any form.
## Training Details
This model is a merge. Please refer to the linked repositories of the base model and lora for details.
|
odiaz1066/a2c-PandaReachDense-v2
|
odiaz1066
| 2023-08-17T17:51:16Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"arxiv:2106.13687",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-02-02T20:43:56Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.93 +/- 0.82
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch (the checkpoint filename below is an assumption, not confirmed by this repo):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub, then load it with SB3:
checkpoint = load_from_hub(repo_id="odiaz1066/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
Panda Gym environments: [arxiv.org/abs/2106.13687](https://arxiv.org/abs/2106.13687)
|
CyberHarem/team_rocket_underling_pokemon
|
CyberHarem
| 2023-08-17T17:43:02Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/team_rocket_underling_pokemon",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-17T17:39:37Z |
---
license: mit
datasets:
- CyberHarem/team_rocket_underling_pokemon
pipeline_tag: text-to-image
tags:
- art
---
# Lora of team_rocket_underling_pokemon
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 1500, you need to download `1500/team_rocket_underling_pokemon.pt` as the embedding and `1500/team_rocket_underling_pokemon.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The trigger word is `team_rocket_underling_pokemon`.**
These are available steps:
| Steps | bikini | free | nude | Download |
|--------:|:-----------------------------------------|:-------------------------------------|:-----------------------------------------------|:---------------------------------------------------|
| 1500 |  |  | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/team_rocket_underling_pokemon.zip) |
| 1400 |  |  | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/team_rocket_underling_pokemon.zip) |
| 1300 |  |  | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/team_rocket_underling_pokemon.zip) |
| 1200 |  |  | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/team_rocket_underling_pokemon.zip) |
| 1100 |  |  | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/team_rocket_underling_pokemon.zip) |
| 1000 |  |  | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/team_rocket_underling_pokemon.zip) |
| 900 |  |  | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/team_rocket_underling_pokemon.zip) |
| 800 |  |  | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/team_rocket_underling_pokemon.zip) |
| 700 |  |  | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/team_rocket_underling_pokemon.zip) |
| 600 |  |  | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/team_rocket_underling_pokemon.zip) |
| 500 |  |  | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/team_rocket_underling_pokemon.zip) |
| 400 |  |  | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/team_rocket_underling_pokemon.zip) |
| 300 |  |  | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/team_rocket_underling_pokemon.zip) |
| 200 |  |  | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/team_rocket_underling_pokemon.zip) |
| 100 |  |  | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/team_rocket_underling_pokemon.zip) |
|
zarakiquemparte/beluga-limarp-7b-GGML
|
zarakiquemparte
| 2023-08-17T17:36:34Z | 0 | 0 | null |
[
"llama2",
"license:other",
"region:us"
] | null | 2023-08-05T21:55:40Z |
---
license: other
tags:
- llama2
---
GGML quantization of [Stable Beluga LimaRP 7B](https://huggingface.co/zarakiquemparte/beluga-limarp-7b)
|
zarakiquemparte/zaramix-l2-7b
|
zarakiquemparte
| 2023-08-17T17:36:13Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"llama2",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-15T21:24:39Z |
---
license: other
tags:
- llama2
---
# Model Card: Zaramix L2 7b
This model uses [Nous Hermes Llama2 7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b) (72%) as a base with [Stable Beluga 7b](https://huggingface.co/stabilityai/StableBeluga-7B) (28%) and the result of this merge was merged with [LimaRP LLama2 7B Lora](https://huggingface.co/lemonilia/limarp-llama2).
The model merge (Hermes and Stable Beluga) was done with this [script](https://github.com/zarakiquemparte/zaraki-tools/blob/main/merge-cli.py)
The LoRA was merged into the resulting model with this [script](https://github.com/zarakiquemparte/zaraki-tools/blob/main/apply-lora.py)
Merge illustration:

## Usage:
Since this is a merge between Nous Hermes, Stable Beluga and LimaRP, the following instruction formats should work:
Alpaca 2:
```
### Instruction:
<prompt>
### Response:
<leave a newline blank for model to respond>
```
LimaRP instruction format:
```
<<SYSTEM>>
<character card and system prompt>
<<USER>>
<prompt>
<<AIBOT>>
<leave a newline blank for model to respond>
```
## Bias, Risks, and Limitations
This model is not intended to supply factual information or advice in any form.
## Training Details
This model is merged and can be reproduced using the tools mentioned above. Please refer to all provided links for extra model-specific details.
|
zarakiquemparte/hermeslimarp-l2-7b-GGML
|
zarakiquemparte
| 2023-08-17T17:35:51Z | 0 | 3 | null |
[
"llama2",
"license:other",
"region:us"
] | null | 2023-07-27T14:24:25Z |
---
license: other
tags:
- llama2
---
GGML quantization of: https://huggingface.co/zarakiquemparte/hermeslimarp-l2-7b
If you need other quantized formats, use @TheBloke's:
- [GGML](https://huggingface.co/TheBloke/HermesLimaRP-L2-7B-GGML)
- [GPTQ](https://huggingface.co/TheBloke/HermesLimaRP-L2-7B-GPTQ)
|
zarakiquemparte/hermesboros-limarp-7b-GGML
|
zarakiquemparte
| 2023-08-17T17:35:44Z | 0 | 0 | null |
[
"llama2",
"license:other",
"region:us"
] | null | 2023-07-29T16:32:42Z |
---
license: other
tags:
- llama2
---
GGML quantization of: https://huggingface.co/zarakiquemparte/hermesboros-limarp-7b
|