modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---
alexanderjoossens/w2v2-libri-10min
|
alexanderjoossens
| 2023-06-17T12:16:40Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-05-22T09:09:45Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: w2v2-libri-10min
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# w2v2-libri-10min
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2500
- mixed_precision_training: Native AMP
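As a rough illustration, these settings map onto the `transformers` `TrainingArguments` API roughly as follows (a sketch only; the output directory name and the use of per-device batch sizes are assumptions, not taken from this card):
```python
from transformers import TrainingArguments

# Sketch of the configuration implied by the hyperparameter list above.
training_args = TrainingArguments(
    output_dir="w2v2-libri-10min",      # assumed output directory
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=2500,
    fp16=True,                          # "Native AMP" mixed-precision training
)
```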
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 1.18.3
- Tokenizers 0.13.3
|
ahmdrvldi26/cynantia
|
ahmdrvldi26
| 2023-06-17T12:09:21Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-17T12:04:01Z |
---
license: creativeml-openrail-m
---
|
SikongSphere/sikong-llama-7b-chinese
|
SikongSphere
| 2023-06-17T12:01:59Z | 7 | 2 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"generated_from_trainer",
"dataset:customized",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-17T09:09:19Z |
---
tags:
- generated_from_trainer
datasets:
- customized
model-index:
- name: finetune
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetune
This model is a fine-tuned version of [/root/autodl-tmp/sikong/repo/LMFlow/output_models/Linly-Chinese-LLaMA-7b-hf](https://huggingface.co//root/autodl-tmp/sikong/repo/LMFlow/output_models/Linly-Chinese-LLaMA-7b-hf) on the customized dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 8
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50.0
### Training results
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 2.0.0+cu117
- Datasets 2.10.1
- Tokenizers 0.13.3
|
bumstern/segmentation_model_russian_data
|
bumstern
| 2023-06-17T11:23:53Z | 0 | 0 |
pyannote-audio
|
[
"pyannote-audio",
"code",
"ru",
"license:mit",
"region:us"
] | null | 2023-06-17T11:02:51Z |
---
license: mit
language:
- ru
library_name: pyannote-audio
tags:
- code
---
# Segmentation model
This model was trained on AMI-MixHeadset and my own synthetic dataset of Russian speech.
Training time: 5 hours on GTX 3060
This model can be used as the segmentation model in the diarization pipeline from [pyannote/speaker-diarization](https://huggingface.co/pyannote/speaker-diarization).
| Benchmark | DER% |
| --------- |------|
| [AMI (*headset mix,*](https://groups.inf.ed.ac.uk/ami/corpus/) [*only_words*)](https://github.com/BUTSpeechFIT/AMI-diarization-setup) | 38.8 |
## Usage example
```python
import yaml
from yaml.loader import SafeLoader
import torch
from pyannote.audio import Model
from pyannote.audio.pipelines import SpeakerDiarization
# Load the fine-tuned segmentation checkpoint and the pretrained embedding model
segm_model = torch.load('model/segm_model.pth', map_location=torch.device('cpu'))
embed_model = Model.from_pretrained("pyannote/embedding", use_auth_token='ACCESS_TOKEN_GOES_HERE')

# Build a diarization pipeline around the fine-tuned segmentation model
diar_pipeline = SpeakerDiarization(
    segmentation=segm_model,
    segmentation_batch_size=16,
    clustering="AgglomerativeClustering",
    embedding=embed_model
)

# Instantiate the pipeline with the hyperparameters shipped in config.yaml
with open('model/config.yaml', 'r') as f:
    diar_config = yaml.load(f, Loader=SafeLoader)
diar_pipeline.instantiate(diar_config)

annotation = diar_pipeline('audio.wav')
```
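The returned `annotation` is a standard `pyannote.core.Annotation`, so the detected speaker turns can be listed, for example, like this (a minimal usage sketch):
```python
# Print each detected speaker turn as "start - end: speaker".
for segment, _, speaker in annotation.itertracks(yield_label=True):
    print(f"{segment.start:.1f}s - {segment.end:.1f}s: {speaker}")
```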
|
Enterprize1/q-taxi-v3
|
Enterprize1
| 2023-06-17T11:15:07Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-17T11:14:52Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.74
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Enterprize1/q-taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
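A self-contained variant of the same loading step, plus a greedy rollout, might look like this. It assumes the checkpoint follows the Deep Reinforcement Learning Course convention of a pickled dictionary with `env_id` and `qtable` keys, and uses the Gymnasium API; both are assumptions, not guarantees from this card.
```python
import pickle

import gymnasium as gym          # assumption: Gymnasium port of Taxi-v3
import numpy as np
from huggingface_hub import hf_hub_download

# Download and unpickle the saved model dictionary.
path = hf_hub_download(repo_id="Enterprize1/q-taxi-v3", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)       # assumed keys: "env_id", "qtable"

env = gym.make(model["env_id"])
qtable = np.array(model["qtable"])

# Run one episode, always taking the greedy action from the Q-table.
state, _ = env.reset()
total_reward, done = 0.0, False
while not done:
    action = int(np.argmax(qtable[state]))
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode return:", total_reward)
```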
|
kejolong/hdxduniform2.0
|
kejolong
| 2023-06-17T11:07:19Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-17T11:05:58Z |
---
license: creativeml-openrail-m
---
|
Xavia0012/bert-tomi
|
Xavia0012
| 2023-06-17T11:02:13Z | 119 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-12T19:49:11Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-tomi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-tomi
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4198
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 20 | 1.5815 |
| No log | 2.0 | 40 | 0.7518 |
| No log | 3.0 | 60 | 0.7153 |
| No log | 4.0 | 80 | 0.6354 |
| No log | 5.0 | 100 | 0.5895 |
| No log | 6.0 | 120 | 0.4882 |
| No log | 7.0 | 140 | 0.4590 |
| No log | 8.0 | 160 | 0.4303 |
| No log | 9.0 | 180 | 0.4644 |
| No log | 10.0 | 200 | 0.4416 |
| No log | 11.0 | 220 | 0.4348 |
| No log | 12.0 | 240 | 0.5306 |
| No log | 13.0 | 260 | 0.4412 |
| No log | 14.0 | 280 | 0.4053 |
| No log | 15.0 | 300 | 0.4185 |
| No log | 16.0 | 320 | 0.3982 |
| No log | 17.0 | 340 | 0.4291 |
| No log | 18.0 | 360 | 0.4316 |
| No log | 19.0 | 380 | 0.4328 |
| No log | 20.0 | 400 | 0.4198 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.10.1
- Tokenizers 0.12.1
|
octipuw/pixelcopter
|
octipuw
| 2023-06-17T10:54:58Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-17T10:45:08Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: pixelcopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 47.30 +/- 54.96
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Ditrip/ppo-LunarLander-v2
|
Ditrip
| 2023-06-17T10:09:42Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-03T15:16:23Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 272.14 +/- 12.85
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
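A minimal loading sketch, assuming the checkpoint was pushed under the usual filename `ppo-LunarLander-v2.zip` (the exact filename is not stated in this card, so check the repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.evaluation import evaluate_policy

# Assumed filename inside the repository.
checkpoint = load_from_hub(repo_id="Ditrip/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Quick sanity check against the reported mean reward.
eval_env = make_vec_env("LunarLander-v2", n_envs=1)
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```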
|
hts98/whisper-tiny-paper
|
hts98
| 2023-06-17T09:45:10Z | 77 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-16T15:36:23Z |
---
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-tiny-paper
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-paper
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6807
- Wer: 50.8558
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| No log | 1.0 | 72 | 0.6515 | 50.3886 |
| No log | 2.0 | 144 | 0.6566 | 50.8012 |
| No log | 3.0 | 216 | 0.6624 | 50.3713 |
| No log | 4.0 | 288 | 0.6684 | 50.8026 |
| No log | 5.0 | 360 | 0.6807 | 50.8558 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.7.0
- Tokenizers 0.13.3
|
parkyunmin/beatles_lyrics
|
parkyunmin
| 2023-06-17T09:38:03Z | 198 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-17T09:11:41Z |
---
tags:
- generated_from_trainer
model-index:
- name: beatles_lyrics
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beatles_lyrics
This model is a fine-tuned version of [wvangils/GPT-Medium-Beatles-Lyrics-finetuned-newlyrics](https://huggingface.co/wvangils/GPT-Medium-Beatles-Lyrics-finetuned-newlyrics) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0584
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 50 | 1.1221 |
| No log | 2.0 | 100 | 1.0710 |
| No log | 3.0 | 150 | 1.0584 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
ganghe74/distilbert-base-uncased-finetuned-emotion
|
ganghe74
| 2023-06-17T09:34:40Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-17T09:13:51Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9225
- name: F1
type: f1
value: 0.922469380812715
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2170
- Accuracy: 0.9225
- F1: 0.9225
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8057 | 1.0 | 250 | 0.3170 | 0.905 | 0.9023 |
| 0.242 | 2.0 | 500 | 0.2170 | 0.9225 | 0.9225 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.13.0
- Tokenizers 0.13.3
|
mustika/alan2
|
mustika
| 2023-06-17T08:36:10Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-17T08:34:12Z |
---
license: creativeml-openrail-m
---
|
AriesChen/GeoLLM
|
AriesChen
| 2023-06-17T08:32:06Z | 195 | 3 |
transformers
|
[
"transformers",
"pytorch",
"chatglm",
"feature-extraction",
"custom_code",
"region:us"
] |
feature-extraction
| 2023-06-17T08:30:04Z |
# GeoLLM
**Large Language Model for Geology**
Large language models are used to organize geology-related knowledge (geology, geophysics, geophysical logging, etc.). This version uses the [ChatGLM-6B](https://github.com/THUDM/ChatGLM-6B) base model and fine-tunes it using P-tuning.
---
### Sedimentology
Sedimentology, the study of sedimentary rocks and the processes by which they are formed, includes and is related to a large number of phenomena. It covers the five fundamental processes grouped under the term sedimentation: weathering, erosion, transportation, deposition, and diagenesis.
**Datasets:** 《沉积岩石学(第四版)》 (*Sedimentary Petrology*, 4th edition), 朱筱敏 (Zhu Xiaomin)
**Model:** ChatGLM-6B
**Fine-tuning:** P-Tuning v2
**Before fine-tuning**
```
response, history = model.chat(tokenizer, "什么是沉积岩石学?", history=[])
response
沉积岩石学是一门研究沉积岩的形成、结构和成分的学科,主要关注地球表面上不同条件下的沉积过程和岩石形成机制,包括岩浆沉积、冰川沉积、洪水沉积、海洋沉积等。沉积岩石学在地质学、地球物理学、地球化学、材料科学等领域都有广泛应用,因为沉积岩是许多自然和人工地质工程的基础,如地质勘探、矿产资源开发、土木工程、环境科学等。沉积岩石学的研究对象包括沉积岩的地质特征、成分和构造特征,以及沉积岩与地壳、岩浆和变质岩的关系。研究方法包括沉积岩分析、岩相学分析、岩浆动力学分析等。
```
*English translation of the prompt: "What is sedimentary petrology?" Response: "Sedimentary petrology is a discipline that studies the formation, structure, and composition of sedimentary rocks, focusing mainly on depositional processes and rock-forming mechanisms under different conditions at the Earth's surface, including magmatic, glacial, flood, and marine deposition. It is widely applied in geology, geophysics, geochemistry, materials science, and other fields, because sedimentary rocks are the basis of many natural and engineered geological projects such as geological exploration, mineral resource development, civil engineering, and environmental science. Its objects of study include the geological, compositional, and structural characteristics of sedimentary rocks and their relationships with the crust, magma, and metamorphic rocks; research methods include sedimentary rock analysis, lithofacies analysis, and magma dynamics analysis."*
**After fine-tuning**
```
response, history = model.chat(tokenizer, "什么是沉积岩石学?", history=[])
response
沉积岩石学是研究沉积岩的物质成分、结构构造、岩石类型、沉积物沉积作用和沉积物质形成环境以及沉积岩分布规律的一门科学。
```
*English translation of the response: "Sedimentary petrology is the science that studies the material composition, textures and structures, rock types, depositional processes, depositional environments, and distribution patterns of sedimentary rocks."*
**Error Analysis:** We fine-tuned the model on approximately 500 entries drawn from academic textbooks, applying P-Tuning v2 for optimization; detailed hyperparameter tuning has not yet been carried out. Given the small amount of training data and the limited fine-tuning, the outputs may still exhibit some irregularities.
**Results Analysis:** The fine-tuned model gives noticeably more reliable (more precise and concise) answers within the specialized knowledge domain. Going forward, we will keep enriching the training data and refining our fine-tuning methodology to obtain better results.
---
### TODO
1. Geophysical Exploration
2. Geophysical logging
3. Petroleum Geology
etc...
---
### Related Resources
1. [ChatGLM-6B](https://github.com/THUDM/ChatGLM-6B): ChatGLM-6B is an open bilingual language model based on the General Language Model (GLM) framework, with 6.2 billion parameters.
|
okazaki-lab/ss_wsd
|
okazaki-lab
| 2023-06-17T08:21:08Z | 0 | 0 |
transformers
|
[
"transformers",
"word_sense_disambiguation",
"en",
"dataset:SemCor",
"dataset:WordNet",
"dataset:WSD_Evaluation_Framework",
"arxiv:2304.11340",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-06-17T07:20:37Z |
---
license: apache-2.0
language:
- en
tags:
- word_sense_disambiguation
library_name: transformers
datasets:
- SemCor
- WordNet
- WSD_Evaluation_Framework
metrics:
- f1
---
# Semantic Specialization for Knowledge-based Word Sense Disambiguation
* This repository contains the trained model (projection heads) and sense/context embeddings used for training and evaluating the model.
* If you want to learn how to use these files, please refer to the [semantic_specialization_for_wsd](https://github.com/s-mizuki-nlp/semantic_specialization_for_wsd) repository.
## Trained Model (Projection Heads)
* File: checkpoints/baseline/last.ckpt
* This is one of the trained models used for reporting the main results (Table 2 in [Mizuki and Okazaki, EACL2023]).
NOTE: Five runs were performed in total.
* The main hyperparameters used for training are as follows:
| Argument name | Value | Description |
|----------------------------------------------------------------|----------------------------|------------------------------------------------------------------------------------|
| max_epochs | 15 | Maximum number of training epochs |
| cfg_similarity_class.temperature ($\beta^{-1}$) | 0.015625 (=1/64) | Temperature parameter for the contrastive loss |
| batch_size ($N_B$) | 256 | Number of samples in each batch for the attract-repel and self-training objectives |
| coef_max_pool_margin_loss ($\alpha$) | 0.2 | Coefficient for the self-training loss |
| cfg_gloss_projection_head.n_layer | 2 | Number of FFNN layers for the projection heads |
| cfg_gloss_projection_head.max_l2_norm_ratio ($\epsilon$) | 0.015 | Hyperparameter for the distance constraint integrated in the projection heads |
## Sense/context embeddings
* Directory: `data/bert_embeddings/`
* Sense embeddings: `bert-large-cased_WordNet_Gloss_Corpus.hdf5`
* Context embeddings for the self-training objective: `bert-large-cased_SemCor.hdf5`
* Context embeddings for evaluating the WSD task: `bert-large-cased_WSDEval-ALL.hdf5`
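These are plain HDF5 files, so their contents can be inspected with `h5py` before they are plugged into the training/evaluation code in the repository above; the sketch below only walks one file and prints dataset shapes (the actual key layout is defined by that repository):
```python
import h5py

# Walk the HDF5 tree and print the name, shape, and dtype of every dataset.
with h5py.File("data/bert_embeddings/bert-large-cased_WSDEval-ALL.hdf5", "r") as f:
    def show(name, obj):
        if isinstance(obj, h5py.Dataset):
            print(name, obj.shape, obj.dtype)
    f.visititems(show)
```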
# Reference
```
@inproceedings{Mizuki:EACL2023,
title = "Semantic Specialization for Knowledge-based Word Sense Disambiguation",
author = "Mizuki, Sakae and Okazaki, Naoaki",
booktitle = "Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
series = {EACL},
month = may,
year = "2023",
address = "Dubrovnik, Croatia",
publisher = "Association for Computational Linguistics",
pages = "3449--3462",
}
```
* [arXiv version](https://arxiv.org/abs/2304.11340) is also available.
|
SM16/TreeClassifier
|
SM16
| 2023-06-17T08:15:11Z | 218 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-17T07:27:25Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: TreeClassifier
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
# TreeClassifier
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Pepper Tree

#### Weeping Willow

|
TheBloke/robin-65B-v2-GGML
|
TheBloke
| 2023-06-17T08:01:48Z | 0 | 17 | null |
[
"license:other",
"region:us"
] | null | 2023-06-16T21:59:56Z |
---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# OptimalScale's Robin 65B v2 GGML
These files are GGML format model files for [OptimalScale's Robin 65B v2](https://huggingface.co/OptimalScale/robin-65b-v2-delta).
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [ParisNeo/GPT4All-UI](https://github.com/ParisNeo/gpt4all-ui)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
* [ctransformers](https://github.com/marella/ctransformers)
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/robin-65B-v2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/robin-65B-v2-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/robin-65b-v2-fp16)
## Prompt template
```
A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions
###Human: prompt
###Assistant:
```
<!-- compatibility_ggml start -->
## Compatibility
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
The files using these 'original' quantisation methods were made with an older version of llama.cpp so that they remain compatible with llama.cpp as of May 19th, commit `2d5db48`.
These are guaranteed to be compatible with any UIs, tools and libraries released since late May.
### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`.
They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python and ctransformers. Other tools and libraries may or may not be compatible - check their documentation if in doubt.
## Explanation of the new k-quant methods
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference from the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| robin-65b.ggmlv3.q2_K.bin | q2_K | 2 | 27.45 GB | 29.95 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| robin-65b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 34.65 GB | 37.15 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| robin-65b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 31.50 GB | 34.00 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| robin-65b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 28.16 GB | 30.66 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| robin-65b.ggmlv3.q4_0.bin | q4_0 | 4 | 36.73 GB | 39.23 GB | Original llama.cpp quant method, 4-bit. |
| robin-65b.ggmlv3.q4_1.bin | q4_1 | 4 | 40.81 GB | 43.31 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| robin-65b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 39.35 GB | 41.85 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| robin-65b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 36.80 GB | 39.30 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| robin-65b.ggmlv3.q5_0.bin | q5_0 | 5 | 44.89 GB | 47.39 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| robin-65b.ggmlv3.q5_1.bin | q5_1 | 5 | 48.97 GB | 51.47 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| robin-65b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 46.24 GB | 48.74 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| robin-65b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 44.92 GB | 47.42 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| robin-65b.ggmlv3.q6_K.bin | q6_K | 6 | 53.56 GB | 56.06 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
| robin-65b.ggmlv3.q8_0.bin | q8_0 | 8 | 69.370 GB | 71.87 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
### q6_K and q8_0 files require expansion from archive
**Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the q6_K and q8_0 files as multi-part ZIP files. They are not compressed; they simply store the .bin file in two parts.
### q6_K
Please download:
* `robin-65b.ggmlv3.q6_K.zip`
* `robin-65b.ggmlv3.q6_K.z01`
### q8_0
Please download:
* `robin-65b.ggmlv3.q8_0.zip`
* `robin-65b.ggmlv3.q8_0.z01`
Then extract the .zip archive. This will expand both parts automatically. On Linux I found I had to use `7zip` - the basic `unzip` tool did not work. Example:
```
sudo apt update -y && sudo apt install 7zip
7zz x robin-65b.ggmlv3.q6_K.zip
```
Once the `.bin` is extracted, you can delete the `.zip` and `.z01` files.
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m robin-65b.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions\n###Human: write a story about llamas\n###Assistant:"
```
Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: vamX, K, Jonathan Leane, Lone Striker, Sean Connelly, Chris McCloskey, WelcomeToTheClub, Nikolai Manek, John Detwiler, Kalila, David Flickinger, Fen Risland, subjectnull, Johann-Peter Hartmann, Talal Aujan, John Villwock, senxiiz, Khalefa Al-Ahmad, Kevin Schuppel, Alps Aficionado, Derek Yates, Mano Prime, Nathan LeClaire, biorpg, trip7s trip, Asp the Wyvern, chris gileta, Iucharbius , Artur Olbinski, Ai Maven, Joseph William Delisle, Luke Pendergrass, Illia Dulskyi, Eugene Pentland, Ajan Kanaga, Willem Michiel, Space Cruiser, Pyrater, Preetika Verma, Junyu Yang, Oscar Rangel, Spiking Neurons AB, Pierre Kircher, webtim, Cory Kujawski, terasurfer , Trenton Dambrowitz, Gabriel Puliatti, Imad Khwaja, Luke.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: OptimalScale's Robin 65B v2
No model card provided in source repository.
|
kejolong/hdxduniform
|
kejolong
| 2023-06-17T07:51:28Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-17T07:48:57Z |
---
license: creativeml-openrail-m
---
|
ksanxx/choi_beomgyu
|
ksanxx
| 2023-06-17T07:39:53Z | 0 | 1 |
transformers
|
[
"transformers",
"music",
"legal",
"kpop",
"idol",
"boy",
"audio-classification",
"ko",
"en",
"dataset:OpenAssistant/oasst1",
"doi:10.57967/hf/0781",
"license:openrail",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-06-17T06:24:01Z |
---
license: openrail
language:
- ko
- en
metrics:
- bertscore
library_name: transformers
pipeline_tag: audio-classification
tags:
- music
- legal
- kpop
- idol
- boy
datasets:
- OpenAssistant/oasst1
---
|
nolanaatama/spngbbsqrpnts1kpchs99kstpslhmnn
|
nolanaatama
| 2023-06-17T07:14:41Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-17T07:05:05Z |
---
license: creativeml-openrail-m
---
|
Csuarezg/SBERTA-finetuned
|
Csuarezg
| 2023-06-17T07:04:30Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"es",
"dataset:xnli",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2023-06-12T21:04:08Z |
---
datasets:
- xnli
language:
- es
library_name: transformers
---
|
kjiwon1222/my_awesome_eli5_clm-model
|
kjiwon1222
| 2023-06-17T06:54:34Z | 217 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-17T06:32:49Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_clm-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7506
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.8621 | 1.0 | 1137 | 3.7690 |
| 3.7782 | 2.0 | 2274 | 3.7533 |
| 3.7245 | 3.0 | 3411 | 3.7506 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Arindam75/Reinforce-pixelcopter-v1
|
Arindam75
| 2023-06-17T06:22:04Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-17T06:21:05Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-pixelcopter-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 18.90 +/- 13.60
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
vedu/bart-large-perturbed
|
vedu
| 2023-06-17T06:21:37Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"rust",
"bart",
"feature-extraction",
"en",
"arxiv:1910.13461",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2023-06-16T20:22:31Z |
---
license: apache-2.0
language: en
---
# BART (large-sized model)
## Model description
BART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text.
BART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering).
The weights shared here are effectively those of facebook/bart-large, but with noise added to the BOS embedding to assist fine-tuning.
## Intended uses & limitations
There have been quite a few issues related to fine-tuning BART for text generation, and this repo implements the solution discussed in [#15559](https://github.com/huggingface/transformers/issues/15559):
adding some noise to the pre-trained model's BOS embedding. This seems to solve the problem of endless BOS generation in a fine-tuned BART model.
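For illustration, that kind of perturbation can be sketched as follows; the noise scale is an assumption, since the card does not state the exact value or procedure used:
```python
import torch
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

# Add small Gaussian noise to the BOS row of the shared input embedding matrix.
# The 0.01 scale is an illustrative choice, not the value used for this checkpoint.
with torch.no_grad():
    bos_id = tokenizer.bos_token_id
    embeddings = model.get_input_embeddings().weight
    embeddings[bos_id] += 0.01 * torch.randn_like(embeddings[bos_id])
```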
You can use the raw model for text infilling. However, the model is mostly meant to be fine-tuned on a supervised dataset. See the [model hub](https://huggingface.co/models?search=bart) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model in PyTorch:
```python
from transformers import BartTokenizer, BartModel
tokenizer = BartTokenizer.from_pretrained('vedu/bart-large-perturbed')
model = BartModel.from_pretrained('vedu/bart-large-perturbed')
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1910-13461,
author = {Mike Lewis and
Yinhan Liu and
Naman Goyal and
Marjan Ghazvininejad and
Abdelrahman Mohamed and
Omer Levy and
Veselin Stoyanov and
Luke Zettlemoyer},
title = {{BART:} Denoising Sequence-to-Sequence Pre-training for Natural Language
Generation, Translation, and Comprehension},
journal = {CoRR},
volume = {abs/1910.13461},
year = {2019},
url = {http://arxiv.org/abs/1910.13461},
eprinttype = {arXiv},
eprint = {1910.13461},
timestamp = {Thu, 31 Oct 2019 14:02:26 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1910-13461.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
pigeon01/sungju-finetuned-zh-to-ko1
|
pigeon01
| 2023-06-17T05:47:05Z | 228 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"m2m_100",
"text2text-generation",
"translation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-06-17T05:12:16Z |
---
license: mit
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: sungju-finetuned-zh-to-ko1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sungju-finetuned-zh-to-ko1
This model is a fine-tuned version of [alirezamsh/small100](https://huggingface.co/alirezamsh/small100) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0467
- Bleu: 10.2096
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
maus/ppo-LunarLander-v2
|
maus
| 2023-06-17T05:37:27Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-17T04:18:32Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 256.88 +/- 23.67
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
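A minimal loading sketch, assuming the checkpoint was pushed under the usual filename `ppo-LunarLander-v2.zip` (not stated in this card):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Assumed filename inside the repository; check the file list if it differs.
checkpoint = load_from_hub(repo_id="maus/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```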
|
sunflowermarshmallows/dqn-SpaceInvadersNoFrameskip-v4
|
sunflowermarshmallows
| 2023-06-17T05:25:16Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-17T05:24:36Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 629.00 +/- 184.89
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga sunflowermarshmallows -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga sunflowermarshmallows -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga sunflowermarshmallows
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
jcnecio/a2c-AntBulletEnv-v0
|
jcnecio
| 2023-06-17T04:48:36Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-17T04:47:15Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1287.21 +/- 370.73
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
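A minimal loading sketch, assuming the checkpoint filename `a2c-AntBulletEnv-v0.zip` (not stated in this card) and the older Gym API used by `pybullet_envs`, which registers the AntBulletEnv-v0 environment:
```python
import pybullet_envs  # noqa: F401 -- registers AntBulletEnv-v0 with gym
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Assumed filename inside the repository; check the file list if it differs.
checkpoint = load_from_hub(repo_id="jcnecio/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)

# Roll the policy out for a few steps. If training used VecNormalize,
# its saved statistics would also need to be restored (not shown here).
env = gym.make("AntBulletEnv-v0")
obs = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```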
|
ALPHONSE28/EQUIPO06SEMANA09
|
ALPHONSE28
| 2023-06-17T04:33:00Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-16T06:38:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: EQUIPO06SEMANA09
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# EQUIPO06SEMANA09
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2161
- Accuracy: 0.9233
- F1: 0.9514
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
fgeyer/reinforce-CartPole-v1
|
fgeyer
| 2023-06-17T04:04:51Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-17T03:59:14Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 1000.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
AustinCarthy/OnlyPhishGPT2_subdomain_100KP_BFall_fromB_200K_topP_0.75_ratio5
|
AustinCarthy
| 2023-06-17T04:00:39Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2023-06-16T22:42:13Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: OnlyPhishGPT2_subdomain_100KP_BFall_fromB_200K_topP_0.75_ratio5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# OnlyPhishGPT2_subdomain_100KP_BFall_fromB_200K_topP_0.75_ratio5
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the Train benign: Fall, Test Benign: Fall, Train phish: Fall, Test phish: Fall, generated url dataset: generated_phish_OnlyPhishGPT2_using_benigh_200K_top_p_0.75 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0194
- Accuracy: 0.9979
- F1: 0.9778
- Precision: 0.9987
- Recall: 0.9578
- Roc Auc Score: 0.9789
- Tpr At Fpr 0.01: 0.9642
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0035 | 1.0 | 56250 | 0.0126 | 0.9975 | 0.9736 | 0.9917 | 0.9562 | 0.9779 | 0.9052 |
| 0.002 | 2.0 | 112500 | 0.0159 | 0.9977 | 0.9755 | 0.9975 | 0.9544 | 0.9771 | 0.9466 |
| 0.0008 | 3.0 | 168750 | 0.0136 | 0.9981 | 0.9793 | 0.9977 | 0.9616 | 0.9807 | 0.958 |
| 0.0 | 4.0 | 225000 | 0.0235 | 0.9973 | 0.9708 | 0.9992 | 0.944 | 0.9720 | 0.9574 |
| 0.0004 | 5.0 | 281250 | 0.0194 | 0.9979 | 0.9778 | 0.9987 | 0.9578 | 0.9789 | 0.9642 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
2022happy/swin-tiny-patch4-window7-224-finetuned-eurosat
|
2022happy
| 2023-06-17T03:51:48Z | 245 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:cifar10",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-15T13:46:26Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cifar10
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: cifar10
type: cifar10
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.97
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the cifar10 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0893
- Accuracy: 0.97
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5255 | 1.0 | 351 | 0.1262 | 0.9596 |
| 0.3808 | 2.0 | 703 | 0.1031 | 0.9652 |
| 0.3268 | 2.99 | 1053 | 0.0893 | 0.97 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
digiplay/ShowmakerMix_v1
|
digiplay
| 2023-06-17T03:05:06Z | 310 | 3 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-13T01:35:32Z |
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/16032/showmakermix
Original Author's DEMO image:

|
nolanaatama/dmnslyrkmtsnybnmstyllr
|
nolanaatama
| 2023-06-17T02:31:22Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-17T02:27:58Z |
---
license: creativeml-openrail-m
---
|
DreamerGPT/D7b-5-1
|
DreamerGPT
| 2023-06-17T01:38:49Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2023-06-17T01:20:31Z |
---
license: apache-2.0
---
# D7b-5-1
[https://github.com/DreamerGPT/DreamerGPT](https://github.com/DreamerGPT/DreamerGPT)
|
darshan7/Model_xlnet_results
|
darshan7
| 2023-06-17T01:22:18Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"xlnet",
"question-answering",
"generated_from_keras_callback",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-14T19:04:11Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: darshan7/Model_xlnet_results
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# darshan7/Model_xlnet_results
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0058
- Validation Loss: 0.0110
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 181655, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
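For reference, the optimizer dictionary above corresponds roughly to the following Keras setup (a sketch reconstructed from the listed config, not taken from the original training script):
```python
import tensorflow as tf

# Linear (power=1.0) polynomial decay from 2e-5 to 0 over 181,655 steps.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=181655,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule, beta_1=0.9, beta_2=0.999, epsilon=1e-08
)
```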
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.0392 | 0.0262 | 0 |
| 0.0211 | 0.0185 | 1 |
| 0.0151 | 0.0161 | 2 |
| 0.0110 | 0.0127 | 3 |
| 0.0074 | 0.0110 | 4 |
| 0.0058 | 0.0110 | 5 |
| 0.0058 | 0.0110 | 6 |
| 0.0058 | 0.0110 | 7 |
| 0.0059 | 0.0110 | 8 |
| 0.0058 | 0.0110 | 9 |
### Framework versions
- Transformers 4.30.1
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
DreamerGPT/D13b-3-3
|
DreamerGPT
| 2023-06-17T01:21:55Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2023-06-17T00:58:23Z |
---
license: apache-2.0
---
# D13b-3-3
[https://github.com/DreamerGPT/DreamerGPT](https://github.com/DreamerGPT/DreamerGPT)
|
UofA-LINGO/text-to-triplets-explanation-v2
|
UofA-LINGO
| 2023-06-17T00:41:02Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2023-06-17T00:38:48Z |
---
license: mit
---
LoRA weights for `LLaMA-7B`
Trained on 'taesiri/webnlg-triplets-explanation-v1' for 4 epochs.
Command:
```
WORLD_SIZE=2 CUDA_VISIBLE_DEVICES=0,1 torchrun --nproc_per_node=2 --master_port=1234 finetune.py --base_model='decapoda-research/llama-7b-hf' --data_path 'taesiri/webnlg-triplets-explanation-v1' --num_epochs=4 --cutoff_len=512 --group_by_length --lora_target_modules='[q_proj,k_proj,v_proj,o_proj]' --lora_r=8 --micro_batch_size=8 --batch_size=32
```
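Since these are adapter weights produced by an alpaca-lora style `finetune.py` run, a typical way to use them is to load them with PEFT on top of the base model; a minimal sketch, assuming the standard PEFT adapter layout (not confirmed by this card):
```python
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

base = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf", torch_dtype=torch.float16, device_map="auto"
)
tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")

# Apply the LoRA weights from this repository on top of the base model.
model = PeftModel.from_pretrained(base, "UofA-LINGO/text-to-triplets-explanation-v2")
model.eval()
```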
|
UofA-LINGO/text-to-triplets-explanation-v1
|
UofA-LINGO
| 2023-06-17T00:39:59Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2023-06-16T22:26:38Z |
---
license: mit
---
LoRA weights for `LLaMA-7B`
Trained on 'taesiri/webnlg-triplets-explanation-v1' for 2 epochs.
|
reginaboateng/Scibert_bert_adapter_ner_pico_for_classification_task
|
reginaboateng
| 2023-06-17T00:06:31Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:pico_ner",
"bert",
"dataset:reginaboateng/cleaned_ebmnlp_pico",
"region:us"
] | null | 2023-06-17T00:06:25Z |
---
tags:
- adapterhub:pico_ner
- bert
- adapter-transformers
datasets:
- reginaboateng/cleaned_ebmnlp_pico
---
# Adapter `reginaboateng/Scibert_bert_adapter_ner_pico_for_classification_task` for allenai/scibert_scivocab_uncased
An [adapter](https://adapterhub.ml) for the `allenai/scibert_scivocab_uncased` model that was trained on the [pico_ner](https://adapterhub.ml/explore/pico_ner/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("allenai/scibert_scivocab_uncased")
adapter_name = model.load_adapter("reginaboateng/Scibert_bert_adapter_ner_pico_for_classification_task", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
H4nan/dqn-SpaceInvadersNoFrameskip-v4
|
H4nan
| 2023-06-16T23:54:53Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-23T18:30:15Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 537.00 +/- 181.32
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga H4nan -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga H4nan -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga H4nan
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
TheBloke/robin-33B-v2-GGML
|
TheBloke
| 2023-06-16T23:31:16Z | 0 | 5 | null |
[
"license:other",
"region:us"
] | null | 2023-06-16T18:09:39Z |
---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# OptimalScale's Robin 33B v2 GGML
These files are GGML format model files for [OptimalScale's Robin 33B v2](https://huggingface.co/OptimalScale/robin-33b-v2-delta).
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [ParisNeo/GPT4All-UI](https://github.com/ParisNeo/gpt4all-ui)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
* [ctransformers](https://github.com/marella/ctransformers)
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/robin-33B-v2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/robin-33B-v2-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/robin-33B-v2-fp16)
## Prompt template
```
A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions
###Human: prompt
###Assistant:
```
<!-- compatibility_ggml start -->
## Compatibility
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
I have quantized these 'original' quantisation methods using an older version of llama.cpp so that they remain compatible with llama.cpp as of May 19th, commit `2d5db48`.
These are guaranteed to be compatible with any UIs, tools and libraries released since late May.
### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`.
They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python and ctransformers. Other tools and libraries may or may not be compatible - check their documentation if in doubt.
## Explanation of the new k-quant methods
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| robin-33b.ggmlv3.q2_K.bin | q2_K | 2 | 13.71 GB | 16.21 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| robin-33b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 17.28 GB | 19.78 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| robin-33b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 15.72 GB | 18.22 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| robin-33b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 14.06 GB | 16.56 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| robin-33b.ggmlv3.q4_0.bin | q4_0 | 4 | 18.30 GB | 20.80 GB | Original llama.cpp quant method, 4-bit. |
| robin-33b.ggmlv3.q4_1.bin | q4_1 | 4 | 20.33 GB | 22.83 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| robin-33b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 19.62 GB | 22.12 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| robin-33b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 18.36 GB | 20.86 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| robin-33b.ggmlv3.q5_0.bin | q5_0 | 5 | 22.37 GB | 24.87 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| robin-33b.ggmlv3.q5_1.bin | q5_1 | 5 | 24.40 GB | 26.90 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| robin-33b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 23.05 GB | 25.55 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| robin-33b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 22.40 GB | 24.90 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| robin-33b.ggmlv3.q6_K.bin | q6_K | 6 | 26.69 GB | 29.19 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
| robin-33b.ggmlv3.q8_0.bin | q8_0 | 8 | 34.56 GB | 37.06 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m robin-33b.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions.\n###Human: write a story about llamas\n###Assistant:"
```
Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
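To use the same files from Python, here is a minimal `llama-cpp-python` sketch (assuming the package is installed and `robin-33b.ggmlv3.q5_0.bin` has been downloaded locally):
```python
from llama_cpp import Llama

# Point model_path at whichever quantised file you downloaded.
llm = Llama(model_path="robin-33b.ggmlv3.q5_0.bin", n_ctx=2048)
prompt = (
    "A chat between a curious human and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the human's questions\n"
    "###Human: write a story about llamas\n###Assistant:"
)
output = llm(prompt, max_tokens=256, temperature=0.7, repeat_penalty=1.1)
print(output["choices"][0]["text"])
```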
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: vamX, K, Jonathan Leane, Lone Striker, Sean Connelly, Chris McCloskey, WelcomeToTheClub, Nikolai Manek, John Detwiler, Kalila, David Flickinger, Fen Risland, subjectnull, Johann-Peter Hartmann, Talal Aujan, John Villwock, senxiiz, Khalefa Al-Ahmad, Kevin Schuppel, Alps Aficionado, Derek Yates, Mano Prime, Nathan LeClaire, biorpg, trip7s trip, Asp the Wyvern, chris gileta, Iucharbius , Artur Olbinski, Ai Maven, Joseph William Delisle, Luke Pendergrass, Illia Dulskyi, Eugene Pentland, Ajan Kanaga, Willem Michiel, Space Cruiser, Pyrater, Preetika Verma, Junyu Yang, Oscar Rangel, Spiking Neurons AB, Pierre Kircher, webtim, Cory Kujawski, terasurfer , Trenton Dambrowitz, Gabriel Puliatti, Imad Khwaja, Luke.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: OptimalScale's Robin 33B v2
No model card provided in source repository.
|
sam34738/xlmroberta
|
sam34738
| 2023-06-16T23:06:19Z | 164 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-16T22:33:20Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlmroberta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmroberta
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2550
- Accuracy: 0.6611
- F1: 0.6772
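A minimal inference sketch (assuming the standard `transformers` text-classification pipeline; label names depend on the undocumented fine-tuning dataset):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="sam34738/xlmroberta")
# Replace the example text with an input from the task the model was fine-tuned on.
print(classifier("This is a placeholder sentence."))
```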
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-05
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7242 | 1.0 | 2100 | 0.7533 | 0.6678 | 0.6863 |
| 0.8284 | 2.0 | 4200 | 0.9363 | 0.6639 | 0.6807 |
| 0.8988 | 3.0 | 6300 | 1.2550 | 0.6611 | 0.6772 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
ghze/Taxi_v3
|
ghze
| 2023-06-16T23:00:53Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-16T23:00:48Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi_v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="ghze/Taxi_v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
amjadfqs/finalProject
|
amjadfqs
| 2023-06-16T22:28:48Z | 15 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-15T17:30:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- precision
model-index:
- name: finalProject
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9890023566378633
- name: Precision
type: precision
value: 0.9894345375382527
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finalProject
This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224-in22k](https://huggingface.co/microsoft/swin-base-patch4-window7-224-in22k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0411
- Accuracy: 0.9890
- F1 Score: 0.9892
- Precision: 0.9894
- Sensitivity: 0.9891
- Specificity: 0.9972
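For quick inference, a minimal sketch with the `transformers` image-classification pipeline (the image path is only a placeholder):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="amjadfqs/finalProject")
print(classifier("example_image.png"))  # path to a local image
```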
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Score | Precision | Sensitivity | Specificity |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:---------:|:-----------:|:-----------:|
| 0.3384 | 1.0 | 30 | 0.2387 | 0.9144 | 0.9163 | 0.9197 | 0.9146 | 0.9781 |
| 0.1608 | 2.0 | 60 | 0.1635 | 0.9466 | 0.9476 | 0.9485 | 0.9474 | 0.9865 |
| 0.0953 | 3.0 | 90 | 0.0915 | 0.9698 | 0.9703 | 0.9706 | 0.9706 | 0.9924 |
| 0.0573 | 4.0 | 120 | 0.1125 | 0.9607 | 0.9617 | 0.9634 | 0.9621 | 0.9901 |
| 0.0335 | 5.0 | 150 | 0.0536 | 0.9827 | 0.9831 | 0.9837 | 0.9826 | 0.9957 |
| 0.0185 | 6.0 | 180 | 0.0543 | 0.9827 | 0.9830 | 0.9837 | 0.9825 | 0.9957 |
| 0.0226 | 7.0 | 210 | 0.0478 | 0.9859 | 0.9861 | 0.9866 | 0.9856 | 0.9965 |
| 0.0131 | 8.0 | 240 | 0.0468 | 0.9843 | 0.9846 | 0.9847 | 0.9846 | 0.9961 |
| 0.0087 | 9.0 | 270 | 0.0411 | 0.9890 | 0.9892 | 0.9894 | 0.9891 | 0.9972 |
| 0.0043 | 10.0 | 300 | 0.0376 | 0.9886 | 0.9888 | 0.9890 | 0.9887 | 0.9971 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.0
- Tokenizers 0.13.3
|
sam34738/indicbert
|
sam34738
| 2023-06-16T22:03:57Z | 167 | 0 |
transformers
|
[
"transformers",
"pytorch",
"albert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-16T21:56:33Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: indicbert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# indicbert
This model is a fine-tuned version of [ai4bharat/indic-bert](https://huggingface.co/ai4bharat/indic-bert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9751
- Accuracy: 0.6689
- F1: 0.6899
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-05
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7041 | 1.0 | 2100 | 0.7416 | 0.6589 | 0.6710 |
| 0.8083 | 2.0 | 4200 | 0.9751 | 0.6689 | 0.6899 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
maren-hugg/xlm-roberta-base-finetuned-panx-en-custom
|
maren-hugg
| 2023-06-16T21:56:26Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-12T06:49:39Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
- precision
- recall
- accuracy
model-index:
- name: xlm-roberta-base-finetuned-panx-en-custom
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en-custom
This model is a fine-tuned version of [maren-hugg/xlm-roberta-base-finetuned-panx-en](https://huggingface.co/maren-hugg/xlm-roberta-base-finetuned-panx-en) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1045
- F1: 0.8782
- Precision: 0.8496
- Recall: 0.9088
- Accuracy: 0.9754
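A minimal token-classification sketch (assuming the usual `transformers` NER pipeline; the entity labels come from the fine-tuning data):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="maren-hugg/xlm-roberta-base-finetuned-panx-en-custom",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)
print(ner("Jeff Dean works at Google in Mountain View."))
```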
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.886597454037411e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Precision | Recall | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:---------:|:------:|:--------:|
| 0.128 | 0.75 | 24 | 0.1087 | 0.8514 | 0.8299 | 0.8740 | 0.9713 |
| 0.074 | 1.5 | 48 | 0.1006 | 0.8637 | 0.8505 | 0.8773 | 0.9750 |
| 0.0506 | 2.25 | 72 | 0.0987 | 0.8728 | 0.8587 | 0.8872 | 0.9749 |
| 0.0393 | 3.0 | 96 | 0.1045 | 0.8782 | 0.8496 | 0.9088 | 0.9754 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
stanford-crfm/music-small-ar-inter-100k
|
stanford-crfm
| 2023-06-16T21:28:37Z | 182 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"arxiv:2306.08620",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2023-06-05T00:04:27Z |
---
license: apache-2.0
---
This is a Small (112M parameter) Transformer trained for 100k steps on interarrival-time encoded music from the [Lakh MIDI dataset](https://colinraffel.com/projects/lmd/).
# References for the Anticipatory Music Transformer
The Anticipatory Music Transformer paper is available on [ArXiv](http://arxiv.org/abs/2306.08620).
The full model card is available [here](https://johnthickstun.com/assets/pdf/music-modelcard.pdf).
Code for using this model is available on [GitHub](https://github.com/jthickstun/anticipation/).
See the accompanying [blog post](https://crfm.stanford.edu/2023/06/16/anticipatory-music-transformer.html) for additional discussion of this model.
|
FALLENSTAR/Volvo850LoRa
|
FALLENSTAR
| 2023-06-16T21:28:07Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-06-11T17:03:34Z |










|
stanford-crfm/music-small-ar-100k
|
stanford-crfm
| 2023-06-16T21:27:39Z | 184 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"arxiv:2306.08620",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2023-06-04T23:58:03Z |
---
license: apache-2.0
---
This is a Small (128M parameter) Transformer trained for 100k steps on arrival-time encoded music from the [Lakh MIDI dataset](https://colinraffel.com/projects/lmd/).
# References for the Anticipatory Music Transformer
The Anticipatory Music Transformer paper is available on [ArXiv](http://arxiv.org/abs/2306.08620).
The full model card is available [here](https://johnthickstun.com/assets/pdf/music-modelcard.pdf).
Code for using this model is available on [GitHub](https://github.com/jthickstun/anticipation/).
See the accompanying [blog post](https://crfm.stanford.edu/2023/06/16/anticipatory-music-transformer.html) for additional discussion of this model.
|
stanford-crfm/music-small-800k
|
stanford-crfm
| 2023-06-16T21:27:08Z | 664 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"arxiv:2306.08620",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2023-06-04T23:54:35Z |
---
license: apache-2.0
---
This is a Small (128M parameter) Transformer trained for 800k steps on arrival-time encoded music from the [Lakh MIDI dataset](https://colinraffel.com/projects/lmd/). This model was trained with anticipation.
# References for the Anticipatory Music Transformer
The Anticipatory Music Transformer paper is available on [ArXiv](http://arxiv.org/abs/2306.08620).
The full model card is available [here](https://johnthickstun.com/assets/pdf/music-modelcard.pdf).
Code for using this model is available on [GitHub](https://github.com/jthickstun/anticipation/).
See the accompanying [blog post](https://crfm.stanford.edu/2023/06/16/anticipatory-music-transformer.html) for additional discussion of this model.
|
stanford-crfm/music-small-100k
|
stanford-crfm
| 2023-06-16T21:26:29Z | 181 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"arxiv:2306.08620",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2023-06-04T23:26:52Z |
---
license: apache-2.0
---
This is a Small (128M parameter) Transformer trained for 100k steps on arrival-time encoded music from the [Lakh MIDI dataset](https://colinraffel.com/projects/lmd/). This model was trained with anticipation.
# References for the Anticipatory Music Transformer
The Anticipatory Music Transformer paper is available on [ArXiv](http://arxiv.org/abs/2306.08620).
The full model card is available [here](https://johnthickstun.com/assets/pdf/music-modelcard.pdf).
Code for using this model is available on [GitHub](https://github.com/jthickstun/anticipation/).
See the accompanying [blog post](https://crfm.stanford.edu/2023/06/16/anticipatory-music-transformer.html) for additional discussion of this model.
|
stanford-crfm/music-medium-800k
|
stanford-crfm
| 2023-06-16T21:25:52Z | 572 | 4 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"arxiv:2306.08620",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2023-06-05T00:17:20Z |
---
license: apache-2.0
---
This is a Medium (360M parameter) Transformer trained for 800k steps on arrival-time encoded music from the [Lakh MIDI dataset](https://colinraffel.com/projects/lmd/). This model was trained with anticipation.
# References for the Anticipatory Music Transformer
The Anticipatory Music Transformer paper is available on [ArXiv](http://arxiv.org/abs/2306.08620).
The full model card is available [here](https://johnthickstun.com/assets/pdf/music-modelcard.pdf).
Code for using this model is available on [GitHub](https://github.com/jthickstun/anticipation/).
See the accompanying [blog post](https://crfm.stanford.edu/2023/06/16/anticipatory-music-transformer.html) for additional discussion of this model.
|
stanford-crfm/music-large-100k
|
stanford-crfm
| 2023-06-16T21:24:11Z | 189 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"arxiv:2306.08620",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2023-06-05T00:22:37Z |
---
license: apache-2.0
---
This is a Large (780M parameter) Transformer trained for 100k steps on arrival-time encoded music from the [Lakh MIDI dataset](https://colinraffel.com/projects/lmd/). This model was trained with anticipation.
# References for the Anticipatory Music Transformer
The Anticipatory Music Transformer paper is available on [ArXiv](http://arxiv.org/abs/2306.08620).
The full model card is available [here](https://johnthickstun.com/assets/pdf/music-modelcard.pdf).
Code for using this model is available on [GitHub](https://github.com/jthickstun/anticipation/).
See the accompanying [blog post](https://crfm.stanford.edu/2023/06/16/anticipatory-music-transformer.html) for additional discussion of this model.
|
crlandsc/bsrnn-vocals
|
crlandsc
| 2023-06-16T20:25:39Z | 0 | 2 | null |
[
"audio source separation",
"music demixing",
"band-split recurrent neural network",
"bsrnn",
"spectrogram",
"vocals",
"region:us"
] | null | 2023-06-16T20:18:04Z |
---
tags:
- audio source separation
- music demixing
- band-split recurrent neural network
- bsrnn
- spectrogram
- vocals
---
# Model Card for bsrnn-vocals
Vocals model for [Music-Demixing-with-Band-Split-RNN](https://github.com/crlandsc/Music-Demixing-with-Band-Split-RNN).
|
crlandsc/bsrnn-bass
|
crlandsc
| 2023-06-16T20:24:33Z | 0 | 1 | null |
[
"audio source separation",
"music demixing",
"band-split recurrent neural network",
"bsrnn",
"spectrogram",
"bass",
"region:us"
] | null | 2023-06-16T20:16:53Z |
---
tags:
- audio source separation
- music demixing
- band-split recurrent neural network
- bsrnn
- spectrogram
- bass
---
# Model Card for bsrnn-bass
Bass model for [Music-Demixing-with-Band-Split-RNN](https://github.com/crlandsc/Music-Demixing-with-Band-Split-RNN).
|
GEMCorp/q-Taxi-v3
|
GEMCorp
| 2023-06-16T20:19:33Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-16T20:08:42Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="GEMCorp/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
sngsfydy/resnet-50-finetuned-eurosat
|
sngsfydy
| 2023-06-16T20:17:05Z | 209 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"resnet",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-16T19:14:44Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: resnet-50-finetuned-eurosat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet-50-finetuned-eurosat
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0706
- Accuracy: 0.5152
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.6069 | 0.99 | 20 | 1.5839 | 0.3879 |
| 1.5395 | 1.98 | 40 | 1.4860 | 0.5485 |
| 1.4321 | 2.96 | 60 | 1.3500 | 0.5364 |
| 1.3292 | 4.0 | 81 | 1.1826 | 0.5212 |
| 1.233 | 4.99 | 101 | 1.0706 | 0.5152 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
finex/pfe-mohamed2023-Hermione
|
finex
| 2023-06-16T20:11:13Z | 135 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-13T23:05:27Z |
---
pipeline_tag: conversational
---
|
TheBloke/robin-7B-v2-GGML
|
TheBloke
| 2023-06-16T20:04:09Z | 0 | 8 | null |
[
"license:other",
"region:us"
] | null | 2023-06-16T18:28:00Z |
---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# OptimalScale's Robin 7B v2 GGML
These files are GGML format model files for [OptimalScale's Robin 7B v2](https://huggingface.co/OptimalScale/robin-7b-v2-delta).
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [ParisNeo/GPT4All-UI](https://github.com/ParisNeo/gpt4all-ui)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
* [ctransformers](https://github.com/marella/ctransformers)
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/robin-7B-v2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/robin-7B-v2-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/robin-7B-v2-fp16)
## Prompt template
```
A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions
###Human: prompt
###Assistant:
```
<!-- compatibility_ggml start -->
## Compatibility
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
I have quantized these 'original' quantisation methods using an older version of llama.cpp so that they remain compatible with llama.cpp as of May 19th, commit `2d5db48`.
These are guaranteed to be compatible with any UIs, tools and libraries released since late May.
### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`.
They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python and ctransformers. Other tools and libraries may or may not be compatible - check their documentation if in doubt.
## Explanation of the new k-quant methods
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| robin-7b.ggmlv3.q2_K.bin | q2_K | 2 | 2.87 GB | 5.37 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| robin-7b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 3.60 GB | 6.10 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| robin-7b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 3.28 GB | 5.78 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| robin-7b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 2.95 GB | 5.45 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| robin-7b.ggmlv3.q4_0.bin | q4_0 | 4 | 3.79 GB | 6.29 GB | Original llama.cpp quant method, 4-bit. |
| robin-7b.ggmlv3.q4_1.bin | q4_1 | 4 | 4.21 GB | 6.71 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| robin-7b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 4.08 GB | 6.58 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| robin-7b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 3.83 GB | 6.33 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| robin-7b.ggmlv3.q5_0.bin | q5_0 | 5 | 4.63 GB | 7.13 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| robin-7b.ggmlv3.q5_1.bin | q5_1 | 5 | 5.06 GB | 7.56 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| robin-7b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 4.78 GB | 7.28 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| robin-7b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 4.65 GB | 7.15 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| robin-7b.ggmlv3.q6_K.bin | q6_K | 6 | 5.53 GB | 8.03 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
| robin-7b.ggmlv3.q8_0.bin | q8_0 | 8 | 7.16 GB | 9.66 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m robin-7b.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions.\n###Human: write a story about llamas\n###Assistant:"
```
Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: vamX, K, Jonathan Leane, Lone Striker, Sean Connelly, Chris McCloskey, WelcomeToTheClub, Nikolai Manek, John Detwiler, Kalila, David Flickinger, Fen Risland, subjectnull, Johann-Peter Hartmann, Talal Aujan, John Villwock, senxiiz, Khalefa Al-Ahmad, Kevin Schuppel, Alps Aficionado, Derek Yates, Mano Prime, Nathan LeClaire, biorpg, trip7s trip, Asp the Wyvern, chris gileta, Iucharbius , Artur Olbinski, Ai Maven, Joseph William Delisle, Luke Pendergrass, Illia Dulskyi, Eugene Pentland, Ajan Kanaga, Willem Michiel, Space Cruiser, Pyrater, Preetika Verma, Junyu Yang, Oscar Rangel, Spiking Neurons AB, Pierre Kircher, webtim, Cory Kujawski, terasurfer , Trenton Dambrowitz, Gabriel Puliatti, Imad Khwaja, Luke.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: OptimalScale's Robin 7B v2
No model card provided in source repository.
|
ChristineCheng/my_awesome_eli5_clm-model
|
ChristineCheng
| 2023-06-16T19:49:19Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-16T19:33:04Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: ChristineCheng/my_awesome_eli5_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ChristineCheng/my_awesome_eli5_clm-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.7347
- Validation Loss: 3.7399
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.9119 | 3.7667 | 0 |
| 3.7942 | 3.7493 | 1 |
| 3.7347 | 3.7399 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.0
- Tokenizers 0.13.3
|
SSSSSSSSSSSJJJJJJJJJJJJJ/my_awesome_eli5_clm-model
|
SSSSSSSSSSSJJJJJJJJJJJJJ
| 2023-06-16T19:44:19Z | 179 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-16T19:13:40Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_clm-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7341
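A minimal text-generation sketch (assuming the standard `transformers` pipeline; the prompt is only an example):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="SSSSSSSSSSSJJJJJJJJJJJJJ/my_awesome_eli5_clm-model")
print(generator("Somatic hypermutation allows the immune system to", max_new_tokens=40))
```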
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.8765 | 1.0 | 1120 | 3.7555 |
| 3.7769 | 2.0 | 2240 | 3.7368 |
| 3.7331 | 3.0 | 3360 | 3.7341 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
karina-aquino/spanish-sentiment-model
|
karina-aquino
| 2023-06-16T19:41:41Z | 36 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-13T21:51:39Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: spanish-sentiment-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanish-sentiment-model
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0046
- Accuracy: 0.65
- F1: 0.6646
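A minimal sketch for scoring a Spanish sentence (assuming the standard `transformers` text-classification pipeline; label names depend on the fine-tuning data):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="karina-aquino/spanish-sentiment-model")
print(classifier("Me encantó la película, fue maravillosa."))
```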
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 375 | 1.0046 | 0.65 | 0.6646 |
| 1.2137 | 2.0 | 750 | 1.0212 | 0.61 | 0.6398 |
| 0.9497 | 3.0 | 1125 | 1.0247 | 0.6133 | 0.6478 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Irgendsoeine/FaceTheVotev2
|
Irgendsoeine
| 2023-06-16T19:40:56Z | 4 | 0 |
transformers
|
[
"transformers",
"mobilenet",
"image-classification",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-16T15:39:15Z |
---
pipeline_tag: image-classification
---
|
crlandsc/tiny-audio-diffusion-snares
|
crlandsc
| 2023-06-16T19:25:10Z | 3 | 1 | null |
[
"audio",
"diffusion",
"waveform diffusion",
"audio diffusion",
"unet",
"region:us"
] | null | 2023-06-10T15:20:00Z |
---
tags:
- audio
- diffusion
- waveform diffusion
- audio diffusion
- unet
---
# Model Card for tiny-audio-diffusion-snares
Snare drum model for tiny-audio-diffusion. Use with [tiny-audio-diffusion](https://github.com/crlandsc/tiny-audio-diffusion) repo to generate snare drum samples.
|
YoneShiro/SpaceInvadersNoFrameskip-v4
|
YoneShiro
| 2023-06-16T19:14:05Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-16T19:13:20Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 708.00 +/- 250.51
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga YoneShiro -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga YoneShiro -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga YoneShiro
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
Boristss/modellunarlander
|
Boristss
| 2023-06-16T19:13:23Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-16T19:12:58Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 258.44 +/- 21.50
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
renatosramiro/ppo-LunarLander-v2
|
renatosramiro
| 2023-06-16T19:13:15Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-16T19:12:47Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 228.95 +/- 35.29
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
digiplay/majicMIX_realistic_v5preview
|
digiplay
| 2023-06-16T18:49:48Z | 397 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-14T13:09:24Z |
---
license: other
---
A very famous realistic beauty model.
Model info:
https://civitai.com/models/43331?modelVersionId=79068
Original author's demo image:

|
Bodolaz/unit-1
|
Bodolaz
| 2023-06-16T18:46:03Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-16T18:45:38Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 249.33 +/- 43.84
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
sambanovasystems/codegen-16B-mono-toolbench
|
sambanovasystems
| 2023-06-16T18:24:41Z | 16 | 5 |
transformers
|
[
"transformers",
"pytorch",
"codegen",
"text-generation",
"arxiv:2305.16504",
"arxiv:2203.13474",
"license:bsd-3-clause",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-22T16:48:51Z |
---
license: bsd-3-clause
---
# codegen-16B-mono-toolbench
<!-- Provide a quick summary of what the model is/does. -->
codegen-16B-mono-toolbench is a 16-billion-parameter model for API-based action generation. It is instruction-tuned from [codegen-16B-mono](https://huggingface.co/Salesforce/codegen-16B-mono) on API-based action generation datasets.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [SambaNova Systems](https://sambanova.ai/)
- **Model type:** Language Model
- **Language(s):** English
- **License:** bsd-3-clause
- **Finetuned from model:** [codegen-16B-mono](https://huggingface.co/Salesforce/codegen-16B-mono)
### Basic Information
<!-- Provide the basic links for the model. -->
- **Paper**: [Link](https://arxiv.org/abs/2305.16504)
- **Github**: [link](https://github.com/sambanova/toolbench)
## Uses
<details>
<summary>Click to expand</summary>
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
This model is intended for commercial and research use.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
codegen-16B-mono-toolbench should NOT be used for purposes other than API-based action generation.
</details>
---
## How to Get Started with the Model
<details>
<summary>Click to expand</summary>
### Loading in model with Huggingface
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("sambanovasystems/codegen-16B-mono-toolbench")
model = AutoModelForCausalLM.from_pretrained("sambanovasystems/codegen-16B-mono-toolbench", device_map="auto", torch_dtype="auto")
```
### Suggested Inference Parameters
- do_sample: False
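With the model and tokenizer loaded as above, a minimal greedy-decoding sketch (the `prompt` value is a placeholder for one of the example prompts below):
```python
prompt = "..."  # paste one of the example prompts below
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# Greedy decoding, as suggested above (do_sample=False).
outputs = model.generate(**inputs, do_sample=False, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```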
### Example Prompts To Try in GPU Tutorial
Prompt 1:
```
I have the following set of API:\n\n# To set the maximum commute time in minute to your office location, assuming the office location is already defined\nAPI.set_max_commute_time(value: int)\n\n# To set the maximum home size in square feet\nAPI.set_max_square_feet(value: int)\n\n# To set the minimum home price in dollars\nAPI.set_min_price(value: int)\n\n# To set the number of garage(s)\nAPI.set_num_garages(value: int)\n\n# To set home types for search. For home buying, home_types choices are: \"House\", \"Townhouse\", \"Condo\", \"Land\", \"Multi-family\", \"Mobile\", \"Co-op\"; for home renting, home_types choices are: \"House\", \"Townhouse\", \"Condo\", \"Apartment\".\nAPI.select_home_type(home_types: List[str])\n\n# To set the number of balconies\nAPI.set_num_balconies(value: int)\n\n# Submit criterion to get search results. This function should be called after setting all the criterion.\nAPI.search()\n\n# To set the floor number\nAPI.set_floor_number(value: int)\n\n# To set the number of bedroom(s)\nAPI.set_num_beds(value: int)\n\n# To set the number of swimming pool(s)\nAPI.set_num_swimming_pools(value: int)\n\n# To set the maximum home price in dollars\nAPI.set_max_price(value: int)\n\n# To specify whether to search homes for buying or renting. 'value' can be chosen from ['buy', 'rent']. This function must be called after setting the location and before setting any other criteria.\nAPI.set_buy_or_rent(value: str)\n\n# To set the number of bathroom(s)\nAPI.set_num_baths(value: float)\n\n# To set the location for the search area. This function must be called before setting any criteria.\nAPI.set_location(value: string)\n\n# To set the minimum home size in square feet\nAPI.set_min_square_feet(value: int)\n\n-------------\n\nTask: Looking for homes to rent in Santa Clarita with a price range between $110000 and $1753000, a minimum of 1700 square feet, at least 2 balconies, and 3.5 bathrooms.\nAction:\n
```
Prompt 2:
```
I have the following set of API:\n\n# To set the location for hotel search, given a Loc object. This function must be called if booking type is 'hotels' or 'both'.\nAPI.set_hotel_location(Loc)\n\n# To set the number of hotel rooms to book.\nAPI.set_num_rooms(value)\n\n# To set the location for departure, given a Loc object. This function must be called if booking type is 'trip tickets' or 'both'.\nAPI.set_origin(Loc)\n\n# To select the transportation type from ['flight', 'train', 'bus', 'cruise']. This function must be called if booking type is 'trip tickets' or 'both'.\nAPI.select_transportation(transportation_type)\n\n# To set the return date of the trip, given a Date object. If booking type is 'both' and this function is not called explicitly, 'return_date' will be set to 'hotel_checkout_date' implicitly.\nAPI.set_return_date(Date)\n\n# To set the hotel check-in date, given a Date object. This function must be called if booking type is 'hotels' or 'both'.\nAPI.set_checkin_date(Date)\n\n# To define a date.\ndate = Date(month, day, year)\n\n# To set the departure date of the trip, given a Date object. This function must be called if booking type is 'trip tickets'. If booking type is 'both' and this function is not called explicitly, 'departure_date' will be set to 'hotel_checkin_date' implicitly.\nAPI.set_departure_date(Date)\n\n# To set the location for arrival, given a Loc object. This function must be called if booking type is 'trip tickets' or 'both'.\nAPI.set_destination(Loc)\n\n# To define a location of a given city 'City'.\nlocation = Loc('City')\n\n# To set maximum hotel room price.\nAPI.set_max_room_price(value)\n\n# To set minimum ticket price.\nAPI.set_min_ticket_price(value)\n\n# To select the booking type from ['hotels', 'trip tickets', 'both']. This function must be called before setting any criteria.\nAPI.select_booking_type(booking_type)\n\n# To set minimum hotel room price.\nAPI.set_min_room_price(value)\n\n# To set the number of child tickets to purchase.\nAPI.set_num_children(value)\n\n# To set the number of adult tickets to purchase.\nAPI.set_num_adults(value)\n\n# To select the hotel room type from ['King Bed', 'Queen Bed', 'Double', 'Luxury'].\nAPI.select_room_type(room_type)\n\n# To set maximum ticket price.\nAPI.set_max_ticket_price(value)\n\n# Submit criterion to get search results. This function should be called after setting all the criterion.\nAPI.search()\n\n# To set the hotel check-out date, given a Date object. This function must be called if booking type is 'hotels' or 'both'.\nAPI.set_checkout_date(Date)\n\n-------------\n\nTask: Looking to book 2 adult and 4 child tickets from Stockton to Baltimore by cruise, on 2023-07-29.\nAction:\n
```
</details>
---
## Training Details
<details>
<summary>Click to expand</summary>
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The training data is curated for the 8 tasks in ToolBench. See Appendix A of the [paper](https://arxiv.org/abs/2305.16504) for task details and Appendix C.1 for the training data curation details. In total, there are 9704 training samples, organized in all-shot format as described in Appendix C.2. Here is the [download link](https://drive.google.com/file/d/1lUatLGnSVhfy1uVIPEQ7qCoLtnCIXi2O/view?usp=sharing) to the training data.
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
We trained codegen-16b-mono-toolbench on 4 80GB A100 gpu's. We started from [codegen-16B-mono](https://huggingface.co/Salesforce/codegen-16B-mono) and finetuned it on the dataset mentioned above.
### Hyperparameters
- Hardware: A100 GPU
- Optimizer: AdamW
- Grad accumulation: 1
- Epochs: 8
- Global Batch size: 16
- Batch tokens: 16 * 2048 = 32,768 tokens
- Learning Rate: 1e-5
- Learning Rate Scheduler: Fixed LR
- Weight decay: 0.1
</details>
## Acknowledgment
We would like to express our gratitude to the great work done in [CodeGen: An Open Large Language Model for Code with Multi-Turn Program Synthesis](https://arxiv.org/abs/2203.13474)
## Cite codegen-16b-mono-toolbench
```
@misc{xu2023tool,
title={On the Tool Manipulation Capability of Open-source Large Language Models},
author={Qiantong Xu and Fenglu Hong and Bo Li and Changran Hu and Zhengyu Chen and Jian Zhang},
year={2023},
eprint={2305.16504},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
ugiugi/inisw08-RoBERT-mlm-adamw_torch_bs8
|
ugiugi
| 2023-06-16T18:01:51Z | 24 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-16T03:23:35Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: inisw08-RoBERT-mlm-adamw_torch_bs8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# inisw08-RoBERT-mlm-adamw_torch_bs8
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4931
- Accuracy: 0.3551
## Model description
More information needed
## Intended uses & limitations
More information needed
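Pending further documentation, a minimal fill-mask usage sketch (assuming the checkpoint and tokenizer load with the standard `transformers` pipeline; the example sentence is illustrative):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="ugiugi/inisw08-RoBERT-mlm-adamw_torch_bs8")
# RoBERTa-style checkpoints use "<mask>" as the mask token
print(fill_mask("The goal of life is <mask>."))
```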
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Satish678/UIED
|
Satish678
| 2023-06-16T17:57:21Z | 154 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"detr",
"object-detection",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2023-06-15T06:57:08Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: UIED
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# UIED
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
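Pending further documentation, a minimal object-detection sketch (assuming the checkpoint ships a DETR image processor usable with the standard `transformers` pipeline; the input image path is a hypothetical example):
```python
from PIL import Image
from transformers import pipeline

detector = pipeline("object-detection", model="Satish678/UIED")
image = Image.open("ui_screenshot.png")  # hypothetical UI screenshot
for detection in detector(image):
    print(detection["label"], round(detection["score"], 3), detection["box"])
```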
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Trisert/outputs
|
Trisert
| 2023-06-16T17:43:33Z | 0 | 0 | null |
[
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2023-06-16T17:42:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: outputs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# outputs
This model is a fine-tuned version of [togethercomputer/RedPajama-INCITE-Base-3B-v1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-3B-v1) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- training_steps: 10
### Training results
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.0
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Ahatsham/flan-t5-small-imdb-text-classification
|
Ahatsham
| 2023-06-16T17:29:33Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-16T14:54:03Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: flan-t5-small-imdb-text-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-small-imdb-text-classification
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
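Pending further documentation, a minimal text2text sketch (the instruction template below is an assumption; the card does not document the expected prompt format):
```python
from transformers import pipeline

classifier = pipeline("text2text-generation", model="Ahatsham/flan-t5-small-imdb-text-classification")
review = "This movie was a complete waste of time."
# Hypothetical prompt template; adjust to whatever format was used during fine-tuning
print(classifier(f"Classify the sentiment of the following movie review: {review}")[0]["generated_text"])
```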
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
LarryAIDraw/chara_GrandBlue_KotegawaNanaka_v1
|
LarryAIDraw
| 2023-06-16T17:24:31Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-16T17:17:05Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/90866/kotegawa-nanaka-or-grand-blue
|
LarryAIDraw/mitsuki_nase-07
|
LarryAIDraw
| 2023-06-16T17:23:09Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-16T17:13:47Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/90270/mitsuki-nase-kyoukai-no-kanata
|
LarryAIDraw/ganbaredouki-chan_douki-chan-11
|
LarryAIDraw
| 2023-06-16T17:20:27Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-16T17:11:48Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/89745/douki-chan-or-do-your-best-doki-chan
|
michaelfeil/ct2fast-falcon-7b-sft-top1-696
|
michaelfeil
| 2023-06-16T17:08:35Z | 7 | 3 |
transformers
|
[
"transformers",
"ctranslate2",
"int8",
"float16",
"sft",
"text-generation",
"en",
"de",
"es",
"fr",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-07T20:13:56Z |
---
license: apache-2.0
language:
- en
- de
- es
- fr
tags:
- ctranslate2
- int8
- float16
- sft
pipeline_tag: text-generation
widget:
- text: >-
<|prompter|>What is a meme, and what's the history behind this
word?<|endoftext|><|assistant|>
- text: <|prompter|>What's the Earth total population<|endoftext|><|assistant|>
- text: >-
<|prompter|>Write a story about future of AI
development<|endoftext|><|assistant|>
datasets:
- OpenAssistant/oasst1
library_name: transformers
---
# Fast-Inference with Ctranslate2
Speed up inference while reducing memory by 2x-4x using int8 inference in C++ on CPU or GPU.
Quantized version of [OpenAssistant/falcon-7b-sft-top1-696](https://huggingface.co/OpenAssistant/falcon-7b-sft-top1-696).
```bash
pip install "hf-hub-ctranslate2>=2.10.0" "ctranslate2>=3.16.0"
```
```python
# from transformers import AutoTokenizer
model_name = "michaelfeil/ct2fast-falcon-7b-sft-top1-696"
from hf_hub_ctranslate2 import GeneratorCT2fromHfHub
model = GeneratorCT2fromHfHub(
# load in int8 on CUDA
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
# tokenizer=AutoTokenizer.from_pretrained("{ORG}/{NAME}")
)
outputs = model.generate(
text=["def fibonnaci(", "User: How are you doing? Bot:"],
max_length=64,
include_prompt_in_result=False
)
print(outputs)
```
Checkpoint compatible to [ctranslate2>=3.16.0](https://github.com/OpenNMT/CTranslate2)
and [hf-hub-ctranslate2>=2.10.0](https://github.com/michaelfeil/hf-hub-ctranslate2)
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
Converted on 2023-06-16 using
```
ct2-transformers-converter --model OpenAssistant/falcon-7b-sft-top1-696 --output_dir ~/tmp-ct2fast-falcon-7b-sft-top1-696 --force --copy_files tokenizer.json README.md tokenizer_config.json generation_config.json special_tokens_map.json .gitattributes --quantization int8_float16 --trust_remote_code
```
# License and other remarks:
This is just a quantized version. License conditions are intended to be identical to those of the original Hugging Face repo.
# Original description
# Open-Assistant Falcon 7B SFT OASST-TOP1 Model
This model is a fine-tuning of TII's [Falcon 7B](https://huggingface.co/tiiuae/falcon-7b) LLM.
It was trained with 11,123 top-1 (high-quality) demonstrations of the OASST data set (exported on June 2, 2023) with a batch size of 128 for 8 epochs with LIMA style dropout (p=0.2) and a context-length of 2048 tokens.
## Model Details
- **Finetuned from:** [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b)
- **Model type:** Causal decoder-only transformer language model
- **Language:** English, German, Spanish, French (and limited capabilities in Italian, Portuguese, Polish, Dutch, Romanian, Czech, Swedish);
- **Weights & Biases:** [Training log](https://wandb.ai/open-assistant/public-sft/runs/25apbcld) (Checkpoint: 696 steps)
- **Code:** [Open-Assistant/model/model_training](https://github.com/LAION-AI/Open-Assistant/tree/main/model/model_training)
- **Demo:** [Continuations for 250 random prompts](https://open-assistant.github.io/oasst-model-eval/?f=https%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Fchat-gpt%2F2023-04-11_gpt-3.5-turbo_lottery.json%0Ahttps%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Foasst-sft%2F2023-06-05_OpenAssistant_falcon-7b-sft-top1-696_sampling_noprefix2.json)
- **License:** Apache 2.0
- **Contact:** [Open-Assistant Discord](https://ykilcher.com/open-assistant-discord)
## Prompting
Two special tokens are used to mark the beginning of user and assistant turns:
`<|prompter|>` and `<|assistant|>`. Each turn ends with a `<|endoftext|>` token.
Input prompt example:
```
<|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>
```
The input ends with the `<|assistant|>` token to signal that the model should
start generating the assistant reply.
## Sample Code
```python
from transformers import AutoTokenizer
import transformers
import torch
model = "OpenAssistant/falcon-7b-sft-top1-696"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
input_text="<|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>"
sequences = pipeline(
input_text,
max_length=500,
do_sample=True,
return_full_text=False,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
## Configuration Details
Model:
```
falcon-7b:
dtype: bf16
log_dir: "falcon_log_7b"
learning_rate: 1e-5
model_name: "tiiuae/falcon-7b"
deepspeed_config: configs/zero_config.json
output_dir: falcon
weight_decay: 0.0
max_length: 2048
save_strategy: steps
eval_steps: 80
save_steps: 80
warmup_steps: 20
gradient_checkpointing: true
gradient_accumulation_steps: 4
per_device_train_batch_size: 4
per_device_eval_batch_size: 8
num_train_epochs: 8
save_total_limit: 4
residual_dropout: 0.2
residual_dropout_lima: true
```
Dataset:
```
oasst-top1:
# oasst_export: 11123 (100.00%)
datasets:
- oasst_export:
lang: "bg,ca,cs,da,de,en,es,fr,hr,hu,it,nl,pl,pt,ro,ru,sl,sr,sv,uk" # sft-8.0
input_file_path: 2023-06-02_oasst_all_labels.jsonl.gz
val_split: 0.05
top_k: 1
```
Train command:
```
deepspeed trainer_sft.py --configs defaults falcon-7b oasst-top1 --cache_dir <data_cache_dir> --output_dir <output_path> --deepspeed
```
Export command:
```
python export_model.py --dtype bf16 --hf_repo_name OpenAssistant/falcon-7b-sft-top1 --trust_remote_code --auth_token <auth_token> <output_path> --max_shard_size 2GB
```
|
ABrinkmann/deberta-v3-large-finetuned-ner-10epochs-V2
|
ABrinkmann
| 2023-06-16T16:58:37Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-12T17:07:26Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: deberta-v3-large-finetuned-ner-10epochs-V2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large-finetuned-ner-10epochs-V2
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0941
- Precision: 0.9139
- Recall: 0.9358
- F1: 0.9248
- Accuracy: 0.9856
## Model description
More information needed
## Intended uses & limitations
More information needed
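Pending further documentation, a minimal token-classification sketch (the entity label set is not documented in this card):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ABrinkmann/deberta-v3-large-finetuned-ner-10epochs-V2",
    aggregation_strategy="simple",
)
print(ner("Apple is looking at buying a U.K. startup for $1 billion."))
```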
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0539 | 1.0 | 2261 | 0.0571 | 0.8751 | 0.9304 | 0.9019 | 0.9822 |
| 0.0481 | 2.0 | 4522 | 0.0515 | 0.8794 | 0.9387 | 0.9081 | 0.9833 |
| 0.0362 | 3.0 | 6783 | 0.0502 | 0.8956 | 0.9336 | 0.9142 | 0.9841 |
| 0.0341 | 4.0 | 9044 | 0.0456 | 0.9097 | 0.9301 | 0.9198 | 0.9856 |
| 0.0272 | 5.0 | 11305 | 0.0520 | 0.9005 | 0.9451 | 0.9223 | 0.9860 |
| 0.0214 | 6.0 | 13566 | 0.0583 | 0.9069 | 0.9330 | 0.9197 | 0.9855 |
| 0.0162 | 7.0 | 15827 | 0.0684 | 0.9154 | 0.9259 | 0.9206 | 0.9854 |
| 0.0129 | 8.0 | 18088 | 0.0736 | 0.9158 | 0.9339 | 0.9248 | 0.9854 |
| 0.0074 | 9.0 | 20349 | 0.0869 | 0.9091 | 0.9355 | 0.9221 | 0.9854 |
| 0.0049 | 10.0 | 22610 | 0.0941 | 0.9139 | 0.9358 | 0.9248 | 0.9856 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.0
- Tokenizers 0.13.3
|
deepgoyal19/lora_tb
|
deepgoyal19
| 2023-06-16T16:46:16Z | 4 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-06-15T20:12:19Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - deepgoyal19/lora_tb
These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the None dataset. You can find some example images in the following.
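A minimal sketch for applying these LoRA weights on top of the base model with `diffusers` (the prompt is illustrative; the training prompts are not documented here):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe.unet.load_attn_procs("deepgoyal19/lora_tb")  # load the LoRA attention weights from this repo
pipe.to("cuda")

image = pipe("a sample prompt in the style of the fine-tuning data", num_inference_steps=30).images[0]
image.save("example.png")
```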
|
grantprice/pythia-410m-deduped-finetuned-Critical-Role
|
grantprice
| 2023-06-16T16:27:08Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-15T19:15:12Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: pythia-410m-deduped-finetuned-Critical-Role
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pythia-410m-deduped-finetuned-Critical-Role
This model is a fine-tuned version of [EleutherAI/pythia-410m-deduped](https://huggingface.co/EleutherAI/pythia-410m-deduped) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0807
## Model description
More information needed
## Intended uses & limitations
More information needed
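Pending further documentation, a minimal text-generation sketch (the transcript-style prompt is an assumption about the fine-tuning data format):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="grantprice/pythia-410m-deduped-finetuned-Critical-Role")
prompt = "MATT: As you enter the tavern,"  # hypothetical Critical Role transcript-style prompt
print(generator(prompt, max_new_tokens=60, do_sample=True, top_k=50)[0]["generated_text"])
```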
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6559 | 1.0 | 1268 | 2.8587 |
| 2.1501 | 2.0 | 2536 | 2.8971 |
| 1.6996 | 3.0 | 3804 | 3.0807 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Brendar/MaBePa_STS
|
Brendar
| 2023-06-16T16:02:40Z | 78 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"code",
"fill-mask",
"es",
"dataset:xnli",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-15T14:27:17Z |
---
datasets:
- xnli
language:
- es
library_name: transformers
pipeline_tag: fill-mask
tags:
- code
---
## Introduction
The goal of this work is to build a model that identifies the semantic similarity between two sentences (Semantic Textual Similarity, "STS"), i.e. that measures how similar two documents are. The model is implemented as a siamese neural network, meaning the same network, with identical parameters, is used to process both the premise and the hypothesis.
"The STS task is motivated by the observation that accurately modeling the meaning similarity of sentences is a foundational language understanding problem relevant to numerous applications, including: machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, and dialogue and conversational systems." (Cer et al., 2017, p. 1).
## Data
The chosen dataset is XNLI in Spanish. It contains the fields 'premise', 'hypothesis' and 'label', where the first two are sentences (text strings) and the third is the semantic relation between them, encoded as: 'entailment': 0, 'neutral': 1, 'contradiction': 2.
It consists of three splits:
- TRAINING, with 392,702 examples;
- TEST, with 5,010 examples;
- VALIDATION, with 2,490 examples.
In addition, a Spanish vocabulary of around 31,000 words is used, including the special tokens "[MASK]", "[PAD]", "[EOS]", "[UNK]", "[CLS]", "[SEP]", which occupy the first positions of the vocabulary.
This vocabulary comes from the Hugging Face model whose model_name is "dccuchile/bert-base-spanish-wwm-uncased".
## Method
### Tokenization
We first import AutoTokenizer and obtain the tokenizer of the model defined above. Besides converting the tokens (words) into their vocabulary IDs, it prepends the ID of the special token "[CLS]" and appends that of "[SEP]".
We also fix the maximum model length (tokenizer.model_max_length), so that premises and hypotheses longer than this are truncated and shorter ones are padded with "[PAD]" up to the desired length.
Note that this tokenizer already provides itos- and stoi-like functions.
We then tokenize the dataset using the map function, for both the premise and the hypothesis.
### Batching
With tokenization done, we build the batches using the torch DataLoader.
The result is batches of size 32 for the training set and 16 for both the validation and test sets. Their dimensions are batch size x number of elements: for the premise and the hypothesis the number of elements is the tokenization length, while for the label, being a single value, the dimension is batch size x 1.
We also add the attention_mask of the premise and of the hypothesis to the batches.
### Base model
BERT is a pre-trained transformer network (...). The input of BERT consists of the two sentences separated by a special [SEP] token. (...) and the output is passed to a simple regression function to derive the final label. (Reimers and Gurevych, 2019, p. 2).
On top of this base model we fine-tuned our network, following the diagram in (Reimers and Gurevych, 2019, p. 3):
That is, we pass the premise and the hypothesis through the same BERT, obtaining a pooler output for each of them ("u" and "v"). These are then concatenated, together with the absolute difference |u - v|, and the result is passed through a linear layer that produces 3 outputs, the probabilities associated with each label.
The network was trained on the training set using cross-entropy as the loss function, and the model was then validated. The results are reported in the next section.
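For illustration, a minimal PyTorch sketch of the siamese scoring head described above (the class and variable names are hypothetical; this is not the author's original training code):
```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class SiameseSTS(nn.Module):
    def __init__(self, model_name="dccuchile/bert-base-spanish-wwm-uncased", num_labels=3):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        # u, v and |u - v| are concatenated before the linear classifier
        self.classifier = nn.Linear(3 * hidden, num_labels)

    def forward(self, premise, hypothesis):
        u = self.encoder(**premise).pooler_output
        v = self.encoder(**hypothesis).pooler_output
        features = torch.cat([u, v, torch.abs(u - v)], dim=-1)
        return self.classifier(features)

tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased")
model = SiameseSTS()
premise = tokenizer("Un hombre toca la guitarra.", return_tensors="pt", padding="max_length", truncation=True, max_length=128)
hypothesis = tokenizer("Alguien hace música.", return_tensors="pt", padding="max_length", truncation=True, max_length=128)
logits = model(premise, hypothesis)  # shape (1, 3): entailment / neutral / contradiction
```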
|
BWayne/LILT_mr
|
BWayne
| 2023-06-16T15:53:32Z | 92 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"lilt",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-05-28T15:25:39Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: LILT_on7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# LILT_on7
This model is a fine-tuned version of [SCUT-DLVCLab/lilt-roberta-en-base](https://huggingface.co/SCUT-DLVCLab/lilt-roberta-en-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Able caption: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2}
- Eading: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 62}
- Ext: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 102}
- Mage caption: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 13}
- Ub heading: {'precision': 0.2642706131078224, 'recall': 1.0, 'f1': 0.41806020066889626, 'number': 125}
- Overall Precision: 0.2643
- Overall Recall: 0.4112
- Overall F1: 0.3218
- Overall Accuracy: 0.2643
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Able caption | Eading | Ext | Mage caption | Ub heading | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------------------------------------------------------:|:----------------------------------------------------------:|:-----------------------------------------------------------:|:----------------------------------------------------------:|:------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 1.0142 | 0.44 | 500 | nan | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 62} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 102} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 13} | {'precision': 0.2642706131078224, 'recall': 1.0, 'f1': 0.41806020066889626, 'number': 125} | 0.2643 | 0.4112 | 0.3218 | 0.2643 |
| 1.0228 | 0.89 | 1000 | nan | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 62} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 102} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 13} | {'precision': 0.2642706131078224, 'recall': 1.0, 'f1': 0.41806020066889626, 'number': 125} | 0.2643 | 0.4112 | 0.3218 | 0.2643 |
| 1.0299 | 1.33 | 1500 | nan | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 62} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 102} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 13} | {'precision': 0.2642706131078224, 'recall': 1.0, 'f1': 0.41806020066889626, 'number': 125} | 0.2643 | 0.4112 | 0.3218 | 0.2643 |
| 1.0233 | 1.78 | 2000 | nan | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 62} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 102} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 13} | {'precision': 0.2642706131078224, 'recall': 1.0, 'f1': 0.41806020066889626, 'number': 125} | 0.2643 | 0.4112 | 0.3218 | 0.2643 |
| 0.9924 | 2.22 | 2500 | nan | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 62} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 102} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 13} | {'precision': 0.2642706131078224, 'recall': 1.0, 'f1': 0.41806020066889626, 'number': 125} | 0.2643 | 0.4112 | 0.3218 | 0.2643 |
| 1.0081 | 2.67 | 3000 | nan | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 62} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 102} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 13} | {'precision': 0.2642706131078224, 'recall': 1.0, 'f1': 0.41806020066889626, 'number': 125} | 0.2643 | 0.4112 | 0.3218 | 0.2643 |
| 0.9836 | 3.11 | 3500 | nan | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 62} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 102} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 13} | {'precision': 0.2642706131078224, 'recall': 1.0, 'f1': 0.41806020066889626, 'number': 125} | 0.2643 | 0.4112 | 0.3218 | 0.2643 |
| 0.9997 | 3.56 | 4000 | nan | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 62} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 102} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 13} | {'precision': 0.2642706131078224, 'recall': 1.0, 'f1': 0.41806020066889626, 'number': 125} | 0.2643 | 0.4112 | 0.3218 | 0.2643 |
| 0.984 | 4.0 | 4500 | nan | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 62} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 102} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 13} | {'precision': 0.2642706131078224, 'recall': 1.0, 'f1': 0.41806020066889626, 'number': 125} | 0.2643 | 0.4112 | 0.3218 | 0.2643 |
| 0.9889 | 4.44 | 5000 | nan | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 62} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 102} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 13} | {'precision': 0.2642706131078224, 'recall': 1.0, 'f1': 0.41806020066889626, 'number': 125} | 0.2643 | 0.4112 | 0.3218 | 0.2643 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
AustinCarthy/OnlyPhishGPT2_subdomain_100KP_BFall_fromB_90K_topP_0.75_ratio2.63
|
AustinCarthy
| 2023-06-16T15:48:15Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2023-06-16T13:35:21Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: OnlyPhishGPT2_subdomain_100KP_BFall_fromB_90K_topP_0.75_ratio2.63
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# OnlyPhishGPT2_subdomain_100KP_BFall_fromB_90K_topP_0.75_ratio2.63
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the Train benign: Fall,Test Benign: Fall, Train phish: Fall, Test phish: Fall, generated url dataset: generated_phish_OnlyPhishGPT2_using_benigh_200K_top_p_0.75 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0187
- Accuracy: 0.9980
- F1: 0.9791
- Precision: 0.9967
- Recall: 0.9622
- Roc Auc Score: 0.9810
- Tpr At Fpr 0.01: 0.96
## Model description
More information needed
## Intended uses & limitations
More information needed
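Pending further documentation, a minimal URL-classification sketch (assuming the fine-tuned BERT checkpoint loads with the standard `transformers` pipeline; the URL and the id-to-label mapping are illustrative assumptions):
```python
from transformers import pipeline

url_classifier = pipeline(
    "text-classification",
    model="AustinCarthy/OnlyPhishGPT2_subdomain_100KP_BFall_fromB_90K_topP_0.75_ratio2.63",
)
print(url_classifier("http://secure-login.example-bank.com.account-verify.example/login"))
```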
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0082 | 1.0 | 21554 | 0.0150 | 0.9968 | 0.9658 | 0.9964 | 0.937 | 0.9684 | 0.9284 |
| 0.0048 | 2.0 | 43108 | 0.0103 | 0.9979 | 0.9772 | 0.9944 | 0.9606 | 0.9802 | 0.9442 |
| 0.0025 | 3.0 | 64662 | 0.0157 | 0.9980 | 0.9788 | 0.9952 | 0.9628 | 0.9813 | 0.9552 |
| 0.0012 | 4.0 | 86216 | 0.0177 | 0.9979 | 0.9774 | 0.9979 | 0.9578 | 0.9789 | 0.9562 |
| 0.0 | 5.0 | 107770 | 0.0187 | 0.9980 | 0.9791 | 0.9967 | 0.9622 | 0.9810 | 0.96 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
jiayanli/loan_classifier
|
jiayanli
| 2023-06-16T15:32:14Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-06-16T15:31:51Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# jiayanli/loan_classifier
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("jiayanli/loan_classifier")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
minhtoan/roberta-masked-lm-vietnamese-nom
|
minhtoan
| 2023-06-16T15:01:53Z | 105 | 3 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"vi",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-05-24T14:58:33Z |
---
language:
- vi
pipeline_tag: fill-mask
widget:
- text: '<mask> 仍 𠎬 英 䧺 淑 女'
---
# Pre-trained Masked Language Model for Vietnamese Nôm
A masked language model for Nôm script is a specialized version of a language model designed to understand and generate text in the Chữ Nôm script. Chữ Nôm is a logographic writing system used in Vietnam from the 13th to the early 20th century, primarily before the introduction of the Latin-based Vietnamese script.
Similar to other large pretrained language models such as GPT-3, the Chữ Nôm masked language model is trained on a large corpus of Chữ Nôm texts. This training data helps the model learn the statistical patterns, contextual relationships, and semantic meanings of characters and words in the Chữ Nôm script.
The model was trained on several literary works and poems: Bai ca ran co bac, Buom hoa tan truyen, Chinh phu ngam, Gia huan ca, Ho Xuan Huong, Luc Van Tien, Tale of Kieu 1870, Tale of Kieu 1871, Tale of Kieu 1902, ...
# How to use the model
```python
from transformers import RobertaTokenizerFast, RobertaForMaskedLM
import torch
# Load the tokenizer
tokenizer = RobertaTokenizerFast.from_pretrained('minhtoan/roberta-masked-lm-vietnamese-nom')
# Load the model
model = RobertaForMaskedLM.from_pretrained('minhtoan/roberta-masked-lm-vietnamese-nom')
text = '<mask>如㗂䳽𠖤戈'
inputs = tokenizer(text, return_tensors="pt")
mask_token_index = torch.where(inputs["input_ids"] == tokenizer.mask_token_id)[1]
logits = model(**inputs).logits
mask_token_logits = logits[0, mask_token_index, :]
print("Predicted word:", tokenizer.decode(mask_token_logits[0].argmax()))
```
## Author
Phan Minh Toan
|
kejolong/mizuki2.0
|
kejolong
| 2023-06-16T14:39:20Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-16T14:37:05Z |
---
license: creativeml-openrail-m
---
|
AXX1995/gebianv1
|
AXX1995
| 2023-06-16T14:32:40Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-16T14:31:10Z |
---
license: creativeml-openrail-m
---
|
luischir/bert-base-spanish-wwm-uncased-finetuned-squad
|
luischir
| 2023-06-16T14:30:56Z | 116 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-15T22:01:09Z |
---
tags:
- generated_from_trainer
model-index:
- name: bert-base-spanish-wwm-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-spanish-wwm-uncased-finetuned-squad
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2504
## Model description
More information needed
## Intended uses & limitations
More information needed
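Pending further documentation, a minimal question-answering sketch (the Spanish context and question are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="luischir/bert-base-spanish-wwm-uncased-finetuned-squad")
result = qa(
    question="¿Dónde se celebró la conferencia?",
    context="La conferencia anual se celebró en Madrid en mayo de 2023 con más de mil asistentes.",
)
print(result["answer"], result["score"])
```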
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 200 | 1.4234 |
| No log | 2.0 | 400 | 1.2396 |
| 1.3232 | 3.0 | 600 | 1.2504 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Xuanlong/MUAD_DeepLabmodel
|
Xuanlong
| 2023-06-16T14:12:58Z | 0 | 0 | null |
[
"arxiv:2203.01437",
"license:afl-3.0",
"region:us"
] | null | 2023-06-16T13:11:46Z |
---
license: afl-3.0
---
## DeepLab v3 plus - ResNet101 model trained on MUAD dataset
This is a DeepLab v3 plus model with ResNet101 backbone trained on the MUAD dataset. The training is based on PyTorch.
MUAD is a synthetic dataset with multiple uncertainties for autonomous driving [[Paper]](https://arxiv.org/abs/2203.01437) [[Website]](https://muad-dataset.github.io/) [[Github]](https://github.com/ENSTA-U2IS/MUAD-Dataset).
### ICCV UNCV 2023 | MUAD challenge
MUAD challenge is now on board on the Codalab platform for uncertainty estimation in semantic segmentation. This challenge is hosted in conjunction with the [ICCV 2023](https://iccv2023.thecvf.com/) workshop, [Uncertainty Quantification for Computer Vision (UNCV)](https://uncv2023.github.io/). Go and have a try! 🚀 🚀 🚀 [[Challenge link]](https://codalab.lisn.upsaclay.fr/competitions/8007)
### Reference
If you find this work useful for your research, please consider citing our paper:
```
@inproceedings{franchi22bmvc,
title = {MUAD: Multiple Uncertainties for Autonomous Driving benchmark for multiple uncertainty types and tasks},
author = {Gianni Franchi and Xuanlong Yu and Andrei Bursuc and Angel Tena and Rémi Kazmierczak and Severine Dubuisson and Emanuel Aldea and David Filliat},
booktitle = {33rd British Machine Vision Conference, {BMVC}},
year = {2022}
}
```
```
@inproceedings{deeplabv3plus2018,
title = {Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation},
author = {Liang-Chieh Chen and Yukun Zhu and George Papandreou and Florian Schroff and Hartwig Adam},
booktitle = {ECCV},
year = {2018}
}
```
### Copyright
Copyright for MUAD Dataset is owned by Université Paris-Saclay (SATIE Laboratory, Gif-sur-Yvette, FR) and ENSTA Paris (U2IS Laboratory, Palaiseau, FR).
|
KBLab/bart-base-swedish-cased
|
KBLab
| 2023-06-16T14:08:56Z | 129 | 3 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bart",
"text2text-generation",
"sv",
"arxiv:1910.13461",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z |
---
language: sv
widget:
- text: "Jag har ätit en <mask>"
---
## KB-BART
A [BART](https://arxiv.org/abs/1910.13461) model trained on a Swedish corpus consisting of 15 billion tokens (about 80GB of text). The model was trained with [Fairseq](https://github.com/pytorch/fairseq), and converted to be compatible with Huggingface.
Training code can be found [here](https://github.com/kb-labb/kb_bart).
## Usage
```python
from transformers import BartForConditionalGeneration, PreTrainedTokenizerFast, AutoTokenizer
model = BartForConditionalGeneration.from_pretrained("KBLab/bart-base-swedish-cased")
tok = AutoTokenizer.from_pretrained("KBLab/bart-base-swedish-cased")
model.eval()
input_ids = tok.encode(
"Jag har ätit en utsökt <mask> på restaurang vid <mask> .", return_tensors="pt"
)
# Simple greedy search
output_ids = model.generate(
input_ids,
min_length=15,
max_length=25,
num_beams=1,
do_sample=False,
)
tok.decode(output_ids[0])
# '</s><s> Jag har ätit en utsökt middag på restaurang vid havet på restaurang vid havet på restaurang vid havet.</s>'
# Sampling
output_ids = model.generate(
input_ids,
min_length=15,
max_length=20,
num_beams=1,
do_sample=True,
)
tok.decode(output_ids[0])
#'</s><s> Jag har ätit en utsökt god mat som de tagit in på restaurang vid avröjda</s>'
# Beam search
output_ids = model.generate(
input_ids,
min_length=15,
max_length=25,
no_repeat_ngram_size=3,
num_beams=8,
early_stopping=True,
do_sample=True,
num_return_sequences=6
)
tok.decode(output_ids[0])
# '</s><s> Jag har ätit en utsökt middag på restaurang vid havet. Jag har varit ute och gått en sväng.</s><pad><pad>'
# Diverse beam generation
output_ids = model.generate(
input_ids,
min_length=50,
max_length=100,
no_repeat_ngram_size=3,
num_beams=8,
early_stopping=True,
do_sample=False,
num_return_sequences=6,
num_beam_groups=8,
diversity_penalty=2.0,
)
tok.decode(output_ids[0])
# '</s><s> Jag har ätit en utsökt middag på restaurang vid havet på restaurang. Jag har varit på restaurang i två dagar... Jag..,..!!!.. Så.. Nu.. Hej.. Vi.. Här.</s>'
```
## Acknowledgements
We gratefully acknowledge the HPC RIVR consortium ([www.hpc-rivr.si](https://www.hpc-rivr.si/)) and EuroHPC JU ([eurohpc-ju.europa.eu/](https://eurohpc-ju.europa.eu/)) for funding this research by providing computing resources of the HPC system Vega at the Institute of Information Science ([www.izum.si](https://www.izum.si/)).
|
heack/HeackMT5-ZhCleanText1ML
|
heack
| 2023-06-16T14:05:27Z | 110 | 11 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-15T09:45:13Z |
---
pipeline_tag: text2text-generation
---
# HeackMT5-ZhCleanText1ML: A Text Cleaning Model for Chinese Texts
This model, `heack/HeackMT5-ZhCleanText1ML`, is a fine-tuned mT5 model for Chinese text cleaning tasks. It is designed to remove gibberish, clean up the text, retain original information as much as possible, and does not process large sections of non-Chinese text (such as English text).
This model mainly tackles the garbled-text problem that has plagued the Chinese internet for years. With the help of a large Transformer model, it can also lightly refine the text (only rarely, and only when the model is very confident) while cleaning it. You can safely trust this model: it will not make arbitrary changes to your text. Text consisting of non-Chinese characters is left untouched.
The model was trained on 1 million lines of data, with the following training results:
| step | epoch | learning_rate | loss | eval_loss |
|--------|-------|---------------|-------|-----------|
| 129000 | 3.73 | 1e-05 | 1.714 | 1.706 |
## Model Details
- Model: mT5
- Language: Chinese (multiple languages supported)
## Usage
Here is how you can use this model for text cleaning:
```python
from transformers import MT5ForConditionalGeneration, T5Tokenizer
model = MT5ForConditionalGeneration.from_pretrained("heack/HeackMT5-ZhCleanText1ML")
tokenizer = T5Tokenizer.from_pretrained("heack/HeackMT5-ZhCleanText1ML")
text = """
大众汽车集团在第五届中国国际进口博览会携旗下大众汽车品牌、奥灶液弊胀演蹂穷蹭齿港呛奸怀甫磁洒暮烂犁投迪品牌和保时捷品牌亮相,共展出5款纯电动车
型。其中,大众汽车役络观示惑觉髓品牌展出了ID.家族最新成员——ID.AERO概念车,将于2023年上市;奥迪展出了两款豪华运动纯电动车奥迪RS e-tro???Mission GT和首款“Roadjet
陆地专机”奥迪Q5e-t��������Ʒ�2022��ף��µϽron。到2022年底,奥迪将在中国D��������市场提供7款新能源车型。保时捷则展出了两款纯电动车,其中保时捷Mission R概念车为亚洲首秀。保时捷将进一步在电气化领域持续发力,大量创新技
术萤恒扔剪秆仁忙殃掉雄停遵冒姑只脸玉匣有望应用于未来的量产车中,包括全新的电池组和冷����������却系统等。“自2015年以来,中国在智能汽车领域已逐渐在世界上领先。在自动驾驶领域,没有其他国家的技术创新和实施速度现在能够超越中国。”大众汽车集d
团执行副总裁刘云峰说,他指出,中德双方的务实合作广泛而深入,其中经贸合作发挥了压舱石作鑳藉寲杞�鍨嬬殑涓绘垬鍦轰箣涓�銆用,特别是在掏傻汽车行业。大众汽车集团有关人士介绍,大众正积极主动地推进转型,创新求变,oYFb而中国是大众汽车向电动化和交智能化
转型的主战场之一。除了代表大众迄柑居昧懦汽车电动化攻势的多款纯电车型和创新技术外,大众汽车还在本届进博<script会通过互动形式展示了旗下软件公司CARIAD的最新软件研发成果。按计划,在中国,大众汽车品牌ID.家族浴屋??????????????聂日票绢缀郁硼魏挖两
裙快温屎棠虐惨遇的产品阵容将拓展至纯电中型轿车细分市场。
"""
inputs = tokenizer("filter:"+text, return_tensors="pt")
outputs = model.generate(inputs.input_ids, max_new_tokens=512, num_beams=4, length_penalty=0.8)
filtered_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(filtered_text)
======================
"""
大众汽车集团在第五届中国国际进口博览会携旗下大众汽车品牌、奥迪品牌和保时捷品牌亮相,共展出5款纯电动车
型。其中,大众汽车品牌展出了ID.家族最新成员——ID.AERO概念车,将于2023年上市;奥迪展出了两款豪华运动纯电动车奥迪RS e-tronMission GT和首款“Roadjet
陆地专机”奥迪Q5e-tron。到2022年底,奥迪将在中国市场提供7款新能源车型。保时捷则展出了两款纯电动车,其中保时捷Mission R概念车为亚洲首秀。保时捷将进一步在电气化领域持续发力,大量创新技
术有望应用于未来的量产车中,包括全新的电池组和冷却系统等。“自2015年以来,中国在智能汽车领域已逐渐在世界上领先。在自动驾驶领域,没有其他国家的技术创新和实施速度现在能够超越中国。”大众汽车集
团执行副总裁刘云峰说,他指出,中德双方的务实合作广泛而深入,其中经贸合作发挥了压舱石作用,特别是在汽车行业。大众汽车集团有关人士介绍,大众正积极主动地推进转型,创新求变,而中国是大众汽车向电动化和交智能化
转型的主战场之一。除了代表大众汽车电动化攻势的多款纯电车型和创新技术外,大众汽车还在本届进博会通过互动形式展示了旗下软件公司CARIAD的最新软件研发成果。按计划,在中国,大众汽车品牌ID.家族的产品阵容将拓展至纯电中型轿车细分市场。
"""
```
## For long text(more than 512 tokens)
```python
from transformers import MT5ForConditionalGeneration, T5Tokenizer
def split_text(text, tokenizer, length):
chunks = []
chunk = ""
for char in text:
chunk = chunk + char
if len(tokenizer.encode(chunk, truncation=False)) >= length:
if char in {'.', '。', ',', ',', '\n'}:
chunks.append(chunk)
chunk = ""
else:
for i in range(1, 21):
if chunk[-i] in {'.', '。', ',', ',', '\n'}:
break
else:
i = 0
if i == 0:
chunks.append(chunk)
chunk = ""
else:
chunks.append(chunk[:-i])
chunk = chunk[-i:]
chunks.append(chunk)
assert "".join(chunks) == text
return chunks
def filter_luanma_text(text, model, tokenizer):
chunks = split_text(text, tokenizer,500)
filter_texts = []
for chunk in chunks:
inputs = tokenizer("filter:" + chunk, return_tensors="pt")
        outputs = model.generate(input_ids=inputs["input_ids"], max_new_tokens=500, num_beams=4, length_penalty=0.8)
        filter_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
filter_texts.append(filter_text)
return " ".join(filter_texts)
model = MT5ForConditionalGeneration.from_pretrained("heack/HeackMT5-ZhCleanText1ML")
tokenizer = T5Tokenizer.from_pretrained("heack/HeackMT5-ZhCleanText1ML")
filtered_text = filter_luanma_text("需要df过滤的文=本", model, tokenizer)
print(filtered_text)
======================================
"""
需要过滤的文本
"""
```
## Credits
This model is trained and maintained by KongYang from Shanghai Jiao Tong University. For any questions, please reach out to me at my WeChat ID: kongyang.
## License
This model is released under the CC BY-NC-SA 4.0 license.
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{kongyang2023heackmt5ZhCleanText1ML,
title={heack/HeackMT5-ZhCleanText1ML: A Large-Scale Multilingual Abstractive Summarization for Chinese Texts},
author={Kong Yang},
year={2023}
}
```
|
bagassword21/myuta
|
bagassword21
| 2023-06-16T14:03:21Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-16T14:02:42Z |
---
license: creativeml-openrail-m
---
|
claraldk01/my_awesome_qa_model
|
claraldk01
| 2023-06-16T14:01:09Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-16T13:52:25Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5851
## Model description
More information needed
## Intended uses & limitations
More information needed
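Pending further documentation, a minimal question-answering sketch on a SQuAD-style example:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="claraldk01/my_awesome_qa_model")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="my_awesome_qa_model is a distilbert-base-uncased checkpoint fine-tuned on the SQuAD dataset for three epochs.",
)
print(result)
```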
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 1.9985 |
| 2.6102 | 2.0 | 500 | 1.6297 |
| 2.6102 | 3.0 | 750 | 1.5851 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
sam34738/Indic
|
sam34738
| 2023-06-16T13:58:45Z | 164 | 0 |
transformers
|
[
"transformers",
"pytorch",
"albert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-16T13:49:22Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: Indic
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Indic
This model is a fine-tuned version of [ai4bharat/indic-bert](https://huggingface.co/ai4bharat/indic-bert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9417
- Accuracy: 0.67
- F1: 0.6884
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-05
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7105 | 1.0 | 2100 | 0.7070 | 0.6689 | 0.6928 |
| 0.7755 | 2.0 | 4200 | 0.9417 | 0.67 | 0.6884 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
OpenBuddy/openbuddy-13b-v1.3-fp16
|
OpenBuddy
| 2023-06-16T13:57:30Z | 7 | 5 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"zh",
"en",
"fr",
"de",
"ja",
"ko",
"it",
"ru",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-05-23T16:48:50Z |
---
language:
- zh
- en
- fr
- de
- ja
- ko
- it
- ru
pipeline_tag: text-generation
inference: false
library_name: transformers
---
# ⚠️ DEPRECATION WARNING ⚠️
This model is an outdated version and has been preserved specifically for evaluating differences between model versions.
We highly recommend visiting our GitHub repository to find and use the latest version of the model: https://github.com/OpenBuddy/OpenBuddy
# OpenBuddy - Open Multilingual Chatbot based on LLaMA
GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy)
Website and Demo: [https://openbuddy.ai](https://openbuddy.ai)

## Installation
Due to licensing restrictions from LLAMA, you need to have the original LLAMA-7B model to decrypt the model weights.
To decrypt the model weights, please follow the guide in our GitHub: https://github.com/OpenBuddy/OpenBuddy#installation
## Disclaimer
OpenBuddy is provided as-is without any warranty of any kind, either express or implied. The authors and contributors shall not be held liable for any damages resulting from the use or inability to use this software. By using OpenBuddy, you agree to these terms and conditions.
## License Restrictions
OpenBuddy is intended for non-commercial research purposes only, following the same restrictions as the LLAMA model. Any use outside of this scope is strictly prohibited. For more information, please refer to the LLAMA license.
|
OpenBuddy/openbuddy-7b-v1.3-bf16
|
OpenBuddy
| 2023-06-16T13:57:13Z | 9 | 5 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"zh",
"en",
"fr",
"de",
"ja",
"ko",
"it",
"ru",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-05-17T13:25:45Z |
---
language:
- zh
- en
- fr
- de
- ja
- ko
- it
- ru
pipeline_tag: text-generation
inference: false
library_name: transformers
---
# ⚠️ DEPRECATION WARNING ⚠️
This model is an outdated version and has been preserved specifically for evaluating differences between model versions.
We highly recommend visiting our GitHub repository to find and use the latest version of the model: https://github.com/OpenBuddy/OpenBuddy
# OpenBuddy - Open Multilingual Chatbot based on LLaMA
GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy)
Website and Demo: [https://openbuddy.ai](https://openbuddy.ai)

## Installation
Due to licensing restrictions from LLAMA, you need to have the original LLAMA-7B model to decrypt the model weights.
To decrypt the model weights, please follow the guide in our GitHub: https://github.com/OpenBuddy/OpenBuddy#installation
## Disclaimer
OpenBuddy is provided as-is without any warranty of any kind, either express or implied. The authors and contributors shall not be held liable for any damages resulting from the use or inability to use this software. By using OpenBuddy, you agree to these terms and conditions.
## License Restrictions
OpenBuddy is intended for non-commercial research purposes only, following the same restrictions as the LLAMA model. Any use outside of this scope is strictly prohibited. For more information, please refer to the LLAMA license.
|
| studio-ousia/mluke-large | studio-ousia | 2023-06-16T13:55:50Z | 178 | 1 | transformers | ["transformers", "pytorch", "luke", "fill-mask", "named entity recognition", "relation classification", "question answering", "multilingual", "ar", "bn", "de", "el", "en", "es", "fi", "fr", "hi", "id", "it", "ja", "ko", "nl", "pl", "pt", "ru", "sv", "sw", "te", "th", "tr", "vi", "zh", "arxiv:2010.01057", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2022-03-02T23:29:05Z |
---
language:
- multilingual
- ar
- bn
- de
- el
- en
- es
- fi
- fr
- hi
- id
- it
- ja
- ko
- nl
- pl
- pt
- ru
- sv
- sw
- te
- th
- tr
- vi
- zh
thumbnail: https://github.com/studio-ousia/luke/raw/master/resources/luke_logo.png
tags:
- luke
- named entity recognition
- relation classification
- question answering
license: apache-2.0
---
## mLUKE
**mLUKE** (multilingual LUKE) is a multilingual extension of LUKE.
Please check the [official repository](https://github.com/studio-ousia/luke) for
more details and updates.
This is the mLUKE large model with 24 hidden layers and a hidden size of 1024. The total number
of parameters in this model is 868M (561M for the word embeddings and encoder, 307M for the entity embeddings).
The model was initialized with the weights of XLM-RoBERTa (large) and trained using the December 2020 version of Wikipedia in 24 languages.
## Note
When you load the model from `AutoModel.from_pretrained` with the default configuration, you will see the following warning:
```
Some weights of the model checkpoint at studio-ousia/mluke-base-lite were not used when initializing LukeModel: [
'luke.encoder.layer.0.attention.self.w2e_query.weight', 'luke.encoder.layer.0.attention.self.w2e_query.bias',
'luke.encoder.layer.0.attention.self.e2w_query.weight', 'luke.encoder.layer.0.attention.self.e2w_query.bias',
'luke.encoder.layer.0.attention.self.e2e_query.weight', 'luke.encoder.layer.0.attention.self.e2e_query.bias',
...]
```
These are the weights for the entity-aware attention mechanism described in [the LUKE paper](https://arxiv.org/abs/2010.01057).
The warning is expected because `use_entity_aware_attention` is set to `false` by default; the checkpoint still ships these weights so that they can be loaded into the model if you enable `use_entity_aware_attention`.
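As a hedged illustration (assuming the standard `transformers` behaviour of forwarding configuration overrides through `from_pretrained`), the flag can be switched on at load time so that these weights are used rather than skipped:

```python
from transformers import AutoModel

# Default load: entity-aware attention is disabled and the extra *_query weights are skipped.
model = AutoModel.from_pretrained("studio-ousia/mluke-large")

# Enable entity-aware attention so the *_query weights are loaded and used.
model_eaa = AutoModel.from_pretrained(
    "studio-ousia/mluke-large",
    use_entity_aware_attention=True,
)
```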
### Citation
If you find mLUKE useful for your work, please cite the following paper:
```latex
@inproceedings{ri-etal-2022-mluke,
title = "m{LUKE}: {T}he Power of Entity Representations in Multilingual Pretrained Language Models",
author = "Ri, Ryokan and
Yamada, Ikuya and
Tsuruoka, Yoshimasa",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
year = "2022",
url = "https://aclanthology.org/2022.acl-long.505",
```
|
| holmoy/setfit-model-93 | holmoy | 2023-06-16T13:42:45Z | 3 | 0 | sentence-transformers | ["sentence-transformers", "pytorch", "bert", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us"] | text-classification | 2023-06-16T13:42:39Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# holmoy/setfit-model-93
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer (a hedged training sketch of these two steps follows this list).
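To make the two steps concrete, here is a hedged training sketch using the older `SetFitTrainer` API. The base Sentence Transformer, dataset, and hyperparameters below are placeholders, not the configuration actually used for this checkpoint.

```python
from datasets import load_dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# Placeholder few-shot dataset; the data used for holmoy/setfit-model-93 is not documented here.
train_ds = load_dataset("sst2", split="train").shuffle(seed=42).select(range(64))

# Placeholder base model (the default from the SetFit examples).
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    loss_class=CosineSimilarityLoss,  # step 1: contrastive fine-tuning objective
    num_iterations=20,                # contrastive pairs generated per example
    num_epochs=1,                     # epochs of embedding fine-tuning
    column_mapping={"sentence": "text", "label": "label"},
)
trainer.train()  # fine-tunes the body, then fits the classification head (step 2)
```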
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("holmoy/setfit-model-93")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|